May 13 23:57:26.086978 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 13 23:57:26.087019 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:26.087037 kernel: BIOS-provided physical RAM map:
May 13 23:57:26.087048 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 23:57:26.087058 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 13 23:57:26.087069 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 13 23:57:26.087082 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
May 13 23:57:26.087094 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
May 13 23:57:26.087108 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 13 23:57:26.087119 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 13 23:57:26.087130 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 13 23:57:26.087141 kernel: printk: bootconsole [earlyser0] enabled
May 13 23:57:26.087153 kernel: NX (Execute Disable) protection: active
May 13 23:57:26.087163 kernel: APIC: Static calls initialized
May 13 23:57:26.087181 kernel: efi: EFI v2.7 by Microsoft
May 13 23:57:26.087194 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018
May 13 23:57:26.087207 kernel: random: crng init done
May 13 23:57:26.087219 kernel: secureboot: Secure boot disabled
May 13 23:57:26.087230 kernel: SMBIOS 3.1.0 present.
May 13 23:57:26.087243 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
May 13 23:57:26.087255 kernel: Hypervisor detected: Microsoft Hyper-V
May 13 23:57:26.087268 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
May 13 23:57:26.087279 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
May 13 23:57:26.087291 kernel: Hyper-V: Nested features: 0x1e0101
May 13 23:57:26.087303 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 13 23:57:26.087318 kernel: Hyper-V: Using hypercall for remote TLB flush
May 13 23:57:26.087330 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 13 23:57:26.087343 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 13 23:57:26.087355 kernel: tsc: Marking TSC unstable due to running on Hyper-V
May 13 23:57:26.087367 kernel: tsc: Detected 2593.908 MHz processor
May 13 23:57:26.087379 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 23:57:26.087391 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 23:57:26.087403 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
May 13 23:57:26.087415 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 23:57:26.087430 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 23:57:26.087442 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
May 13 23:57:26.087454 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
May 13 23:57:26.087465 kernel: Using GB pages for direct mapping
May 13 23:57:26.087477 kernel: ACPI: Early table checksum verification disabled
May 13 23:57:26.087490 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 13 23:57:26.087507 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:57:26.091293 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:57:26.091308 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
May 13 23:57:26.091321 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 13 23:57:26.091334 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:57:26.091348 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:57:26.091361 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:57:26.091375 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:57:26.091393 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:57:26.091407 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:57:26.091420 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 13 23:57:26.091434 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 13 23:57:26.091447 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
May 13 23:57:26.091459 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 13 23:57:26.091470 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 13 23:57:26.091482 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 13 23:57:26.091499 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 13 23:57:26.091527 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
May 13 23:57:26.091541 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
May 13 23:57:26.091555 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 13 23:57:26.091568 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
May 13 23:57:26.091581 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 13 23:57:26.091592 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 13 23:57:26.091603 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 13 23:57:26.091617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
May 13 23:57:26.091634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
May 13 23:57:26.091646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 13 23:57:26.091657 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 13 23:57:26.091672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 13 23:57:26.091685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 13 23:57:26.091698 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 13 23:57:26.091711 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 13 23:57:26.091724 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 13 23:57:26.091737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 13 23:57:26.091753 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 13 23:57:26.091766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
May 13 23:57:26.091778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
May 13 23:57:26.091791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
May 13 23:57:26.091805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
May 13 23:57:26.091819 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
May 13 23:57:26.091832 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
May 13 23:57:26.091847 kernel: Zone ranges:
May 13 23:57:26.091861 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 23:57:26.091879 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 13 23:57:26.091893 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 13 23:57:26.091907 kernel: Movable zone start for each node
May 13 23:57:26.091921 kernel: Early memory node ranges
May 13 23:57:26.091935 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 13 23:57:26.091949 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 13 23:57:26.091964 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 13 23:57:26.091977 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 13 23:57:26.091992 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 13 23:57:26.092009 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 23:57:26.092023 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 13 23:57:26.092037 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
May 13 23:57:26.092051 kernel: ACPI: PM-Timer IO Port: 0x408
May 13 23:57:26.092065 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
May 13 23:57:26.092079 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
May 13 23:57:26.092093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 23:57:26.092107 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 23:57:26.092121 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 13 23:57:26.092138 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 13 23:57:26.092151 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 13 23:57:26.092165 kernel: Booting paravirtualized kernel on Hyper-V
May 13 23:57:26.092180 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 23:57:26.092193 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 13 23:57:26.092207 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 13 23:57:26.092222 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 13 23:57:26.092235 kernel: pcpu-alloc: [0] 0 1
May 13 23:57:26.092248 kernel: Hyper-V: PV spinlocks enabled
May 13 23:57:26.092265 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 23:57:26.092281 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:26.092296 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:57:26.092310 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 13 23:57:26.092324 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:57:26.092338 kernel: Fallback order for Node 0: 0
May 13 23:57:26.092352 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
May 13 23:57:26.092365 kernel: Policy zone: Normal
May 13 23:57:26.092396 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:57:26.092410 kernel: software IO TLB: area num 2.
May 13 23:57:26.092426 kernel: Memory: 8072992K/8387460K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 314212K reserved, 0K cma-reserved)
May 13 23:57:26.092443 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 23:57:26.092458 kernel: ftrace: allocating 37993 entries in 149 pages
May 13 23:57:26.092472 kernel: ftrace: allocated 149 pages with 4 groups
May 13 23:57:26.092486 kernel: Dynamic Preempt: voluntary
May 13 23:57:26.092501 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:57:26.092530 kernel: rcu: RCU event tracing is enabled.
May 13 23:57:26.092546 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 23:57:26.092565 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:57:26.092580 kernel: Rude variant of Tasks RCU enabled.
May 13 23:57:26.092595 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:57:26.092610 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:57:26.092625 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 23:57:26.092639 kernel: Using NULL legacy PIC
May 13 23:57:26.092657 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 13 23:57:26.092672 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:57:26.092686 kernel: Console: colour dummy device 80x25
May 13 23:57:26.092700 kernel: printk: console [tty1] enabled
May 13 23:57:26.092714 kernel: printk: console [ttyS0] enabled
May 13 23:57:26.092728 kernel: printk: bootconsole [earlyser0] disabled
May 13 23:57:26.092743 kernel: ACPI: Core revision 20230628
May 13 23:57:26.092757 kernel: Failed to register legacy timer interrupt
May 13 23:57:26.092769 kernel: APIC: Switch to symmetric I/O mode setup
May 13 23:57:26.092783 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 13 23:57:26.092800 kernel: Hyper-V: Using IPI hypercalls
May 13 23:57:26.092813 kernel: APIC: send_IPI() replaced with hv_send_ipi()
May 13 23:57:26.092826 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
May 13 23:57:26.092840 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
May 13 23:57:26.092854 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
May 13 23:57:26.092868 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
May 13 23:57:26.092882 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
May 13 23:57:26.092896 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908)
May 13 23:57:26.092912 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 13 23:57:26.092926 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 13 23:57:26.092939 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 23:57:26.092953 kernel: Spectre V2 : Mitigation: Retpolines
May 13 23:57:26.092966 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 23:57:26.092979 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 13 23:57:26.092993 kernel: RETBleed: Vulnerable
May 13 23:57:26.093007 kernel: Speculative Store Bypass: Vulnerable
May 13 23:57:26.093022 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
May 13 23:57:26.093037 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 13 23:57:26.093053 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 23:57:26.093072 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 23:57:26.093087 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 23:57:26.093102 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 13 23:57:26.093118 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 13 23:57:26.093133 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 13 23:57:26.093148 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 23:57:26.093163 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 13 23:57:26.093178 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 13 23:57:26.093193 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 13 23:57:26.093208 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
May 13 23:57:26.093223 kernel: Freeing SMP alternatives memory: 32K
May 13 23:57:26.093241 kernel: pid_max: default: 32768 minimum: 301
May 13 23:57:26.093256 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:57:26.093271 kernel: landlock: Up and running.
May 13 23:57:26.093286 kernel: SELinux: Initializing.
May 13 23:57:26.093302 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 13 23:57:26.093318 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 13 23:57:26.093333 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 13 23:57:26.093347 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:57:26.093361 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:57:26.093376 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 23:57:26.093394 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 13 23:57:26.093408 kernel: signal: max sigframe size: 3632
May 13 23:57:26.093421 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:57:26.093434 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:57:26.093448 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 13 23:57:26.093462 kernel: smp: Bringing up secondary CPUs ...
May 13 23:57:26.093475 kernel: smpboot: x86: Booting SMP configuration:
May 13 23:57:26.093488 kernel: .... node #0, CPUs: #1
May 13 23:57:26.093504 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
May 13 23:57:26.096412 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 13 23:57:26.096429 kernel: smp: Brought up 1 node, 2 CPUs
May 13 23:57:26.096444 kernel: smpboot: Max logical packages: 1
May 13 23:57:26.096459 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
May 13 23:57:26.096474 kernel: devtmpfs: initialized
May 13 23:57:26.096489 kernel: x86/mm: Memory block size: 128MB
May 13 23:57:26.096504 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 13 23:57:26.096537 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:57:26.096553 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 23:57:26.096574 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:57:26.096590 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:57:26.096604 kernel: audit: initializing netlink subsys (disabled)
May 13 23:57:26.096618 kernel: audit: type=2000 audit(1747180645.027:1): state=initialized audit_enabled=0 res=1
May 13 23:57:26.096632 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:57:26.096646 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 23:57:26.096659 kernel: cpuidle: using governor menu
May 13 23:57:26.096672 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:57:26.096685 kernel: dca service started, version 1.12.1
May 13 23:57:26.096701 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
May 13 23:57:26.096714 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 23:57:26.096728 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:57:26.096741 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:57:26.096754 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:57:26.096768 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:57:26.096781 kernel: ACPI: Added _OSI(Module Device)
May 13 23:57:26.096794 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:57:26.096810 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:57:26.096823 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:57:26.096836 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:57:26.096849 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 23:57:26.096862 kernel: ACPI: Interpreter enabled
May 13 23:57:26.096875 kernel: ACPI: PM: (supports S0 S5)
May 13 23:57:26.096888 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 23:57:26.096902 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 23:57:26.096915 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 13 23:57:26.096932 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 13 23:57:26.096946 kernel: iommu: Default domain type: Translated
May 13 23:57:26.096960 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 23:57:26.096973 kernel: efivars: Registered efivars operations
May 13 23:57:26.096987 kernel: PCI: Using ACPI for IRQ routing
May 13 23:57:26.097001 kernel: PCI: System does not support PCI
May 13 23:57:26.097013 kernel: vgaarb: loaded
May 13 23:57:26.097027 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
May 13 23:57:26.097041 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:57:26.097054 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:57:26.097071 kernel: pnp: PnP ACPI init
May 13 23:57:26.097085 kernel: pnp: PnP ACPI: found 3 devices
May 13 23:57:26.097099 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 23:57:26.097113 kernel: NET: Registered PF_INET protocol family
May 13 23:57:26.097126 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 13 23:57:26.097140 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 13 23:57:26.097154 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:57:26.097168 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:57:26.097185 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 13 23:57:26.097199 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 13 23:57:26.097213 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 13 23:57:26.097227 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 13 23:57:26.097241 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:57:26.097254 kernel: NET: Registered PF_XDP protocol family
May 13 23:57:26.097268 kernel: PCI: CLS 0 bytes, default 64
May 13 23:57:26.097283 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 13 23:57:26.097296 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
May 13 23:57:26.097314 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 13 23:57:26.097327 kernel: Initialise system trusted keyrings
May 13 23:57:26.097341 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 13 23:57:26.097354 kernel: Key type asymmetric registered
May 13 23:57:26.097368 kernel: Asymmetric key parser 'x509' registered
May 13 23:57:26.097382 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 23:57:26.097396 kernel: io scheduler mq-deadline registered
May 13 23:57:26.097410 kernel: io scheduler kyber registered
May 13 23:57:26.097424 kernel: io scheduler bfq registered
May 13 23:57:26.097438 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 23:57:26.097454 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:57:26.097469 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 23:57:26.097483 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 13 23:57:26.097496 kernel: i8042: PNP: No PS/2 controller found.
May 13 23:57:26.097715 kernel: rtc_cmos 00:02: registered as rtc0
May 13 23:57:26.097850 kernel: rtc_cmos 00:02: setting system clock to 2025-05-13T23:57:25 UTC (1747180645)
May 13 23:57:26.097970 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 13 23:57:26.097993 kernel: intel_pstate: CPU model not supported
May 13 23:57:26.098008 kernel: efifb: probing for efifb
May 13 23:57:26.098022 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 13 23:57:26.098036 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 13 23:57:26.098050 kernel: efifb: scrolling: redraw
May 13 23:57:26.098064 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 13 23:57:26.098077 kernel: Console: switching to colour frame buffer device 128x48
May 13 23:57:26.098092 kernel: fb0: EFI VGA frame buffer device
May 13 23:57:26.098107 kernel: pstore: Using crash dump compression: deflate
May 13 23:57:26.098125 kernel: pstore: Registered efi_pstore as persistent store backend
May 13 23:57:26.098139 kernel: NET: Registered PF_INET6 protocol family
May 13 23:57:26.098154 kernel: Segment Routing with IPv6
May 13 23:57:26.098166 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:57:26.098179 kernel: NET: Registered PF_PACKET protocol family
May 13 23:57:26.098191 kernel: Key type dns_resolver registered
May 13 23:57:26.098203 kernel: IPI shorthand broadcast: enabled
May 13 23:57:26.098217 kernel: sched_clock: Marking stable (817003200, 45254400)->(1064366600, -202109000)
May 13 23:57:26.098231 kernel: registered taskstats version 1
May 13 23:57:26.098249 kernel: Loading compiled-in X.509 certificates
May 13 23:57:26.098262 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94'
May 13 23:57:26.098276 kernel: Key type .fscrypt registered
May 13 23:57:26.098290 kernel: Key type fscrypt-provisioning registered
May 13 23:57:26.098306 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:57:26.098320 kernel: ima: Allocated hash algorithm: sha1
May 13 23:57:26.098333 kernel: ima: No architecture policies found
May 13 23:57:26.098346 kernel: clk: Disabling unused clocks
May 13 23:57:26.098361 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 13 23:57:26.098376 kernel: Write protecting the kernel read-only data: 40960k
May 13 23:57:26.098389 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 13 23:57:26.098402 kernel: Run /init as init process
May 13 23:57:26.098416 kernel: with arguments:
May 13 23:57:26.098429 kernel: /init
May 13 23:57:26.098442 kernel: with environment:
May 13 23:57:26.098455 kernel: HOME=/
May 13 23:57:26.098468 kernel: TERM=linux
May 13 23:57:26.098480 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:57:26.098499 systemd[1]: Successfully made /usr/ read-only.
May 13 23:57:26.104034 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:57:26.104057 systemd[1]: Detected virtualization microsoft.
May 13 23:57:26.104074 systemd[1]: Detected architecture x86-64.
May 13 23:57:26.104089 systemd[1]: Running in initrd.
May 13 23:57:26.104104 systemd[1]: No hostname configured, using default hostname.
May 13 23:57:26.104121 systemd[1]: Hostname set to .
May 13 23:57:26.104142 systemd[1]: Initializing machine ID from random generator.
May 13 23:57:26.104158 systemd[1]: Queued start job for default target initrd.target.
May 13 23:57:26.104173 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:57:26.104189 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:57:26.104206 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:57:26.104223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:57:26.104238 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:57:26.104259 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:57:26.104276 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:57:26.104292 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:57:26.104308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:57:26.104323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:57:26.104339 systemd[1]: Reached target paths.target - Path Units.
May 13 23:57:26.104355 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:57:26.104370 systemd[1]: Reached target swap.target - Swaps.
May 13 23:57:26.104389 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:57:26.104405 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:57:26.104420 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:57:26.104437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:57:26.104453 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:57:26.104468 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:57:26.104484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:57:26.104500 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:57:26.104541 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:57:26.104560 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:57:26.104575 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:57:26.104590 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:57:26.104607 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:57:26.104623 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:57:26.104639 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:57:26.104655 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:57:26.104670 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:57:26.104715 systemd-journald[177]: Collecting audit messages is disabled.
May 13 23:57:26.104755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:57:26.104775 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:57:26.104792 systemd-journald[177]: Journal started
May 13 23:57:26.104830 systemd-journald[177]: Runtime Journal (/run/log/journal/a9b542ca8154495ca8a6e978869abcab) is 8M, max 158.7M, 150.7M free.
May 13 23:57:26.085436 systemd-modules-load[179]: Inserted module 'overlay'
May 13 23:57:26.113531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:57:26.125601 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:57:26.129242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:26.140803 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:57:26.140836 kernel: Bridge firewalling registered
May 13 23:57:26.140732 systemd-modules-load[179]: Inserted module 'br_netfilter'
May 13 23:57:26.141568 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:57:26.145788 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:57:26.157291 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:57:26.163629 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:57:26.179192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:57:26.181105 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:57:26.201088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:57:26.207861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:57:26.210676 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:57:26.214631 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:57:26.230916 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:26.235885 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:57:26.267190 dracut-cmdline[213]: dracut-dracut-053
May 13 23:57:26.271665 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 13 23:57:26.276060 systemd-resolved[210]: Positive Trust Anchors:
May 13 23:57:26.276070 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:57:26.276110 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:57:26.278828 systemd-resolved[210]: Defaulting to hostname 'linux'.
May 13 23:57:26.280135 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:57:26.288857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:57:26.357539 kernel: SCSI subsystem initialized
May 13 23:57:26.368534 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:57:26.379540 kernel: iscsi: registered transport (tcp)
May 13 23:57:26.401383 kernel: iscsi: registered transport (qla4xxx)
May 13 23:57:26.401472 kernel: QLogic iSCSI HBA Driver
May 13 23:57:26.437839 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:57:26.445707 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:57:26.478608 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:57:26.478715 kernel: device-mapper: uevent: version 1.0.3
May 13 23:57:26.481930 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:57:26.522542 kernel: raid6: avx512x4 gen() 18159 MB/s
May 13 23:57:26.544548 kernel: raid6: avx512x2 gen() 18200 MB/s
May 13 23:57:26.562526 kernel: raid6: avx512x1 gen() 18077 MB/s
May 13 23:57:26.581528 kernel: raid6: avx2x4 gen() 18106 MB/s
May 13 23:57:26.600529 kernel: raid6: avx2x2 gen() 18193 MB/s
May 13 23:57:26.620351 kernel: raid6: avx2x1 gen() 13513 MB/s
May 13 23:57:26.620390 kernel: raid6: using algorithm avx512x2 gen() 18200 MB/s
May 13 23:57:26.643477 kernel: raid6: .... xor() 30500 MB/s, rmw enabled
May 13 23:57:26.643532 kernel: raid6: using avx512x2 recovery algorithm
May 13 23:57:26.666545 kernel: xor: automatically using best checksumming function avx
May 13 23:57:26.807539 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:57:26.816623 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:57:26.822217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:57:26.842459 systemd-udevd[395]: Using default interface naming scheme 'v255'.
May 13 23:57:26.847475 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:57:26.859107 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:57:26.879381 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation May 13 23:57:26.905182 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:57:26.909638 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:57:26.959660 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:57:26.968073 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:57:27.001037 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:57:27.008126 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:57:27.015434 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:57:27.023371 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:57:27.034687 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:57:27.044535 kernel: cryptd: max_cpu_qlen set to 1000 May 13 23:57:27.071826 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:57:27.108254 kernel: AVX2 version of gcm_enc/dec engaged. May 13 23:57:27.108316 kernel: AES CTR mode by8 optimization enabled May 13 23:57:27.092984 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:57:27.093274 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:57:27.114980 kernel: hv_vmbus: Vmbus version:5.2 May 13 23:57:27.097170 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:57:27.099974 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:57:27.100230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:27.103250 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 13 23:57:27.130934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:57:27.137660 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:57:27.150888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:57:27.152042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:27.160098 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:57:27.166607 kernel: hv_vmbus: registering driver hyperv_keyboard May 13 23:57:27.166457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:57:27.180028 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 23:57:27.180080 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 23:57:27.186057 kernel: hv_vmbus: registering driver hv_netvsc May 13 23:57:27.198102 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 13 23:57:27.212574 kernel: hv_vmbus: registering driver hv_storvsc May 13 23:57:27.212633 kernel: PTP clock support registered May 13 23:57:27.224525 kernel: scsi host0: storvsc_host_t May 13 23:57:27.224745 kernel: scsi host1: storvsc_host_t May 13 23:57:27.224772 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:57:27.224787 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 13 23:57:27.231618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:27.241566 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 13 23:57:27.244310 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 13 23:57:27.265545 kernel: hv_utils: Registering HyperV Utility Driver May 13 23:57:27.265626 kernel: hv_vmbus: registering driver hv_utils May 13 23:57:27.273773 kernel: hv_utils: Heartbeat IC version 3.0 May 13 23:57:27.273832 kernel: hv_utils: Shutdown IC version 3.2 May 13 23:57:27.273851 kernel: hv_utils: TimeSync IC version 4.0 May 13 23:57:27.798338 kernel: hv_vmbus: registering driver hid_hyperv May 13 23:57:27.798399 systemd-resolved[210]: Clock change detected. Flushing caches. May 13 23:57:27.830588 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 13 23:57:27.830620 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 13 23:57:27.830801 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 13 23:57:27.830960 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 23:57:27.830977 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 13 23:57:27.833391 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 13 23:57:27.853038 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 13 23:57:27.853319 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 13 23:57:27.859556 kernel: sd 0:0:0:0: [sda] Write Protect is off May 13 23:57:27.859828 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 13 23:57:27.860029 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 13 23:57:27.867961 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 23:57:27.868015 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 13 23:57:27.904045 kernel: hv_netvsc 6045bddd-a4ab-6045-bddd-a4ab6045bddd eth0: VF slot 1 added May 13 23:57:27.912082 kernel: hv_vmbus: registering driver hv_pci May 13 23:57:27.916820 kernel: hv_pci 7bb627a7-137b-4a0c-9a89-ba7d2a8f319e: PCI VMBus probing: Using version 0x10004 May 13 23:57:27.917050 kernel: hv_pci 7bb627a7-137b-4a0c-9a89-ba7d2a8f319e: PCI host bridge to bus 137b:00 May 13 23:57:27.922404 kernel: pci_bus 137b:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] May 13 23:57:27.925510 kernel: pci_bus 137b:00: No busn resource found for root bus, will use [bus 00-ff] May 13 23:57:27.930416 kernel: pci 137b:00:02.0: [15b3:1016] type 00 class 0x020000 May 13 23:57:27.938465 kernel: pci 137b:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] May 13 23:57:27.942156 kernel: pci 137b:00:02.0: enabling Extended Tags May 13 23:57:27.954136 kernel: pci 137b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 137b:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 13 23:57:27.959862 kernel: pci_bus 137b:00: busn_res: [bus 00-ff] end is updated to 00 May 13 23:57:27.960197 kernel: pci 137b:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] May 13 23:57:28.204084 kernel: mlx5_core 137b:00:02.0: enabling device (0000 -> 0002) May 13 23:57:28.208100 kernel: mlx5_core 137b:00:02.0: firmware version: 14.30.5000 May 13 23:57:28.421719 kernel: hv_netvsc 6045bddd-a4ab-6045-bddd-a4ab6045bddd eth0: VF registering: eth1
May 13 23:57:28.421938 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (440) May 13 23:57:28.421952 kernel: mlx5_core 137b:00:02.0 eth1: joined to eth0 May 13 23:57:28.430097 kernel: mlx5_core 137b:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 13 23:57:28.443085 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (453) May 13 23:57:28.455838 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 13 23:57:28.462163 kernel: mlx5_core 137b:00:02.0 enP4987s1: renamed from eth1 May 13 23:57:28.487194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 13 23:57:28.517061 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 13 23:57:28.520502 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 13 23:57:28.538051 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 13 23:57:28.543193 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 23:57:28.575786 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 23:57:28.583100 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 23:57:29.591171 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 23:57:29.592102 disk-uuid[603]: The operation has completed successfully. May 13 23:57:29.677041 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:57:29.677163 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:57:29.721537 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:57:29.742843 sh[689]: Success May 13 23:57:29.780642 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 13 23:57:29.991259 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:57:29.996564 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:57:30.006004 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:57:30.021087 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 May 13 23:57:30.021128 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 23:57:30.025897 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:57:30.028742 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:57:30.031244 kernel: BTRFS info (device dm-0): using free space tree May 13 23:57:30.353040 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:57:30.358474 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:57:30.364285 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:57:30.370327 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:57:30.401748 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:57:30.401823 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:57:30.406607 kernel: BTRFS info (device sda6): using free space tree May 13 23:57:30.426099 kernel: BTRFS info (device sda6): auto enabling async discard May 13 23:57:30.433091 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:57:30.437135 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 13 23:57:30.445231 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:57:30.470618 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:57:30.474201 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:57:30.509043 systemd-networkd[870]: lo: Link UP May 13 23:57:30.509054 systemd-networkd[870]: lo: Gained carrier May 13 23:57:30.511323 systemd-networkd[870]: Enumeration completed May 13 23:57:30.511424 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:57:30.513934 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:57:30.513939 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:57:30.516546 systemd[1]: Reached target network.target - Network. May 13 23:57:30.584094 kernel: mlx5_core 137b:00:02.0 enP4987s1: Link up May 13 23:57:30.616115 kernel: hv_netvsc 6045bddd-a4ab-6045-bddd-a4ab6045bddd eth0: Data path switched to VF: enP4987s1 May 13 23:57:30.616453 systemd-networkd[870]: enP4987s1: Link UP May 13 23:57:30.616575 systemd-networkd[870]: eth0: Link UP May 13 23:57:30.616720 systemd-networkd[870]: eth0: Gained carrier May 13 23:57:30.616734 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 13 23:57:30.631346 systemd-networkd[870]: enP4987s1: Gained carrier May 13 23:57:30.661133 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 13 23:57:31.257486 ignition[841]: Ignition 2.20.0 May 13 23:57:31.257498 ignition[841]: Stage: fetch-offline May 13 23:57:31.257535 ignition[841]: no configs at "/usr/lib/ignition/base.d" May 13 23:57:31.260347 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:57:31.257547 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:57:31.267198 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 13 23:57:31.257654 ignition[841]: parsed url from cmdline: "" May 13 23:57:31.257659 ignition[841]: no config URL provided May 13 23:57:31.257667 ignition[841]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:57:31.257677 ignition[841]: no config at "/usr/lib/ignition/user.ign" May 13 23:57:31.257684 ignition[841]: failed to fetch config: resource requires networking May 13 23:57:31.257918 ignition[841]: Ignition finished successfully May 13 23:57:31.289842 ignition[882]: Ignition 2.20.0 May 13 23:57:31.289853 ignition[882]: Stage: fetch May 13 23:57:31.290064 ignition[882]: no configs at "/usr/lib/ignition/base.d" May 13 23:57:31.290098 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:57:31.290201 ignition[882]: parsed url from cmdline: "" May 13 23:57:31.290204 ignition[882]: no config URL provided May 13 23:57:31.290208 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:57:31.290217 ignition[882]: no config at "/usr/lib/ignition/user.ign" May 13 23:57:31.290245 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 13 23:57:31.368729 ignition[882]: GET result: OK May 13 23:57:31.368932 ignition[882]: config has been read from IMDS userdata
May 13 23:57:31.368966 ignition[882]: parsing config with SHA512: d9fbd2e4a01f4378dafa49b5720dbc1b864e90bc95b3dc25f2942363e7af0e5e812c142420f8133a79afcf3bb5fd303852a622b9054806821314f21da26a7a20 May 13 23:57:31.374629 unknown[882]: fetched base config from "system" May 13 23:57:31.374643 unknown[882]: fetched base config from "system" May 13 23:57:31.375174 ignition[882]: fetch: fetch complete May 13 23:57:31.374653 unknown[882]: fetched user config from "azure" May 13 23:57:31.375179 ignition[882]: fetch: fetch passed May 13 23:57:31.376933 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 13 23:57:31.375228 ignition[882]: Ignition finished successfully May 13 23:57:31.388266 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:57:31.412166 ignition[889]: Ignition 2.20.0 May 13 23:57:31.412177 ignition[889]: Stage: kargs May 13 23:57:31.412400 ignition[889]: no configs at "/usr/lib/ignition/base.d" May 13 23:57:31.412413 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:57:31.413312 ignition[889]: kargs: kargs passed May 13 23:57:31.413358 ignition[889]: Ignition finished successfully May 13 23:57:31.420276 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:57:31.426214 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 23:57:31.454512 ignition[895]: Ignition 2.20.0 May 13 23:57:31.454524 ignition[895]: Stage: disks May 13 23:57:31.456481 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:57:31.454744 ignition[895]: no configs at "/usr/lib/ignition/base.d" May 13 23:57:31.460202 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:57:31.454757 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:57:31.464311 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:57:31.455616 ignition[895]: disks: disks passed May 13 23:57:31.469677 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:57:31.455660 ignition[895]: Ignition finished successfully May 13 23:57:31.472143 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:57:31.476869 systemd[1]: Reached target basic.target - Basic System. May 13 23:57:31.482195 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:57:31.550230 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks May 13 23:57:31.555534 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:57:31.563193 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:57:31.660109 kernel: EXT4-fs (sda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 13 23:57:31.660830 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:57:31.663341 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:57:31.695030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:57:31.700179 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:57:31.710884 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 13 23:57:31.719819 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:57:31.728546 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (914) May 13 23:57:31.720233 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:57:31.734463 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
May 13 23:57:31.741621 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:57:31.741646 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:57:31.741657 kernel: BTRFS info (device sda6): using free space tree May 13 23:57:31.747969 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:57:31.756098 kernel: BTRFS info (device sda6): auto enabling async discard May 13 23:57:31.757355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:57:31.772176 systemd-networkd[870]: eth0: Gained IPv6LL May 13 23:57:32.092312 systemd-networkd[870]: enP4987s1: Gained IPv6LL May 13 23:57:32.370357 coreos-metadata[916]: May 13 23:57:32.370 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 13 23:57:32.376886 coreos-metadata[916]: May 13 23:57:32.376 INFO Fetch successful May 13 23:57:32.381290 coreos-metadata[916]: May 13 23:57:32.377 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 13 23:57:32.390349 coreos-metadata[916]: May 13 23:57:32.390 INFO Fetch successful May 13 23:57:32.392929 coreos-metadata[916]: May 13 23:57:32.392 INFO wrote hostname ci-4284.0.0-n-84802b4006 to /sysroot/etc/hostname May 13 23:57:32.394846 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 13 23:57:32.416151 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:57:32.437704 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory May 13 23:57:32.456778 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:57:32.475186 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:57:33.073837 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
May 13 23:57:33.079666 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:57:33.088212 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:57:33.094911 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:57:33.100247 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:57:33.129799 ignition[1033]: INFO : Ignition 2.20.0 May 13 23:57:33.129799 ignition[1033]: INFO : Stage: mount May 13 23:57:33.136128 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:57:33.136128 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:57:33.135765 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:57:33.141184 ignition[1033]: INFO : mount: mount passed May 13 23:57:33.141184 ignition[1033]: INFO : Ignition finished successfully May 13 23:57:33.149259 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:57:33.155832 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:57:33.169627 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:57:33.188091 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1044) May 13 23:57:33.188145 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:57:33.192084 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:57:33.196090 kernel: BTRFS info (device sda6): using free space tree May 13 23:57:33.203100 kernel: BTRFS info (device sda6): auto enabling async discard May 13 23:57:33.204810 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:57:33.230253 ignition[1061]: INFO : Ignition 2.20.0 May 13 23:57:33.230253 ignition[1061]: INFO : Stage: files May 13 23:57:33.233946 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:57:33.233946 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:57:33.233946 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping May 13 23:57:33.247579 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:57:33.247579 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:57:33.305126 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:57:33.309265 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:57:33.309265 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:57:33.305761 unknown[1061]: wrote ssh authorized keys file for user: core May 13 23:57:33.334041 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:57:33.339048 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 23:57:33.393619 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:57:33.589387 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:57:33.589387 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:57:33.599401 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 13 23:57:34.201272 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:57:35.174571 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:57:35.174571 ignition[1061]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:57:35.188722 ignition[1061]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:57:35.193515 ignition[1061]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:57:35.193515 ignition[1061]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:57:35.193515 ignition[1061]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 13 23:57:35.206794 ignition[1061]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:57:35.206794 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:57:35.206794 ignition[1061]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:57:35.206794 ignition[1061]: INFO : files: files passed May 13 23:57:35.206794 ignition[1061]: INFO : Ignition finished successfully May 13 23:57:35.195062 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:57:35.204217 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:57:35.227105 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:57:35.233835 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:57:35.235674 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:57:35.258090 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:57:35.258090 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:57:35.267704 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:57:35.262196 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:57:35.271146 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:57:35.279191 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:57:35.313946 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:57:35.314059 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:57:35.319852 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:57:35.324829 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:57:35.329561 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:57:35.336175 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:57:35.354213 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:57:35.359198 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:57:35.376785 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:57:35.378079 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 13 23:57:35.378462 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:57:35.378842 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:57:35.378959 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:57:35.380038 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:57:35.380869 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:57:35.381270 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:57:35.381651 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:57:35.382055 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:57:35.382509 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:57:35.382887 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:57:35.383297 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:57:35.383686 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:57:35.384083 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:57:35.384441 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:57:35.384574 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:57:35.385261 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:57:35.385783 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:57:35.386193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:57:35.419647 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:57:35.427155 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:57:35.429330 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:57:35.478796 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:57:35.479023 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:57:35.487534 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:57:35.487695 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:57:35.493054 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 13 23:57:35.493212 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 13 23:57:35.500281 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:57:35.510158 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:57:35.515320 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:57:35.515513 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:57:35.524651 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:57:35.524811 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:57:35.538341 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:57:35.539399 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:57:35.543372 ignition[1114]: INFO : Ignition 2.20.0
May 13 23:57:35.543372 ignition[1114]: INFO : Stage: umount
May 13 23:57:35.543372 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:57:35.543372 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 13 23:57:35.543372 ignition[1114]: INFO : umount: umount passed
May 13 23:57:35.543372 ignition[1114]: INFO : Ignition finished successfully
May 13 23:57:35.548184 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:57:35.549773 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:57:35.551368 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:57:35.551481 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:57:35.552553 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:57:35.552603 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:57:35.565285 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 23:57:35.565343 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 13 23:57:35.572018 systemd[1]: Stopped target network.target - Network.
May 13 23:57:35.577127 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:57:35.577188 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:57:35.583567 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:57:35.587795 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:57:35.592027 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:57:35.599333 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:57:35.603863 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:57:35.608619 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:57:35.610849 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:57:35.626316 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:57:35.626381 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:57:35.633301 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:57:35.635526 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:57:35.640277 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:57:35.640354 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:57:35.645493 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:57:35.650381 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:57:35.656547 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:57:35.657153 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:57:35.657247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:57:35.660754 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:57:35.660863 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:57:35.675202 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:57:35.675325 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:57:35.681249 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:57:35.681794 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:57:35.681871 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:57:35.686277 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:57:35.690435 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:57:35.690498 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:57:35.693831 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:57:35.700543 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:57:35.700630 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:57:35.715110 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:57:35.718265 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:57:35.718394 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:57:35.739308 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:57:35.739380 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:57:35.744048 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:57:35.748713 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:57:35.751767 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:57:35.751821 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:57:35.761403 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:57:35.761473 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:57:35.765969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:57:35.766018 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:57:35.791333 kernel: hv_netvsc 6045bddd-a4ab-6045-bddd-a4ab6045bddd eth0: Data path switched from VF: enP4987s1
May 13 23:57:35.773193 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:57:35.779111 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:57:35.779184 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:57:35.781903 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:57:35.781958 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:57:35.785303 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:57:35.785360 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:57:35.791445 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 23:57:35.791503 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:57:35.794515 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:57:35.794561 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:57:35.800522 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:57:35.800577 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:57:35.805720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:57:35.805807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:57:35.844467 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:57:35.844550 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:57:35.844605 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:57:35.844642 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:57:35.845025 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:57:35.845162 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:57:35.862497 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:57:35.862607 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:57:35.868295 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:57:35.871969 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:57:35.909909 systemd[1]: Switching root.
May 13 23:57:36.012423 systemd-journald[177]: Journal stopped
May 13 23:57:41.077518 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
May 13 23:57:41.077557 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:57:41.077579 kernel: SELinux: policy capability open_perms=1
May 13 23:57:41.077598 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:57:41.077610 kernel: SELinux: policy capability always_check_network=0
May 13 23:57:41.077618 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:57:41.077633 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:57:41.077650 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:57:41.077667 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:57:41.077676 kernel: audit: type=1403 audit(1747180657.419:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:57:41.077692 systemd[1]: Successfully loaded SELinux policy in 128.789ms.
May 13 23:57:41.077711 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.970ms.
May 13 23:57:41.077733 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:57:41.077744 systemd[1]: Detected virtualization microsoft.
May 13 23:57:41.077762 systemd[1]: Detected architecture x86-64.
May 13 23:57:41.077785 systemd[1]: Detected first boot.
May 13 23:57:41.077808 systemd[1]: Hostname set to .
May 13 23:57:41.077819 systemd[1]: Initializing machine ID from random generator.
May 13 23:57:41.077830 zram_generator::config[1159]: No configuration found.
May 13 23:57:41.077855 kernel: Guest personality initialized and is inactive
May 13 23:57:41.077867 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
May 13 23:57:41.077876 kernel: Initialized host personality
May 13 23:57:41.077893 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:57:41.077910 systemd[1]: Populated /etc with preset unit settings.
May 13 23:57:41.077920 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:57:41.077936 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:57:41.077954 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:57:41.077964 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:57:41.077981 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:57:41.078002 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:57:41.078025 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:57:41.078039 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:57:41.078049 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:57:41.078086 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:57:41.078100 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:57:41.078121 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:57:41.078145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:57:41.078157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:57:41.078166 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:57:41.078176 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:57:41.078190 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:57:41.078213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:57:41.078235 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 23:57:41.078254 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:57:41.078264 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:57:41.078277 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:57:41.078300 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:57:41.078314 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:57:41.078325 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:57:41.078345 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:57:41.078386 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:57:41.078404 systemd[1]: Reached target swap.target - Swaps.
May 13 23:57:41.078427 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:57:41.078441 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:57:41.078452 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:57:41.078476 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:57:41.078501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:57:41.078513 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:57:41.078523 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:57:41.078533 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:57:41.078544 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:57:41.078562 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:57:41.078580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:41.078593 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:57:41.078612 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:57:41.078637 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:57:41.078662 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:57:41.078679 systemd[1]: Reached target machines.target - Containers.
May 13 23:57:41.078690 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:57:41.078711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:57:41.078729 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:57:41.078739 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:57:41.078764 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:57:41.078780 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:57:41.078791 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:57:41.078809 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:57:41.078830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:57:41.078841 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:57:41.078863 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:57:41.078884 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:57:41.078899 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:57:41.078916 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:57:41.078934 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:57:41.078951 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:57:41.078971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:57:41.078993 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:57:41.079014 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:57:41.079034 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:57:41.079061 kernel: fuse: init (API version 7.39)
May 13 23:57:41.079104 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:57:41.079125 kernel: loop: module loaded
May 13 23:57:41.079148 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:57:41.079169 systemd[1]: Stopped verity-setup.service.
May 13 23:57:41.079194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:57:41.079214 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:57:41.079234 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:57:41.079267 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:57:41.079293 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:57:41.079316 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:57:41.079341 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:57:41.079362 kernel: ACPI: bus type drm_connector registered
May 13 23:57:41.079383 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:57:41.079442 systemd-journald[1252]: Collecting audit messages is disabled.
May 13 23:57:41.079489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:57:41.079516 systemd-journald[1252]: Journal started
May 13 23:57:41.079555 systemd-journald[1252]: Runtime Journal (/run/log/journal/c42c17d7dfe94f3c80e536a68d09397b) is 8M, max 158.7M, 150.7M free.
May 13 23:57:40.458403 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:57:40.466982 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 13 23:57:40.467387 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:57:41.088963 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:57:41.090306 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:57:41.090636 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:57:41.094393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:57:41.094759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:57:41.098023 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:57:41.098457 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:57:41.101810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:57:41.102195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:57:41.105882 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:57:41.106221 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:57:41.109443 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:57:41.109766 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:57:41.113096 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:57:41.116547 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:57:41.120512 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:57:41.124310 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:57:41.146507 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:57:41.151179 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:57:41.159167 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:57:41.164139 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:57:41.164186 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:57:41.167834 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:57:41.177802 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:57:41.183254 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:57:41.185942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:57:41.188002 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:57:41.194516 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:57:41.197263 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:57:41.198505 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:57:41.201364 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:57:41.203220 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:57:41.209180 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:57:41.218416 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:57:41.224838 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:57:41.227823 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:57:41.230801 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:57:41.234805 systemd-journald[1252]: Time spent on flushing to /var/log/journal/c42c17d7dfe94f3c80e536a68d09397b is 38.667ms for 974 entries.
May 13 23:57:41.234805 systemd-journald[1252]: System Journal (/var/log/journal/c42c17d7dfe94f3c80e536a68d09397b) is 8M, max 2.6G, 2.6G free.
May 13 23:57:41.295850 systemd-journald[1252]: Received client request to flush runtime journal.
May 13 23:57:41.295899 kernel: loop0: detected capacity change from 0 to 205544
May 13 23:57:41.235158 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:57:41.261248 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:57:41.274829 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:57:41.281497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:57:41.291219 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:57:41.299396 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:57:41.304158 udevadm[1308]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:57:41.331190 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
May 13 23:57:41.331212 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
May 13 23:57:41.336717 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:57:41.348667 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:57:41.345251 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:57:41.366876 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:57:41.379300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:57:41.392114 kernel: loop1: detected capacity change from 0 to 28424
May 13 23:57:41.468895 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:57:41.746095 kernel: loop2: detected capacity change from 0 to 109808
May 13 23:57:41.808695 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:57:41.812870 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:57:41.844450 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
May 13 23:57:41.844478 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
May 13 23:57:41.849178 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:57:42.248098 kernel: loop3: detected capacity change from 0 to 151640
May 13 23:57:42.483372 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:57:42.488361 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:57:42.525892 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
May 13 23:57:42.686234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:57:42.704327 kernel: loop4: detected capacity change from 0 to 205544
May 13 23:57:42.700967 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:57:42.759942 kernel: loop5: detected capacity change from 0 to 28424
May 13 23:57:42.776764 kernel: loop6: detected capacity change from 0 to 109808
May 13 23:57:42.787038 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:57:42.793977 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 23:57:42.807433 kernel: loop7: detected capacity change from 0 to 151640
May 13 23:57:42.836095 (sd-merge)[1336]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 13 23:57:42.839931 (sd-merge)[1336]: Merged extensions into '/usr'.
May 13 23:57:42.850314 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:57:42.850331 systemd[1]: Reloading...
May 13 23:57:42.931103 kernel: hv_vmbus: registering driver hyperv_fb
May 13 23:57:42.941062 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 13 23:57:42.941157 kernel: mousedev: PS/2 mouse device common for all mice
May 13 23:57:42.941177 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 13 23:57:42.945543 kernel: Console: switching to colour dummy device 80x25
May 13 23:57:42.949087 kernel: Console: switching to colour frame buffer device 128x48
May 13 23:57:42.988113 kernel: hv_vmbus: registering driver hv_balloon
May 13 23:57:43.091113 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 13 23:57:43.112484 zram_generator::config[1406]: No configuration found.
May 13 23:57:43.170940 systemd-networkd[1337]: lo: Link UP
May 13 23:57:43.170954 systemd-networkd[1337]: lo: Gained carrier
May 13 23:57:43.174821 systemd-networkd[1337]: Enumeration completed
May 13 23:57:43.175320 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:43.175333 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:57:43.239784 kernel: mlx5_core 137b:00:02.0 enP4987s1: Link up
May 13 23:57:43.262494 kernel: hv_netvsc 6045bddd-a4ab-6045-bddd-a4ab6045bddd eth0: Data path switched to VF: enP4987s1
May 13 23:57:43.269308 systemd-networkd[1337]: enP4987s1: Link UP
May 13 23:57:43.269854 systemd-networkd[1337]: eth0: Link UP
May 13 23:57:43.270119 systemd-networkd[1337]: eth0: Gained carrier
May 13 23:57:43.270577 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:43.279435 systemd-networkd[1337]: enP4987s1: Gained carrier May 13 23:57:43.324789 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 13 23:57:43.340299 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1338) May 13 23:57:43.525706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:43.553130 kernel: kvm_intel: Using Hyper-V Enlightened VMCS May 13 23:57:43.690343 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 13 23:57:43.694041 systemd[1]: Reloading finished in 843 ms. May 13 23:57:43.714994 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:57:43.717866 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:57:43.720975 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:57:43.778403 systemd[1]: Starting ensure-sysext.service... May 13 23:57:43.784298 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:57:43.789598 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:57:43.794530 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:57:43.803333 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:57:43.809198 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:57:43.846307 systemd[1]: Reload requested from client PID 1524 ('systemctl') (unit ensure-sysext.service)... May 13 23:57:43.846327 systemd[1]: Reloading... 
May 13 23:57:43.860656 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:57:43.861040 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:57:43.862294 systemd-tmpfiles[1528]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:57:43.862705 systemd-tmpfiles[1528]: ACLs are not supported, ignoring. May 13 23:57:43.862796 systemd-tmpfiles[1528]: ACLs are not supported, ignoring. May 13 23:57:43.883858 systemd-tmpfiles[1528]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:57:43.883871 systemd-tmpfiles[1528]: Skipping /boot May 13 23:57:43.895965 systemd-tmpfiles[1528]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:57:43.901823 systemd-tmpfiles[1528]: Skipping /boot May 13 23:57:43.962130 zram_generator::config[1567]: No configuration found. May 13 23:57:44.094875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:44.214905 systemd[1]: Reloading finished in 367 ms. May 13 23:57:44.230284 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:57:44.247198 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:57:44.251811 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:57:44.255548 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:57:44.265424 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
May 13 23:57:44.275335 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:57:44.282175 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:57:44.288911 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:57:44.294337 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:57:44.297551 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:57:44.314689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:44.314970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:57:44.320553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:57:44.328235 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:57:44.338881 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:57:44.346412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:57:44.346597 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:57:44.346728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:44.354699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:57:44.355529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 13 23:57:44.377758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:57:44.378521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:57:44.385945 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:57:44.386276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:57:44.401160 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:57:44.405512 lvm[1632]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:57:44.422260 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:57:44.422640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:57:44.429177 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:57:44.436571 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:57:44.444577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:57:44.452776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:57:44.457506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:57:44.457684 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:57:44.457919 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:57:44.459731 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 13 23:57:44.466560 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:57:44.471731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:57:44.472117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:57:44.475757 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:57:44.476166 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:57:44.478017 systemd-resolved[1635]: Positive Trust Anchors: May 13 23:57:44.478034 systemd-resolved[1635]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:57:44.478086 systemd-resolved[1635]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:57:44.479248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:57:44.479436 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:57:44.480910 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:57:44.481059 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:57:44.484462 systemd[1]: Finished ensure-sysext.service. May 13 23:57:44.492596 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:57:44.497300 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
May 13 23:57:44.498216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:57:44.498275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:57:44.504482 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:57:44.512047 lvm[1671]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:57:44.516468 systemd-resolved[1635]: Using system hostname 'ci-4284.0.0-n-84802b4006'. May 13 23:57:44.519258 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:57:44.523214 systemd[1]: Reached target network.target - Network. May 13 23:57:44.527329 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:57:44.536862 augenrules[1675]: No rules May 13 23:57:44.539187 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:57:44.539450 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:57:44.563054 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:57:44.654079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:57:44.828220 systemd-networkd[1337]: eth0: Gained IPv6LL May 13 23:57:44.831135 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:57:44.834912 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:57:45.012668 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:57:45.016134 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 13 23:57:45.212366 systemd-networkd[1337]: enP4987s1: Gained IPv6LL May 13 23:57:47.078843 ldconfig[1296]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:57:47.089179 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:57:47.094952 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:57:47.110833 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:57:47.113728 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:57:47.116286 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:57:47.119054 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:57:47.124580 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:57:47.127277 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:57:47.130158 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:57:47.133194 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:57:47.133241 systemd[1]: Reached target paths.target - Path Units. May 13 23:57:47.135306 systemd[1]: Reached target timers.target - Timer Units. May 13 23:57:47.138599 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:57:47.142608 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:57:47.147747 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:57:47.150955 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
May 13 23:57:47.154162 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:57:47.164657 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:57:47.167791 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:57:47.171462 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:57:47.174006 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:57:47.176279 systemd[1]: Reached target basic.target - Basic System. May 13 23:57:47.178526 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:57:47.178559 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:57:47.180800 systemd[1]: Starting chronyd.service - NTP client/server... May 13 23:57:47.186164 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:57:47.192847 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:57:47.197513 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:57:47.203259 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:57:47.211272 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:57:47.213747 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:57:47.213795 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 13 23:57:47.215821 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 13 23:57:47.218328 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
May 13 23:57:47.222818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:47.230138 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:57:47.241261 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:57:47.245615 jq[1696]: false May 13 23:57:47.251931 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:57:47.257284 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:57:47.259134 KVP[1698]: KVP starting; pid is:1698 May 13 23:57:47.266093 kernel: hv_utils: KVP IC version 4.0 May 13 23:57:47.268039 KVP[1698]: KVP LIC Version: 3.1 May 13 23:57:47.272235 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:57:47.279913 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:57:47.284514 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:57:47.296376 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:57:47.297651 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:57:47.301820 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:57:47.312526 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:57:47.312809 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:57:47.340542 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:57:47.340846 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 13 23:57:47.354573 dbus-daemon[1695]: [system] SELinux support is enabled May 13 23:57:47.356321 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:57:47.364976 extend-filesystems[1697]: Found loop4 May 13 23:57:47.379295 extend-filesystems[1697]: Found loop5 May 13 23:57:47.379295 extend-filesystems[1697]: Found loop6 May 13 23:57:47.379295 extend-filesystems[1697]: Found loop7 May 13 23:57:47.379295 extend-filesystems[1697]: Found sda May 13 23:57:47.379295 extend-filesystems[1697]: Found sda1 May 13 23:57:47.379295 extend-filesystems[1697]: Found sda2 May 13 23:57:47.379295 extend-filesystems[1697]: Found sda3 May 13 23:57:47.379295 extend-filesystems[1697]: Found usr May 13 23:57:47.379295 extend-filesystems[1697]: Found sda4 May 13 23:57:47.379295 extend-filesystems[1697]: Found sda6 May 13 23:57:47.379295 extend-filesystems[1697]: Found sda7 May 13 23:57:47.379295 extend-filesystems[1697]: Found sda9 May 13 23:57:47.379295 extend-filesystems[1697]: Checking size of /dev/sda9 May 13 23:57:47.365102 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:57:47.416913 update_engine[1713]: I20250513 23:57:47.387856 1713 main.cc:92] Flatcar Update Engine starting May 13 23:57:47.417214 jq[1714]: true May 13 23:57:47.365142 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:57:47.370351 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:57:47.370390 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 13 23:57:47.404748 (chronyd)[1692]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 13 23:57:47.419502 systemd[1]: Started update-engine.service - Update Engine. May 13 23:57:47.419859 update_engine[1713]: I20250513 23:57:47.419601 1713 update_check_scheduler.cc:74] Next update check in 9m6s May 13 23:57:47.424034 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:57:47.430641 (ntainerd)[1733]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:57:47.437625 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:57:47.437956 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:57:47.440928 extend-filesystems[1697]: Old size kept for /dev/sda9 May 13 23:57:47.443482 extend-filesystems[1697]: Found sr0 May 13 23:57:47.447618 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:57:47.447902 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:57:47.449789 chronyd[1737]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 13 23:57:47.476521 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:57:47.482398 chronyd[1737]: Timezone right/UTC failed leap second check, ignoring May 13 23:57:47.482637 chronyd[1737]: Loaded seccomp filter (level 2) May 13 23:57:47.491063 systemd[1]: Started chronyd.service - NTP client/server. 
May 13 23:57:47.502835 jq[1742]: true May 13 23:57:47.553721 tar[1721]: linux-amd64/helm May 13 23:57:47.561514 coreos-metadata[1694]: May 13 23:57:47.561 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 13 23:57:47.567255 coreos-metadata[1694]: May 13 23:57:47.567 INFO Fetch successful May 13 23:57:47.569705 coreos-metadata[1694]: May 13 23:57:47.569 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 13 23:57:47.575193 coreos-metadata[1694]: May 13 23:57:47.575 INFO Fetch successful May 13 23:57:47.575193 coreos-metadata[1694]: May 13 23:57:47.575 INFO Fetching http://168.63.129.16/machine/6cdd8cf4-8274-4ca4-85b6-d55909decc7e/b615fd3d%2Dd60a%2D4fd6%2D92f8%2Dda80f5d4af10.%5Fci%2D4284.0.0%2Dn%2D84802b4006?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 13 23:57:47.577776 coreos-metadata[1694]: May 13 23:57:47.577 INFO Fetch successful May 13 23:57:47.577776 coreos-metadata[1694]: May 13 23:57:47.577 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 13 23:57:47.587535 coreos-metadata[1694]: May 13 23:57:47.587 INFO Fetch successful May 13 23:57:47.613317 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1779) May 13 23:57:47.652995 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:57:47.657097 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:57:47.699370 systemd-logind[1711]: New seat seat0. May 13 23:57:47.773053 systemd-logind[1711]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:57:47.777465 systemd[1]: Started systemd-logind.service - User Login Management. 
May 13 23:57:47.814867 bash[1769]: Updated "/home/core/.ssh/authorized_keys" May 13 23:57:47.817651 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:57:47.825266 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:57:48.185903 locksmithd[1739]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:57:48.238521 sshd_keygen[1750]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:57:48.291769 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:57:48.302443 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:57:48.307630 tar[1721]: linux-amd64/LICENSE May 13 23:57:48.307630 tar[1721]: linux-amd64/README.md May 13 23:57:48.308420 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 13 23:57:48.334107 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:57:48.334747 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:57:48.344213 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:57:48.349843 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 13 23:57:48.361241 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:57:48.457161 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:57:48.462819 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:57:48.471680 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:57:48.474607 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:57:48.846708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:57:49.048121 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:49.527461 containerd[1733]: time="2025-05-13T23:57:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:57:49.529091 containerd[1733]: time="2025-05-13T23:57:49.528252900Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:57:49.542543 containerd[1733]: time="2025-05-13T23:57:49.542489700Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.3µs" May 13 23:57:49.542543 containerd[1733]: time="2025-05-13T23:57:49.542538000Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:57:49.542677 containerd[1733]: time="2025-05-13T23:57:49.542562300Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:57:49.542779 containerd[1733]: time="2025-05-13T23:57:49.542755200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:57:49.542825 containerd[1733]: time="2025-05-13T23:57:49.542796000Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:57:49.542857 containerd[1733]: time="2025-05-13T23:57:49.542840100Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:57:49.542944 containerd[1733]: time="2025-05-13T23:57:49.542919900Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:57:49.543004 
containerd[1733]: time="2025-05-13T23:57:49.542943600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:57:49.543305 containerd[1733]: time="2025-05-13T23:57:49.543270100Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:57:49.543305 containerd[1733]: time="2025-05-13T23:57:49.543301500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:57:49.543412 containerd[1733]: time="2025-05-13T23:57:49.543318000Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:57:49.543412 containerd[1733]: time="2025-05-13T23:57:49.543328700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:57:49.543486 containerd[1733]: time="2025-05-13T23:57:49.543438600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:57:49.544079 containerd[1733]: time="2025-05-13T23:57:49.543722000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:57:49.544079 containerd[1733]: time="2025-05-13T23:57:49.543768600Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:57:49.544079 containerd[1733]: time="2025-05-13T23:57:49.543785300Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:57:49.544079 containerd[1733]: 
time="2025-05-13T23:57:49.543820200Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 23:57:49.544250 containerd[1733]: time="2025-05-13T23:57:49.544163300Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 23:57:49.544289 containerd[1733]: time="2025-05-13T23:57:49.544246100Z" level=info msg="metadata content store policy set" policy=shared
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556554300Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556629600Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556652200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556673800Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556693000Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556718200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556749800Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556770800Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556786700Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556803500Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556819100Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 23:57:49.556941 containerd[1733]: time="2025-05-13T23:57:49.556843500Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557024300Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557053700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557132300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557151900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557182200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557199000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557217600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557233100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557262700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557280800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557296500Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 23:57:49.557436 containerd[1733]: time="2025-05-13T23:57:49.557422800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 23:57:49.557823 containerd[1733]: time="2025-05-13T23:57:49.557443700Z" level=info msg="Start snapshots syncer"
May 13 23:57:49.557823 containerd[1733]: time="2025-05-13T23:57:49.557468700Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 23:57:49.557886 containerd[1733]: time="2025-05-13T23:57:49.557843100Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 23:57:49.558025 containerd[1733]: time="2025-05-13T23:57:49.557926100Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 23:57:49.558088 containerd[1733]: time="2025-05-13T23:57:49.558037000Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558215700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558272700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558308100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558324300Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558341100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558357000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558385400Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558416300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558433200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558458900Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558498800Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558527900Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:57:49.559860 containerd[1733]: time="2025-05-13T23:57:49.558541900Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.558555000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.558567400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.558581600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.558624500Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.558646100Z" level=info msg="runtime interface created"
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.558653700Z" level=info msg="created NRI interface"
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.558665600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.558702500Z" level=info msg="Connect containerd service"
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.558738400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 23:57:49.560362 containerd[1733]: time="2025-05-13T23:57:49.560090500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:57:49.562275 kubelet[1877]: E0513 23:57:49.562188 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:57:49.564600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:57:49.564801 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:57:49.565225 systemd[1]: kubelet.service: Consumed 865ms CPU time, 233.6M memory peak.
May 13 23:57:50.566337 waagent[1865]: 2025-05-13T23:57:50.566243Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
May 13 23:57:50.569224 waagent[1865]: 2025-05-13T23:57:50.569140Z INFO Daemon Daemon OS: flatcar 4284.0.0
May 13 23:57:50.571236 waagent[1865]: 2025-05-13T23:57:50.571186Z INFO Daemon Daemon Python: 3.11.11
May 13 23:57:50.574624 waagent[1865]: 2025-05-13T23:57:50.573254Z INFO Daemon Daemon Run daemon
May 13 23:57:50.576077 waagent[1865]: 2025-05-13T23:57:50.575089Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4284.0.0'
May 13 23:57:50.579202 waagent[1865]: 2025-05-13T23:57:50.579087Z INFO Daemon Daemon Using waagent for provisioning
May 13 23:57:50.581585 waagent[1865]: 2025-05-13T23:57:50.581537Z INFO Daemon Daemon Activate resource disk
May 13 23:57:50.585873 waagent[1865]: 2025-05-13T23:57:50.583599Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
May 13 23:57:50.594089 waagent[1865]: 2025-05-13T23:57:50.592049Z INFO Daemon Daemon Found device: None
May 13 23:57:50.597089 waagent[1865]: 2025-05-13T23:57:50.594227Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
May 13 23:57:50.598956 waagent[1865]: 2025-05-13T23:57:50.597767Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
May 13 23:57:50.603226 waagent[1865]: 2025-05-13T23:57:50.603179Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 13 23:57:50.605781 waagent[1865]: 2025-05-13T23:57:50.605736Z INFO Daemon Daemon Running default provisioning handler
May 13 23:57:50.615862 waagent[1865]: 2025-05-13T23:57:50.615572Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
May 13 23:57:50.622105 waagent[1865]: 2025-05-13T23:57:50.622035Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 13 23:57:50.626622 waagent[1865]: 2025-05-13T23:57:50.626501Z INFO Daemon Daemon cloud-init is enabled: False
May 13 23:57:50.629161 waagent[1865]: 2025-05-13T23:57:50.628758Z INFO Daemon Daemon Copying ovf-env.xml
May 13 23:57:50.713094 waagent[1865]: 2025-05-13T23:57:50.712177Z INFO Daemon Daemon Successfully mounted dvd
May 13 23:57:50.727450 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
May 13 23:57:50.729627 waagent[1865]: 2025-05-13T23:57:50.728007Z INFO Daemon Daemon Detect protocol endpoint
May 13 23:57:50.737754 waagent[1865]: 2025-05-13T23:57:50.730588Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 13 23:57:50.737754 waagent[1865]: 2025-05-13T23:57:50.733264Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
May 13 23:57:50.737754 waagent[1865]: 2025-05-13T23:57:50.736093Z INFO Daemon Daemon Test for route to 168.63.129.16
May 13 23:57:50.739319 waagent[1865]: 2025-05-13T23:57:50.738681Z INFO Daemon Daemon Route to 168.63.129.16 exists
May 13 23:57:50.741712 waagent[1865]: 2025-05-13T23:57:50.740999Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
May 13 23:57:50.748088 containerd[1733]: time="2025-05-13T23:57:50.747908700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 23:57:50.748088 containerd[1733]: time="2025-05-13T23:57:50.748000600Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 23:57:50.748088 containerd[1733]: time="2025-05-13T23:57:50.747940200Z" level=info msg="Start subscribing containerd event"
May 13 23:57:50.748525 containerd[1733]: time="2025-05-13T23:57:50.748060000Z" level=info msg="Start recovering state"
May 13 23:57:50.748525 containerd[1733]: time="2025-05-13T23:57:50.748209500Z" level=info msg="Start event monitor"
May 13 23:57:50.748525 containerd[1733]: time="2025-05-13T23:57:50.748234600Z" level=info msg="Start cni network conf syncer for default"
May 13 23:57:50.748525 containerd[1733]: time="2025-05-13T23:57:50.748249200Z" level=info msg="Start streaming server"
May 13 23:57:50.748525 containerd[1733]: time="2025-05-13T23:57:50.748260600Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 23:57:50.748525 containerd[1733]: time="2025-05-13T23:57:50.748278100Z" level=info msg="runtime interface starting up..."
May 13 23:57:50.748525 containerd[1733]: time="2025-05-13T23:57:50.748286500Z" level=info msg="starting plugins..."
May 13 23:57:50.748525 containerd[1733]: time="2025-05-13T23:57:50.748306500Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 23:57:50.748921 systemd[1]: Started containerd.service - containerd container runtime.
May 13 23:57:50.751546 containerd[1733]: time="2025-05-13T23:57:50.751483800Z" level=info msg="containerd successfully booted in 1.225053s"
May 13 23:57:50.752682 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 23:57:50.755621 systemd[1]: Startup finished in 924ms (firmware) + 27.872s (loader) + 955ms (kernel) + 11.069s (initrd) + 13.463s (userspace) = 54.285s.
May 13 23:57:50.766096 waagent[1865]: 2025-05-13T23:57:50.764936Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
May 13 23:57:50.769536 waagent[1865]: 2025-05-13T23:57:50.769505Z INFO Daemon Daemon Wire protocol version:2012-11-30
May 13 23:57:50.772240 waagent[1865]: 2025-05-13T23:57:50.772190Z INFO Daemon Daemon Server preferred version:2015-04-05
May 13 23:57:50.909729 waagent[1865]: 2025-05-13T23:57:50.909637Z INFO Daemon Daemon Initializing goal state during protocol detection
May 13 23:57:50.915444 waagent[1865]: 2025-05-13T23:57:50.910827Z INFO Daemon Daemon Forcing an update of the goal state.
May 13 23:57:50.915884 waagent[1865]: 2025-05-13T23:57:50.915381Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 13 23:57:50.932044 waagent[1865]: 2025-05-13T23:57:50.931984Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164
May 13 23:57:50.940932 waagent[1865]: 2025-05-13T23:57:50.933487Z INFO Daemon
May 13 23:57:50.940932 waagent[1865]: 2025-05-13T23:57:50.935151Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 29b66356-9648-4d64-b2d6-03e9fdf358a4 eTag: 4910779180287141092 source: Fabric]
May 13 23:57:50.940932 waagent[1865]: 2025-05-13T23:57:50.936557Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
May 13 23:57:50.940932 waagent[1865]: 2025-05-13T23:57:50.937338Z INFO Daemon
May 13 23:57:50.940932 waagent[1865]: 2025-05-13T23:57:50.937505Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
May 13 23:57:50.949951 waagent[1865]: 2025-05-13T23:57:50.949908Z INFO Daemon Daemon Downloading artifacts profile blob
May 13 23:57:51.101568 waagent[1865]: 2025-05-13T23:57:51.101490Z INFO Daemon Downloaded certificate {'thumbprint': 'C33983DCEFE61F4CD6313607FE980FE1CF69230D', 'hasPrivateKey': False}
May 13 23:57:51.112669 waagent[1865]: 2025-05-13T23:57:51.103158Z INFO Daemon Downloaded certificate {'thumbprint': 'A37CD1314A4C07D94C62FB3A67A3C34C07AB9C09', 'hasPrivateKey': True}
May 13 23:57:51.112669 waagent[1865]: 2025-05-13T23:57:51.104058Z INFO Daemon Fetch goal state completed
May 13 23:57:51.123732 login[1870]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
May 13 23:57:51.125647 login[1871]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 13 23:57:51.138710 systemd-logind[1711]: New session 2 of user core.
May 13 23:57:51.139332 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 23:57:51.140700 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 23:57:51.152846 waagent[1865]: 2025-05-13T23:57:51.152802Z INFO Daemon Daemon Starting provisioning
May 13 23:57:51.159507 waagent[1865]: 2025-05-13T23:57:51.154178Z INFO Daemon Daemon Handle ovf-env.xml.
May 13 23:57:51.159507 waagent[1865]: 2025-05-13T23:57:51.155039Z INFO Daemon Daemon Set hostname [ci-4284.0.0-n-84802b4006]
May 13 23:57:51.171851 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 23:57:51.174368 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 23:57:51.184321 waagent[1865]: 2025-05-13T23:57:51.184246Z INFO Daemon Daemon Publish hostname [ci-4284.0.0-n-84802b4006]
May 13 23:57:51.187642 waagent[1865]: 2025-05-13T23:57:51.185699Z INFO Daemon Daemon Examine /proc/net/route for primary interface
May 13 23:57:51.187642 waagent[1865]: 2025-05-13T23:57:51.186650Z INFO Daemon Daemon Primary interface is [eth0]
May 13 23:57:51.192612 (systemd)[1926]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 23:57:51.196311 systemd-logind[1711]: New session c1 of user core.
May 13 23:57:51.198378 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:57:51.198388 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:57:51.198437 systemd-networkd[1337]: eth0: DHCP lease lost
May 13 23:57:51.203028 waagent[1865]: 2025-05-13T23:57:51.200637Z INFO Daemon Daemon Create user account if not exists
May 13 23:57:51.203740 waagent[1865]: 2025-05-13T23:57:51.203677Z INFO Daemon Daemon User core already exists, skip useradd
May 13 23:57:51.206352 waagent[1865]: 2025-05-13T23:57:51.206293Z INFO Daemon Daemon Configure sudoer
May 13 23:57:51.210578 waagent[1865]: 2025-05-13T23:57:51.210525Z INFO Daemon Daemon Configure sshd
May 13 23:57:51.212581 waagent[1865]: 2025-05-13T23:57:51.212530Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
May 13 23:57:51.218454 waagent[1865]: 2025-05-13T23:57:51.213757Z INFO Daemon Daemon Deploy ssh public key.
May 13 23:57:51.229149 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 13 23:57:51.513284 systemd[1926]: Queued start job for default target default.target.
May 13 23:57:51.520186 systemd[1926]: Created slice app.slice - User Application Slice.
May 13 23:57:51.520223 systemd[1926]: Reached target paths.target - Paths.
May 13 23:57:51.520275 systemd[1926]: Reached target timers.target - Timers.
May 13 23:57:51.521526 systemd[1926]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 23:57:51.532181 systemd[1926]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 23:57:51.532426 systemd[1926]: Reached target sockets.target - Sockets.
May 13 23:57:51.532487 systemd[1926]: Reached target basic.target - Basic System.
May 13 23:57:51.532533 systemd[1926]: Reached target default.target - Main User Target.
May 13 23:57:51.532568 systemd[1926]: Startup finished in 327ms.
May 13 23:57:51.532967 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 23:57:51.544229 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 23:57:52.124427 login[1870]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 13 23:57:52.130097 systemd-logind[1711]: New session 1 of user core.
May 13 23:57:52.135223 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 23:57:52.346673 waagent[1865]: 2025-05-13T23:57:52.346590Z INFO Daemon Daemon Provisioning complete
May 13 23:57:52.358456 waagent[1865]: 2025-05-13T23:57:52.358403Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
May 13 23:57:52.365095 waagent[1865]: 2025-05-13T23:57:52.359610Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
May 13 23:57:52.365095 waagent[1865]: 2025-05-13T23:57:52.360409Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
May 13 23:57:52.488389 waagent[1962]: 2025-05-13T23:57:52.488224Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
May 13 23:57:52.488816 waagent[1962]: 2025-05-13T23:57:52.488387Z INFO ExtHandler ExtHandler OS: flatcar 4284.0.0
May 13 23:57:52.488816 waagent[1962]: 2025-05-13T23:57:52.488463Z INFO ExtHandler ExtHandler Python: 3.11.11
May 13 23:57:52.488816 waagent[1962]: 2025-05-13T23:57:52.488530Z INFO ExtHandler ExtHandler CPU Arch: x86_64
May 13 23:57:55.457513 waagent[1962]: 2025-05-13T23:57:55.457419Z INFO ExtHandler ExtHandler Distro: flatcar-4284.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 13 23:57:55.994646 waagent[1962]: 2025-05-13T23:57:55.994557Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:57:55.995016 waagent[1962]: 2025-05-13T23:57:55.994968Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:57:56.003803 waagent[1962]: 2025-05-13T23:57:56.003729Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 13 23:57:56.012225 waagent[1962]: 2025-05-13T23:57:56.012169Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
May 13 23:57:56.012695 waagent[1962]: 2025-05-13T23:57:56.012647Z INFO ExtHandler
May 13 23:57:56.012777 waagent[1962]: 2025-05-13T23:57:56.012736Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d9d9e988-e723-496f-92d5-e6e9bd3ff8ce eTag: 4910779180287141092 source: Fabric]
May 13 23:57:56.013092 waagent[1962]: 2025-05-13T23:57:56.013034Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
May 13 23:57:56.013623 waagent[1962]: 2025-05-13T23:57:56.013567Z INFO ExtHandler
May 13 23:57:56.013692 waagent[1962]: 2025-05-13T23:57:56.013646Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
May 13 23:57:56.017682 waagent[1962]: 2025-05-13T23:57:56.017643Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
May 13 23:57:56.084711 waagent[1962]: 2025-05-13T23:57:56.084629Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C33983DCEFE61F4CD6313607FE980FE1CF69230D', 'hasPrivateKey': False}
May 13 23:57:56.085160 waagent[1962]: 2025-05-13T23:57:56.085109Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A37CD1314A4C07D94C62FB3A67A3C34C07AB9C09', 'hasPrivateKey': True}
May 13 23:57:56.085598 waagent[1962]: 2025-05-13T23:57:56.085553Z INFO ExtHandler Fetch goal state completed
May 13 23:57:56.099253 waagent[1962]: 2025-05-13T23:57:56.099197Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
May 13 23:57:56.104145 waagent[1962]: 2025-05-13T23:57:56.104064Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1962
May 13 23:57:56.104302 waagent[1962]: 2025-05-13T23:57:56.104266Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
May 13 23:57:56.104637 waagent[1962]: 2025-05-13T23:57:56.104596Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
May 13 23:57:56.106030 waagent[1962]: 2025-05-13T23:57:56.105982Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk']
May 13 23:57:56.106488 waagent[1962]: 2025-05-13T23:57:56.106444Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
May 13 23:57:56.106646 waagent[1962]: 2025-05-13T23:57:56.106610Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
May 13 23:57:56.107236 waagent[1962]: 2025-05-13T23:57:56.107197Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
May 13 23:57:57.760434 waagent[1962]: 2025-05-13T23:57:57.760380Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 13 23:57:57.760940 waagent[1962]: 2025-05-13T23:57:57.760666Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 13 23:57:57.768202 waagent[1962]: 2025-05-13T23:57:57.767957Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 13 23:57:57.774950 systemd[1]: Reload requested from client PID 1980 ('systemctl') (unit waagent.service)...
May 13 23:57:57.774967 systemd[1]: Reloading...
May 13 23:57:57.878145 zram_generator::config[2022]: No configuration found.
May 13 23:57:58.005200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:57:58.118394 systemd[1]: Reloading finished in 342 ms.
May 13 23:57:58.136100 waagent[1962]: 2025-05-13T23:57:58.135289Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
May 13 23:57:58.136100 waagent[1962]: 2025-05-13T23:57:58.135486Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
May 13 23:57:58.487381 waagent[1962]: 2025-05-13T23:57:58.487222Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
May 13 23:57:58.487747 waagent[1962]: 2025-05-13T23:57:58.487686Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
May 13 23:57:58.488676 waagent[1962]: 2025-05-13T23:57:58.488605Z INFO ExtHandler ExtHandler Starting env monitor service.
May 13 23:57:58.488846 waagent[1962]: 2025-05-13T23:57:58.488773Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:57:58.489315 waagent[1962]: 2025-05-13T23:57:58.489262Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 13 23:57:58.489519 waagent[1962]: 2025-05-13T23:57:58.489481Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:57:58.489838 waagent[1962]: 2025-05-13T23:57:58.489793Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 13 23:57:58.490082 waagent[1962]: 2025-05-13T23:57:58.490030Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 13 23:57:58.490247 waagent[1962]: 2025-05-13T23:57:58.490213Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 13 23:57:58.490346 waagent[1962]: 2025-05-13T23:57:58.490279Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 13 23:57:58.490822 waagent[1962]: 2025-05-13T23:57:58.490777Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 13 23:57:58.490928 waagent[1962]: 2025-05-13T23:57:58.490865Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 13 23:57:58.490928 waagent[1962]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 13 23:57:58.490928 waagent[1962]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
May 13 23:57:58.490928 waagent[1962]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 13 23:57:58.490928 waagent[1962]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 13 23:57:58.490928 waagent[1962]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 13 23:57:58.490928 waagent[1962]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 13 23:57:58.491189 waagent[1962]: 2025-05-13T23:57:58.490944Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 13 23:57:58.491580 waagent[1962]: 2025-05-13T23:57:58.491539Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 13 23:57:58.491985 waagent[1962]: 2025-05-13T23:57:58.491882Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 13 23:57:58.492624 waagent[1962]: 2025-05-13T23:57:58.492552Z INFO EnvHandler ExtHandler Configure routes
May 13 23:57:58.493376 waagent[1962]: 2025-05-13T23:57:58.493333Z INFO EnvHandler ExtHandler Gateway:None
May 13 23:57:58.493452 waagent[1962]: 2025-05-13T23:57:58.493428Z INFO EnvHandler ExtHandler Routes:None
May 13 23:57:58.497122 waagent[1962]: 2025-05-13T23:57:58.497056Z INFO ExtHandler ExtHandler
May 13 23:57:58.497470 waagent[1962]: 2025-05-13T23:57:58.497434Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5ef1c742-cbea-4d6f-b292-d40dd0940a34 correlation 16079168-ad4a-4874-89ce-4788f45c55b0 created: 2025-05-13T23:56:44.247192Z]
May 13 23:57:58.498505 waagent[1962]: 2025-05-13T23:57:58.498467Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
May 13 23:57:58.500945 waagent[1962]: 2025-05-13T23:57:58.500909Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
May 13 23:57:58.531099 waagent[1962]: 2025-05-13T23:57:58.530915Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9C9B9B16-F5DD-40CC-8B10-C45795A006F4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
May 13 23:57:58.600671 waagent[1962]: 2025-05-13T23:57:58.600592Z INFO MonitorHandler ExtHandler Network interfaces:
May 13 23:57:58.600671 waagent[1962]: Executing ['ip', '-a', '-o', 'link']:
May 13 23:57:58.600671 waagent[1962]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 13 23:57:58.600671 waagent[1962]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:a4:ab brd ff:ff:ff:ff:ff:ff
May 13 23:57:58.600671 waagent[1962]: 3: enP4987s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:a4:ab brd ff:ff:ff:ff:ff:ff\ altname enP4987p0s2
May 13 23:57:58.600671 waagent[1962]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 13 23:57:58.600671 waagent[1962]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 13 23:57:58.600671 waagent[1962]: 2: eth0 inet 10.200.8.37/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
May 13 23:57:58.600671 waagent[1962]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 13 23:57:58.600671 waagent[1962]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
May 13 23:57:58.600671 waagent[1962]: 2: eth0 inet6 fe80::6245:bdff:fedd:a4ab/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 13 23:57:58.600671 waagent[1962]: 3: enP4987s1 inet6 fe80::6245:bdff:fedd:a4ab/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 13 23:57:58.632923 waagent[1962]: 2025-05-13T23:57:58.632869Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
May 13 23:57:58.632923 waagent[1962]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:57:58.632923 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:57:58.632923 waagent[1962]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 13 23:57:58.632923 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:57:58.632923 waagent[1962]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:57:58.632923 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:57:58.632923 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 13 23:57:58.632923 waagent[1962]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 13 23:57:58.632923 waagent[1962]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 13 23:57:58.636684 waagent[1962]: 2025-05-13T23:57:58.636632Z INFO EnvHandler ExtHandler Current Firewall rules:
May 13 23:57:58.636684 waagent[1962]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:57:58.636684 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:57:58.636684 waagent[1962]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 13 23:57:58.636684 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:57:58.636684 waagent[1962]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 13 23:57:58.636684 waagent[1962]: pkts bytes target prot opt in out source destination
May 13 23:57:58.636684 waagent[1962]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 13 23:57:58.636684 waagent[1962]: 14 1463 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 13 23:57:58.636684 waagent[1962]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 13 23:57:58.637083 waagent[1962]: 2025-05-13T23:57:58.636921Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
May 13 23:57:59.597723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 23:57:59.599832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:57:59.726699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:57:59.730820 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:58:00.311100 kubelet[2118]: E0513 23:58:00.311021 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:58:00.314453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:58:00.314658 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:58:00.315058 systemd[1]: kubelet.service: Consumed 156ms CPU time, 95.7M memory peak.
May 13 23:58:01.639413 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 23:58:01.640852 systemd[1]: Started sshd@0-10.200.8.37:22-10.200.16.10:41762.service - OpenSSH per-connection server daemon (10.200.16.10:41762).
May 13 23:58:03.857603 sshd[2125]: Accepted publickey for core from 10.200.16.10 port 41762 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 13 23:58:03.859322 sshd-session[2125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:03.864502 systemd-logind[1711]: New session 3 of user core.
May 13 23:58:03.872237 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 23:58:04.411814 systemd[1]: Started sshd@1-10.200.8.37:22-10.200.16.10:41768.service - OpenSSH per-connection server daemon (10.200.16.10:41768). May 13 23:58:05.048130 sshd[2130]: Accepted publickey for core from 10.200.16.10 port 41768 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:58:05.049657 sshd-session[2130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:05.054301 systemd-logind[1711]: New session 4 of user core. May 13 23:58:05.060217 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:58:05.492416 sshd[2132]: Connection closed by 10.200.16.10 port 41768 May 13 23:58:05.493284 sshd-session[2130]: pam_unix(sshd:session): session closed for user core May 13 23:58:05.496875 systemd[1]: sshd@1-10.200.8.37:22-10.200.16.10:41768.service: Deactivated successfully. May 13 23:58:05.498884 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:58:05.500362 systemd-logind[1711]: Session 4 logged out. Waiting for processes to exit. May 13 23:58:05.501489 systemd-logind[1711]: Removed session 4. May 13 23:58:05.610845 systemd[1]: Started sshd@2-10.200.8.37:22-10.200.16.10:41782.service - OpenSSH per-connection server daemon (10.200.16.10:41782). May 13 23:58:06.248556 sshd[2138]: Accepted publickey for core from 10.200.16.10 port 41782 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:58:06.250234 sshd-session[2138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:06.254543 systemd-logind[1711]: New session 5 of user core. May 13 23:58:06.263230 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:58:06.688927 sshd[2140]: Connection closed by 10.200.16.10 port 41782 May 13 23:58:06.689672 sshd-session[2138]: pam_unix(sshd:session): session closed for user core May 13 23:58:06.692853 systemd[1]: sshd@2-10.200.8.37:22-10.200.16.10:41782.service: Deactivated successfully. 
May 13 23:58:06.694931 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:58:06.696350 systemd-logind[1711]: Session 5 logged out. Waiting for processes to exit. May 13 23:58:06.697448 systemd-logind[1711]: Removed session 5. May 13 23:58:06.800754 systemd[1]: Started sshd@3-10.200.8.37:22-10.200.16.10:41786.service - OpenSSH per-connection server daemon (10.200.16.10:41786). May 13 23:58:07.438777 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 41786 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:58:07.440449 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:07.445423 systemd-logind[1711]: New session 6 of user core. May 13 23:58:07.452213 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:58:07.884089 sshd[2148]: Connection closed by 10.200.16.10 port 41786 May 13 23:58:07.884979 sshd-session[2146]: pam_unix(sshd:session): session closed for user core May 13 23:58:07.889268 systemd[1]: sshd@3-10.200.8.37:22-10.200.16.10:41786.service: Deactivated successfully. May 13 23:58:07.891441 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:58:07.892368 systemd-logind[1711]: Session 6 logged out. Waiting for processes to exit. May 13 23:58:07.893487 systemd-logind[1711]: Removed session 6. May 13 23:58:07.995794 systemd[1]: Started sshd@4-10.200.8.37:22-10.200.16.10:41790.service - OpenSSH per-connection server daemon (10.200.16.10:41790). May 13 23:58:08.632616 sshd[2154]: Accepted publickey for core from 10.200.16.10 port 41790 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:58:08.634286 sshd-session[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:08.639557 systemd-logind[1711]: New session 7 of user core. May 13 23:58:08.646231 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 13 23:58:09.112445 sudo[2157]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:58:09.112793 sudo[2157]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:58:10.347368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:58:10.349401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:10.512606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:10.525387 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:11.021046 kubelet[2182]: E0513 23:58:11.020989 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:11.023365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:11.023557 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:58:11.023952 systemd[1]: kubelet.service: Consumed 145ms CPU time, 96.4M memory peak. May 13 23:58:11.270568 chronyd[1737]: Selected source PHC0 May 13 23:58:11.924808 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 13 23:58:11.935416 (dockerd)[2190]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:58:13.939826 dockerd[2190]: time="2025-05-13T23:58:13.939761175Z" level=info msg="Starting up" May 13 23:58:13.942052 dockerd[2190]: time="2025-05-13T23:58:13.942019175Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:58:14.992764 dockerd[2190]: time="2025-05-13T23:58:14.992470375Z" level=info msg="Loading containers: start." May 13 23:58:15.269143 kernel: Initializing XFRM netlink socket May 13 23:58:15.357658 systemd-networkd[1337]: docker0: Link UP May 13 23:58:15.420420 dockerd[2190]: time="2025-05-13T23:58:15.420374675Z" level=info msg="Loading containers: done." May 13 23:58:15.435754 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1342161320-merged.mount: Deactivated successfully. May 13 23:58:15.838830 dockerd[2190]: time="2025-05-13T23:58:15.838687775Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:58:15.838830 dockerd[2190]: time="2025-05-13T23:58:15.838802475Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:58:15.839051 dockerd[2190]: time="2025-05-13T23:58:15.838971775Z" level=info msg="Daemon has completed initialization" May 13 23:58:16.051991 dockerd[2190]: time="2025-05-13T23:58:16.051198875Z" level=info msg="API listen on /run/docker.sock" May 13 23:58:16.052400 systemd[1]: Started docker.service - Docker Application Container Engine. 
May 13 23:58:17.633506 containerd[1733]: time="2025-05-13T23:58:17.633461875Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 23:58:18.345117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402950364.mount: Deactivated successfully. May 13 23:58:21.097735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 23:58:21.099940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:21.489595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:21.499393 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:22.060715 kubelet[2407]: E0513 23:58:22.060618 2407 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:22.062854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:22.063051 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:58:22.063452 systemd[1]: kubelet.service: Consumed 140ms CPU time, 97.7M memory peak. 
May 13 23:58:27.134585 containerd[1733]: time="2025-05-13T23:58:27.134522387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:27.137941 containerd[1733]: time="2025-05-13T23:58:27.137864157Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" May 13 23:58:27.140493 containerd[1733]: time="2025-05-13T23:58:27.140372210Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:27.144113 containerd[1733]: time="2025-05-13T23:58:27.144045287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:27.145428 containerd[1733]: time="2025-05-13T23:58:27.144926206Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 9.511419231s" May 13 23:58:27.145428 containerd[1733]: time="2025-05-13T23:58:27.144969907Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 13 23:58:27.146815 containerd[1733]: time="2025-05-13T23:58:27.146791745Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 23:58:30.520813 containerd[1733]: time="2025-05-13T23:58:30.520753852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:30.526380 containerd[1733]: time="2025-05-13T23:58:30.526165792Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" May 13 23:58:30.529496 containerd[1733]: time="2025-05-13T23:58:30.529427677Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:30.534116 containerd[1733]: time="2025-05-13T23:58:30.534033996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:30.535049 containerd[1733]: time="2025-05-13T23:58:30.534910519Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 3.388050273s" May 13 23:58:30.535049 containerd[1733]: time="2025-05-13T23:58:30.534948920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 13 23:58:30.535804 containerd[1733]: time="2025-05-13T23:58:30.535719540Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 23:58:31.205550 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB May 13 23:58:31.928545 containerd[1733]: time="2025-05-13T23:58:31.928491972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:31.931100 containerd[1733]: time="2025-05-13T23:58:31.931011837Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" May 13 23:58:31.934764 containerd[1733]: time="2025-05-13T23:58:31.934708933Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:31.939518 containerd[1733]: time="2025-05-13T23:58:31.939485757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:31.940689 containerd[1733]: time="2025-05-13T23:58:31.940358780Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.40460314s" May 13 23:58:31.940689 containerd[1733]: time="2025-05-13T23:58:31.940403781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 13 23:58:31.941231 containerd[1733]: time="2025-05-13T23:58:31.941203502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 23:58:32.097449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
May 13 23:58:32.101305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:32.220053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:32.227404 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:32.284570 update_engine[1713]: I20250513 23:58:32.284456 1713 update_attempter.cc:509] Updating boot flags... May 13 23:58:32.834715 kubelet[2470]: E0513 23:58:32.834546 2470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:32.836858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:32.837051 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:58:32.837481 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98M memory peak. May 13 23:58:32.860823 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2492) May 13 23:58:33.022117 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2495) May 13 23:58:34.117164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477594283.mount: Deactivated successfully. 
May 13 23:58:34.618727 containerd[1733]: time="2025-05-13T23:58:34.618671014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:34.624923 containerd[1733]: time="2025-05-13T23:58:34.624848548Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" May 13 23:58:34.631980 containerd[1733]: time="2025-05-13T23:58:34.631941401Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:34.635505 containerd[1733]: time="2025-05-13T23:58:34.635429677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:34.636884 containerd[1733]: time="2025-05-13T23:58:34.636108692Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.694840388s" May 13 23:58:34.636884 containerd[1733]: time="2025-05-13T23:58:34.636151792Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 13 23:58:34.636884 containerd[1733]: time="2025-05-13T23:58:34.636658303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:58:35.245589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138957783.mount: Deactivated successfully. 
May 13 23:58:36.305013 containerd[1733]: time="2025-05-13T23:58:36.304954416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:36.306863 containerd[1733]: time="2025-05-13T23:58:36.306796859Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 13 23:58:36.311237 containerd[1733]: time="2025-05-13T23:58:36.311181460Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:36.315917 containerd[1733]: time="2025-05-13T23:58:36.315846868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:36.316792 containerd[1733]: time="2025-05-13T23:58:36.316758089Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.680065985s" May 13 23:58:36.317062 containerd[1733]: time="2025-05-13T23:58:36.316901393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 23:58:36.317611 containerd[1733]: time="2025-05-13T23:58:36.317576908Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:58:36.835029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958773829.mount: Deactivated successfully. 
May 13 23:58:36.865011 containerd[1733]: time="2025-05-13T23:58:36.864953178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:58:36.867262 containerd[1733]: time="2025-05-13T23:58:36.867185130Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 13 23:58:36.872185 containerd[1733]: time="2025-05-13T23:58:36.872118544Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:58:36.876779 containerd[1733]: time="2025-05-13T23:58:36.876721950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:58:36.877536 containerd[1733]: time="2025-05-13T23:58:36.877344065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 559.711456ms" May 13 23:58:36.877536 containerd[1733]: time="2025-05-13T23:58:36.877381766Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 23:58:36.878119 containerd[1733]: time="2025-05-13T23:58:36.878093682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 23:58:37.542368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39290998.mount: Deactivated 
successfully. May 13 23:58:41.558000 containerd[1733]: time="2025-05-13T23:58:41.557943003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:41.561439 containerd[1733]: time="2025-05-13T23:58:41.561360982Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" May 13 23:58:41.564387 containerd[1733]: time="2025-05-13T23:58:41.564317751Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:41.568264 containerd[1733]: time="2025-05-13T23:58:41.568206941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:58:41.569716 containerd[1733]: time="2025-05-13T23:58:41.569225764Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.691096481s" May 13 23:58:41.569716 containerd[1733]: time="2025-05-13T23:58:41.569264665Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 23:58:42.847666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 13 23:58:42.852295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:42.997245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:58:43.006375 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:43.059795 kubelet[2729]: E0513 23:58:43.059745 2729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:43.062188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:43.062361 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:58:43.062735 systemd[1]: kubelet.service: Consumed 176ms CPU time, 99M memory peak. May 13 23:58:44.398982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:44.399246 systemd[1]: kubelet.service: Consumed 176ms CPU time, 99M memory peak. May 13 23:58:44.401731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:44.430183 systemd[1]: Reload requested from client PID 2743 ('systemctl') (unit session-7.scope)... May 13 23:58:44.430199 systemd[1]: Reloading... May 13 23:58:44.568103 zram_generator::config[2787]: No configuration found. May 13 23:58:44.704992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:58:44.822773 systemd[1]: Reloading finished in 392 ms. May 13 23:58:44.889005 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:44.892471 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:58:44.892750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:58:44.892819 systemd[1]: kubelet.service: Consumed 121ms CPU time, 83.6M memory peak. May 13 23:58:44.894558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:45.165117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:45.175427 (kubelet)[2861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:58:45.823410 kubelet[2861]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:58:45.823410 kubelet[2861]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:58:45.823410 kubelet[2861]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 23:58:45.824709 kubelet[2861]: I0513 23:58:45.824662 2861 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:58:46.167322 kubelet[2861]: I0513 23:58:46.167273 2861 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:58:46.167322 kubelet[2861]: I0513 23:58:46.167309 2861 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:58:46.167616 kubelet[2861]: I0513 23:58:46.167595 2861 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:58:46.191092 kubelet[2861]: E0513 23:58:46.189839 2861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:46.191092 kubelet[2861]: I0513 23:58:46.189895 2861 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:58:46.201328 kubelet[2861]: I0513 23:58:46.201303 2861 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:58:46.206378 kubelet[2861]: I0513 23:58:46.206340 2861 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:58:46.207764 kubelet[2861]: I0513 23:58:46.207732 2861 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:58:46.207976 kubelet[2861]: I0513 23:58:46.207926 2861 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:58:46.208188 kubelet[2861]: I0513 23:58:46.207971 2861 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-84802b4006","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:58:46.208354 kubelet[2861]: I0513 23:58:46.208188 2861 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:58:46.208354 kubelet[2861]: I0513 23:58:46.208203 2861 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:58:46.208354 kubelet[2861]: I0513 23:58:46.208339 2861 state_mem.go:36] "Initialized new in-memory state store" May 13 23:58:46.210695 kubelet[2861]: I0513 23:58:46.210412 2861 kubelet.go:408] "Attempting to sync node with API server" May 13 23:58:46.210695 kubelet[2861]: I0513 23:58:46.210447 2861 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:58:46.210695 kubelet[2861]: I0513 23:58:46.210492 2861 kubelet.go:314] "Adding apiserver pod source" May 13 23:58:46.210695 kubelet[2861]: I0513 23:58:46.210520 2861 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:58:46.214867 kubelet[2861]: W0513 23:58:46.213742 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-84802b4006&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:46.214867 kubelet[2861]: E0513 23:58:46.213823 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-84802b4006&limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:46.214867 kubelet[2861]: W0513 23:58:46.214684 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: 
connection refused May 13 23:58:46.214867 kubelet[2861]: E0513 23:58:46.214738 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:46.215522 kubelet[2861]: I0513 23:58:46.215502 2861 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:58:46.217592 kubelet[2861]: I0513 23:58:46.217562 2861 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:58:46.217684 kubelet[2861]: W0513 23:58:46.217646 2861 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:58:46.219631 kubelet[2861]: I0513 23:58:46.219485 2861 server.go:1269] "Started kubelet" May 13 23:58:46.222030 kubelet[2861]: I0513 23:58:46.221991 2861 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:58:46.223148 kubelet[2861]: I0513 23:58:46.223125 2861 server.go:460] "Adding debug handlers to kubelet server" May 13 23:58:46.226125 kubelet[2861]: I0513 23:58:46.225662 2861 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:58:46.226125 kubelet[2861]: I0513 23:58:46.225916 2861 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:58:46.226125 kubelet[2861]: I0513 23:58:46.225921 2861 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:58:46.226125 kubelet[2861]: I0513 23:58:46.226049 2861 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:58:46.231725 kubelet[2861]: E0513 23:58:46.229250 2861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-84802b4006.183f3b9d10b83cb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-84802b4006,UID:ci-4284.0.0-n-84802b4006,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-84802b4006,},FirstTimestamp:2025-05-13 23:58:46.219455667 +0000 UTC m=+1.040687772,LastTimestamp:2025-05-13 23:58:46.219455667 +0000 UTC m=+1.040687772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-84802b4006,}" May 13 23:58:46.232998 kubelet[2861]: I0513 23:58:46.232981 2861 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:58:46.233308 kubelet[2861]: I0513 23:58:46.233294 2861 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:58:46.233485 kubelet[2861]: I0513 23:58:46.233475 2861 reconciler.go:26] "Reconciler: start to sync state" May 13 23:58:46.234097 kubelet[2861]: E0513 23:58:46.233648 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:46.234097 kubelet[2861]: E0513 23:58:46.233943 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-84802b4006?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="200ms" May 13 23:58:46.234097 
kubelet[2861]: W0513 23:58:46.234033 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:46.234340 kubelet[2861]: E0513 23:58:46.234319 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:46.234620 kubelet[2861]: I0513 23:58:46.234601 2861 factory.go:221] Registration of the systemd container factory successfully May 13 23:58:46.234813 kubelet[2861]: I0513 23:58:46.234794 2861 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:58:46.238089 kubelet[2861]: I0513 23:58:46.236834 2861 factory.go:221] Registration of the containerd container factory successfully May 13 23:58:46.252293 kubelet[2861]: I0513 23:58:46.252247 2861 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:58:46.253384 kubelet[2861]: I0513 23:58:46.253351 2861 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:58:46.253384 kubelet[2861]: I0513 23:58:46.253386 2861 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:58:46.253528 kubelet[2861]: I0513 23:58:46.253410 2861 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:58:46.253528 kubelet[2861]: E0513 23:58:46.253457 2861 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:58:46.263916 kubelet[2861]: E0513 23:58:46.263890 2861 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:58:46.265377 kubelet[2861]: W0513 23:58:46.264599 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:46.265377 kubelet[2861]: E0513 23:58:46.264664 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:46.270427 kubelet[2861]: I0513 23:58:46.270404 2861 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:58:46.270427 kubelet[2861]: I0513 23:58:46.270425 2861 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:58:46.270564 kubelet[2861]: I0513 23:58:46.270446 2861 state_mem.go:36] "Initialized new in-memory state store" May 13 23:58:46.333869 kubelet[2861]: E0513 23:58:46.333713 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:46.354189 
kubelet[2861]: E0513 23:58:46.354134 2861 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:58:46.434654 kubelet[2861]: E0513 23:58:46.434469 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:46.435125 kubelet[2861]: E0513 23:58:46.435042 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-84802b4006?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="400ms" May 13 23:58:46.535555 kubelet[2861]: E0513 23:58:46.535501 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:46.554831 kubelet[2861]: E0513 23:58:46.554765 2861 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:46.636306 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:46.737416 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:46.836342 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-84802b4006?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="800ms" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:46.838402 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.500439 kubelet[2861]: E0513 
23:58:46.938996 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:46.955270 2861 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:47.039727 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:47.140247 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:47.241210 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:47.342165 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.500439 kubelet[2861]: E0513 23:58:47.442753 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.501331 kubelet[2861]: W0513 23:58:47.476517 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:48.501331 kubelet[2861]: W0513 23:58:47.476517 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-84802b4006&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:48.501331 kubelet[2861]: E0513 23:58:47.476581 2861 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-84802b4006&limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:48.501331 kubelet[2861]: E0513 23:58:47.476578 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:48.501331 kubelet[2861]: E0513 23:58:47.543315 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.501331 kubelet[2861]: W0513 23:58:47.622037 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:48.501534 kubelet[2861]: E0513 23:58:47.622135 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:48.501534 kubelet[2861]: E0513 23:58:47.637189 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-84802b4006?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="1.6s" May 13 23:58:48.501534 kubelet[2861]: 
E0513 23:58:47.644262 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.501534 kubelet[2861]: E0513 23:58:47.745358 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.501534 kubelet[2861]: E0513 23:58:47.755577 2861 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:58:48.501534 kubelet[2861]: W0513 23:58:47.833371 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:48.501534 kubelet[2861]: E0513 23:58:47.833447 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:48.501534 kubelet[2861]: E0513 23:58:47.845663 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.501789 kubelet[2861]: E0513 23:58:47.892374 2861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-84802b4006.183f3b9d10b83cb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-84802b4006,UID:ci-4284.0.0-n-84802b4006,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-84802b4006,},FirstTimestamp:2025-05-13 23:58:46.219455667 +0000 UTC m=+1.040687772,LastTimestamp:2025-05-13 23:58:46.219455667 +0000 UTC m=+1.040687772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-84802b4006,}" May 13 23:58:48.501789 kubelet[2861]: E0513 23:58:47.946718 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.501789 kubelet[2861]: E0513 23:58:48.047263 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.501789 kubelet[2861]: E0513 23:58:48.147925 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.501789 kubelet[2861]: E0513 23:58:48.206303 2861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:48.501789 kubelet[2861]: E0513 23:58:48.248947 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.502028 kubelet[2861]: E0513 23:58:48.349871 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.502028 kubelet[2861]: E0513 23:58:48.450409 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.543193 kubelet[2861]: I0513 
23:58:48.543126 2861 policy_none.go:49] "None policy: Start" May 13 23:58:48.544404 kubelet[2861]: I0513 23:58:48.544340 2861 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:58:48.544404 kubelet[2861]: I0513 23:58:48.544376 2861 state_mem.go:35] "Initializing new in-memory state store" May 13 23:58:48.550572 kubelet[2861]: E0513 23:58:48.550538 2861 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.595519 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:58:48.607223 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:58:48.610685 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:58:48.617816 kubelet[2861]: I0513 23:58:48.617786 2861 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:58:48.618292 kubelet[2861]: I0513 23:58:48.618213 2861 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:58:48.618292 kubelet[2861]: I0513 23:58:48.618227 2861 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:58:48.618642 kubelet[2861]: I0513 23:58:48.618620 2861 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:58:48.620935 kubelet[2861]: E0513 23:58:48.620896 2861 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:58:48.720918 kubelet[2861]: I0513 23:58:48.720857 2861 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-84802b4006" May 13 23:58:48.721348 kubelet[2861]: E0513 23:58:48.721307 2861 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-4284.0.0-n-84802b4006" May 13 23:58:48.924269 kubelet[2861]: I0513 23:58:48.924226 2861 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-84802b4006" May 13 23:58:48.924671 kubelet[2861]: E0513 23:58:48.924634 2861 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-4284.0.0-n-84802b4006" May 13 23:58:49.238447 kubelet[2861]: E0513 23:58:49.238300 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-84802b4006?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="3.2s" May 13 23:58:49.326815 kubelet[2861]: I0513 23:58:49.326775 2861 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-84802b4006" May 13 23:58:49.327220 kubelet[2861]: E0513 23:58:49.327182 2861 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-4284.0.0-n-84802b4006" May 13 23:58:49.331577 kubelet[2861]: W0513 23:58:49.331550 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-84802b4006&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:49.331669 kubelet[2861]: E0513 23:58:49.331590 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-84802b4006&limit=500&resourceVersion=0\": dial tcp 
10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:49.366990 systemd[1]: Created slice kubepods-burstable-pod5438b53c3717cbd88eed8d0639c873d2.slice - libcontainer container kubepods-burstable-pod5438b53c3717cbd88eed8d0639c873d2.slice. May 13 23:58:49.378818 systemd[1]: Created slice kubepods-burstable-pod81477c0db4677ea6aee72684299d85d1.slice - libcontainer container kubepods-burstable-pod81477c0db4677ea6aee72684299d85d1.slice. May 13 23:58:49.385156 systemd[1]: Created slice kubepods-burstable-pod4c65d15f31b80104775e628e93a01ae9.slice - libcontainer container kubepods-burstable-pod4c65d15f31b80104775e628e93a01ae9.slice. May 13 23:58:49.455615 kubelet[2861]: I0513 23:58:49.455495 2861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 23:58:49.455615 kubelet[2861]: I0513 23:58:49.455554 2861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 23:58:49.455615 kubelet[2861]: I0513 23:58:49.455620 2861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 
23:58:49.455967 kubelet[2861]: I0513 23:58:49.455647 2861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 23:58:49.455967 kubelet[2861]: I0513 23:58:49.455675 2861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81477c0db4677ea6aee72684299d85d1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-84802b4006\" (UID: \"81477c0db4677ea6aee72684299d85d1\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-84802b4006" May 13 23:58:49.455967 kubelet[2861]: I0513 23:58:49.455705 2861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 23:58:49.455967 kubelet[2861]: I0513 23:58:49.455731 2861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5438b53c3717cbd88eed8d0639c873d2-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-84802b4006\" (UID: \"5438b53c3717cbd88eed8d0639c873d2\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-84802b4006" May 13 23:58:49.455967 kubelet[2861]: I0513 23:58:49.455754 2861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81477c0db4677ea6aee72684299d85d1-ca-certs\") pod 
\"kube-apiserver-ci-4284.0.0-n-84802b4006\" (UID: \"81477c0db4677ea6aee72684299d85d1\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-84802b4006" May 13 23:58:49.456160 kubelet[2861]: I0513 23:58:49.455776 2861 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81477c0db4677ea6aee72684299d85d1-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-84802b4006\" (UID: \"81477c0db4677ea6aee72684299d85d1\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-84802b4006" May 13 23:58:49.532376 kubelet[2861]: W0513 23:58:49.532227 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:49.532376 kubelet[2861]: E0513 23:58:49.532288 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:49.676661 containerd[1733]: time="2025-05-13T23:58:49.676602558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-84802b4006,Uid:5438b53c3717cbd88eed8d0639c873d2,Namespace:kube-system,Attempt:0,}" May 13 23:58:49.684141 containerd[1733]: time="2025-05-13T23:58:49.684108526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-84802b4006,Uid:81477c0db4677ea6aee72684299d85d1,Namespace:kube-system,Attempt:0,}" May 13 23:58:49.687853 containerd[1733]: time="2025-05-13T23:58:49.687822110Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-84802b4006,Uid:4c65d15f31b80104775e628e93a01ae9,Namespace:kube-system,Attempt:0,}" May 13 23:58:50.129908 kubelet[2861]: I0513 23:58:50.129863 2861 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-84802b4006" May 13 23:58:50.130336 kubelet[2861]: E0513 23:58:50.130299 2861 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-4284.0.0-n-84802b4006" May 13 23:58:50.461618 kubelet[2861]: W0513 23:58:50.461476 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:50.461618 kubelet[2861]: E0513 23:58:50.461539 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 23:58:50.464952 kubelet[2861]: W0513 23:58:50.464915 2861 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused May 13 23:58:50.465111 kubelet[2861]: E0513 23:58:50.464964 2861 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.37:6443: connect: connection refused" logger="UnhandledError" May 13 
23:58:51.011254 containerd[1733]: time="2025-05-13T23:58:51.011154518Z" level=info msg="connecting to shim fc36c6c984dd8d084da51a2021dec57832fe47cada079349500006eeb270ee6c" address="unix:///run/containerd/s/2400e51b5396f40d65fb15bf5331989c3b9c94cd95eba6f602c0868e4f9a78c0" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:51.025516 containerd[1733]: time="2025-05-13T23:58:51.022872181Z" level=info msg="connecting to shim cdc0dfd3d7f050ddb465075237e29ea5fda99a5ce5f03a228e2061e8fc2f7f10" address="unix:///run/containerd/s/3795af4a93cf9520f2254891298226e5a07560bdc61598636f98601087ff443f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:51.045682 containerd[1733]: time="2025-05-13T23:58:51.045628792Z" level=info msg="connecting to shim 4811b4c3ce8dddbeaee9a98972cdbfdf777d044d29a53c81d694c5baf169dc25" address="unix:///run/containerd/s/66fb13f49d33d7fb8d7b38cf2e0ad6b5f8584e50fbf733c6544792c42e472747" namespace=k8s.io protocol=ttrpc version=3 May 13 23:58:51.066671 systemd[1]: Started cri-containerd-fc36c6c984dd8d084da51a2021dec57832fe47cada079349500006eeb270ee6c.scope - libcontainer container fc36c6c984dd8d084da51a2021dec57832fe47cada079349500006eeb270ee6c. May 13 23:58:51.080659 systemd[1]: Started cri-containerd-cdc0dfd3d7f050ddb465075237e29ea5fda99a5ce5f03a228e2061e8fc2f7f10.scope - libcontainer container cdc0dfd3d7f050ddb465075237e29ea5fda99a5ce5f03a228e2061e8fc2f7f10. May 13 23:58:51.099446 systemd[1]: Started cri-containerd-4811b4c3ce8dddbeaee9a98972cdbfdf777d044d29a53c81d694c5baf169dc25.scope - libcontainer container 4811b4c3ce8dddbeaee9a98972cdbfdf777d044d29a53c81d694c5baf169dc25. 
May 13 23:58:51.182964 containerd[1733]: time="2025-05-13T23:58:51.182913874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-84802b4006,Uid:81477c0db4677ea6aee72684299d85d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdc0dfd3d7f050ddb465075237e29ea5fda99a5ce5f03a228e2061e8fc2f7f10\"" May 13 23:58:51.186562 containerd[1733]: time="2025-05-13T23:58:51.186495354Z" level=info msg="CreateContainer within sandbox \"cdc0dfd3d7f050ddb465075237e29ea5fda99a5ce5f03a228e2061e8fc2f7f10\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:58:51.187390 containerd[1733]: time="2025-05-13T23:58:51.187298072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-84802b4006,Uid:4c65d15f31b80104775e628e93a01ae9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4811b4c3ce8dddbeaee9a98972cdbfdf777d044d29a53c81d694c5baf169dc25\"" May 13 23:58:51.189611 containerd[1733]: time="2025-05-13T23:58:51.189578423Z" level=info msg="CreateContainer within sandbox \"4811b4c3ce8dddbeaee9a98972cdbfdf777d044d29a53c81d694c5baf169dc25\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:58:51.191568 containerd[1733]: time="2025-05-13T23:58:51.191303962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-84802b4006,Uid:5438b53c3717cbd88eed8d0639c873d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc36c6c984dd8d084da51a2021dec57832fe47cada079349500006eeb270ee6c\"" May 13 23:58:51.193510 containerd[1733]: time="2025-05-13T23:58:51.193463911Z" level=info msg="CreateContainer within sandbox \"fc36c6c984dd8d084da51a2021dec57832fe47cada079349500006eeb270ee6c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:58:51.216391 containerd[1733]: time="2025-05-13T23:58:51.216348224Z" level=info msg="Container 7eab17512d93de2f56118180afb1112c7041a093fa0accd7780d425257f2c7d4: CDI devices 
from CRI Config.CDIDevices: []" May 13 23:58:51.245701 containerd[1733]: time="2025-05-13T23:58:51.245571580Z" level=info msg="Container ed4a400381128805af2a4456649bd6c3271af3b081c8e5bbcdf91768838f4215: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:51.297121 containerd[1733]: time="2025-05-13T23:58:51.296406222Z" level=info msg="Container 2e19ae1b5afe903bcd7320ae9713321db3658af846674f942a0d2c3948ae52db: CDI devices from CRI Config.CDIDevices: []" May 13 23:58:51.641329 containerd[1733]: time="2025-05-13T23:58:51.641268889Z" level=info msg="CreateContainer within sandbox \"cdc0dfd3d7f050ddb465075237e29ea5fda99a5ce5f03a228e2061e8fc2f7f10\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7eab17512d93de2f56118180afb1112c7041a093fa0accd7780d425257f2c7d4\"" May 13 23:58:51.686430 containerd[1733]: time="2025-05-13T23:58:51.642088807Z" level=info msg="StartContainer for \"7eab17512d93de2f56118180afb1112c7041a093fa0accd7780d425257f2c7d4\"" May 13 23:58:51.688694 containerd[1733]: time="2025-05-13T23:58:51.687928955Z" level=info msg="connecting to shim 7eab17512d93de2f56118180afb1112c7041a093fa0accd7780d425257f2c7d4" address="unix:///run/containerd/s/3795af4a93cf9520f2254891298226e5a07560bdc61598636f98601087ff443f" protocol=ttrpc version=3 May 13 23:58:51.690058 containerd[1733]: time="2025-05-13T23:58:51.690017403Z" level=info msg="CreateContainer within sandbox \"fc36c6c984dd8d084da51a2021dec57832fe47cada079349500006eeb270ee6c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed4a400381128805af2a4456649bd6c3271af3b081c8e5bbcdf91768838f4215\"" May 13 23:58:51.690612 containerd[1733]: time="2025-05-13T23:58:51.690581316Z" level=info msg="StartContainer for \"ed4a400381128805af2a4456649bd6c3271af3b081c8e5bbcdf91768838f4215\"" May 13 23:58:51.691870 containerd[1733]: time="2025-05-13T23:58:51.691832744Z" level=info msg="connecting to shim ed4a400381128805af2a4456649bd6c3271af3b081c8e5bbcdf91768838f4215" 
address="unix:///run/containerd/s/2400e51b5396f40d65fb15bf5331989c3b9c94cd95eba6f602c0868e4f9a78c0" protocol=ttrpc version=3 May 13 23:58:51.710230 systemd[1]: Started cri-containerd-7eab17512d93de2f56118180afb1112c7041a093fa0accd7780d425257f2c7d4.scope - libcontainer container 7eab17512d93de2f56118180afb1112c7041a093fa0accd7780d425257f2c7d4. May 13 23:58:51.715209 systemd[1]: Started cri-containerd-ed4a400381128805af2a4456649bd6c3271af3b081c8e5bbcdf91768838f4215.scope - libcontainer container ed4a400381128805af2a4456649bd6c3271af3b081c8e5bbcdf91768838f4215. May 13 23:58:51.734796 kubelet[2861]: I0513 23:58:51.734727 2861 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-84802b4006" May 13 23:58:51.738218 kubelet[2861]: E0513 23:58:51.735276 2861 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-4284.0.0-n-84802b4006" May 13 23:58:51.750628 containerd[1733]: time="2025-05-13T23:58:51.750581587Z" level=info msg="CreateContainer within sandbox \"4811b4c3ce8dddbeaee9a98972cdbfdf777d044d29a53c81d694c5baf169dc25\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e19ae1b5afe903bcd7320ae9713321db3658af846674f942a0d2c3948ae52db\"" May 13 23:58:51.751170 containerd[1733]: time="2025-05-13T23:58:51.751140300Z" level=info msg="StartContainer for \"2e19ae1b5afe903bcd7320ae9713321db3658af846674f942a0d2c3948ae52db\"" May 13 23:58:51.754580 containerd[1733]: time="2025-05-13T23:58:51.754535577Z" level=info msg="connecting to shim 2e19ae1b5afe903bcd7320ae9713321db3658af846674f942a0d2c3948ae52db" address="unix:///run/containerd/s/66fb13f49d33d7fb8d7b38cf2e0ad6b5f8584e50fbf733c6544792c42e472747" protocol=ttrpc version=3 May 13 23:58:51.784505 systemd[1]: Started cri-containerd-2e19ae1b5afe903bcd7320ae9713321db3658af846674f942a0d2c3948ae52db.scope - libcontainer container 
2e19ae1b5afe903bcd7320ae9713321db3658af846674f942a0d2c3948ae52db. May 13 23:58:51.813261 containerd[1733]: time="2025-05-13T23:58:51.812919911Z" level=info msg="StartContainer for \"7eab17512d93de2f56118180afb1112c7041a093fa0accd7780d425257f2c7d4\" returns successfully" May 13 23:58:51.815492 containerd[1733]: time="2025-05-13T23:58:51.815272665Z" level=info msg="StartContainer for \"ed4a400381128805af2a4456649bd6c3271af3b081c8e5bbcdf91768838f4215\" returns successfully" May 13 23:58:51.911858 containerd[1733]: time="2025-05-13T23:58:51.911653068Z" level=info msg="StartContainer for \"2e19ae1b5afe903bcd7320ae9713321db3658af846674f942a0d2c3948ae52db\" returns successfully" May 13 23:58:53.833635 kubelet[2861]: E0513 23:58:53.833561 2861 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-84802b4006\" not found" node="ci-4284.0.0-n-84802b4006" May 13 23:58:54.223831 kubelet[2861]: I0513 23:58:54.223779 2861 apiserver.go:52] "Watching apiserver" May 13 23:58:54.234475 kubelet[2861]: I0513 23:58:54.234437 2861 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:58:54.349279 kubelet[2861]: E0513 23:58:54.349230 2861 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4284.0.0-n-84802b4006" not found May 13 23:58:54.873223 kubelet[2861]: E0513 23:58:54.873182 2861 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4284.0.0-n-84802b4006" not found May 13 23:58:54.938185 kubelet[2861]: I0513 23:58:54.938141 2861 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-84802b4006" May 13 23:58:54.947985 kubelet[2861]: I0513 23:58:54.947950 2861 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-n-84802b4006" May 13 23:58:55.749084 systemd[1]: 
Reload requested from client PID 3129 ('systemctl') (unit session-7.scope)... May 13 23:58:55.749105 systemd[1]: Reloading... May 13 23:58:55.859104 zram_generator::config[3172]: No configuration found. May 13 23:58:56.001019 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:58:56.134096 systemd[1]: Reloading finished in 384 ms. May 13 23:58:56.166580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:56.182413 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:58:56.182716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:56.182791 systemd[1]: kubelet.service: Consumed 851ms CPU time, 116.9M memory peak. May 13 23:58:56.184787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:02.563399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:02.573614 (kubelet)[3243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:59:02.796917 kubelet[3243]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:59:02.796917 kubelet[3243]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:59:02.796917 kubelet[3243]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.625354 3243 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.630460 3243 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.630477 3243 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.630669 3243 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.631780 3243 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.636634 3243 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.652465 3243 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.658268 3243 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.658454 3243 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:59:02.796917 kubelet[3243]: I0513 23:59:02.658590 3243 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:59:02.797684 kubelet[3243]: I0513 23:59:02.658640 3243 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-84802b4006","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:59:02.797684 kubelet[3243]: I0513 23:59:02.658978 3243 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:59:02.797684 kubelet[3243]: I0513 23:59:02.658990 3243 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:59:02.797684 kubelet[3243]: I0513 23:59:02.659030 3243 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:02.797684 kubelet[3243]: I0513 23:59:02.659180 3243 kubelet.go:408] "Attempting to sync node with API server" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.659201 3243 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.659235 3243 kubelet.go:314] "Adding apiserver pod source" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.659256 3243 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.661472 3243 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.661937 3243 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.662431 3243 server.go:1269] "Started kubelet" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.667431 3243 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.677353 3243 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.678455 3243 server.go:460] "Adding debug handlers to kubelet server" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.681863 3243 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.682768 3243 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.683682 3243 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:59:02.797930 kubelet[3243]: E0513 23:59:02.683958 3243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-84802b4006\" not found" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.684621 3243 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.684841 3243 reconciler.go:26] "Reconciler: start to sync state" May 13 23:59:02.797930 kubelet[3243]: E0513 23:59:02.704119 3243 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.707216 3243 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:59:02.797930 kubelet[3243]: I0513 23:59:02.708613 3243 factory.go:221] Registration of the containerd container factory successfully May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.708730 3243 factory.go:221] Registration of the systemd container factory successfully May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.709593 3243 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.722124 3243 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.722149 3243 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.722187 3243 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:59:02.812146 kubelet[3243]: E0513 23:59:02.722297 3243 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.762933 3243 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.762946 3243 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.762962 3243 state_mem.go:36] "Initialized new in-memory state store" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.763112 3243 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.763122 3243 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.763140 3243 policy_none.go:49] "None policy: Start" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.763749 3243 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.763773 3243 state_mem.go:35] "Initializing new in-memory state store" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.763906 3243 state_mem.go:75] "Updated machine memory state" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.768101 3243 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.796669 3243 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.796700 3243 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:59:02.812146 kubelet[3243]: I0513 23:59:02.797056 3243 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:59:02.817038 kubelet[3243]: I0513 23:59:02.809907 3243 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:59:02.819572 kubelet[3243]: I0513 23:59:02.817285 3243 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:59:02.819658 containerd[1733]: time="2025-05-13T23:59:02.819463456Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:59:02.820112 kubelet[3243]: I0513 23:59:02.819854 3243 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:59:02.873029 kubelet[3243]: W0513 23:59:02.872825 3243 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:02.875468 kubelet[3243]: W0513 23:59:02.875438 3243 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:02.876886 kubelet[3243]: W0513 23:59:02.876644 3243 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:59:02.885977 kubelet[3243]: I0513 23:59:02.885949 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81477c0db4677ea6aee72684299d85d1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-84802b4006\" (UID: \"81477c0db4677ea6aee72684299d85d1\") " 
pod="kube-system/kube-apiserver-ci-4284.0.0-n-84802b4006" May 13 23:59:02.886377 kubelet[3243]: I0513 23:59:02.886147 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 23:59:02.886377 kubelet[3243]: I0513 23:59:02.886181 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 23:59:02.886377 kubelet[3243]: I0513 23:59:02.886203 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 23:59:02.886377 kubelet[3243]: I0513 23:59:02.886224 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81477c0db4677ea6aee72684299d85d1-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-84802b4006\" (UID: \"81477c0db4677ea6aee72684299d85d1\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-84802b4006" May 13 23:59:02.886377 kubelet[3243]: I0513 23:59:02.886246 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/81477c0db4677ea6aee72684299d85d1-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-84802b4006\" (UID: \"81477c0db4677ea6aee72684299d85d1\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-84802b4006" May 13 23:59:02.886603 kubelet[3243]: I0513 23:59:02.886267 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 23:59:02.886603 kubelet[3243]: I0513 23:59:02.886288 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c65d15f31b80104775e628e93a01ae9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-84802b4006\" (UID: \"4c65d15f31b80104775e628e93a01ae9\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" May 13 23:59:02.886603 kubelet[3243]: I0513 23:59:02.886311 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5438b53c3717cbd88eed8d0639c873d2-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-84802b4006\" (UID: \"5438b53c3717cbd88eed8d0639c873d2\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-84802b4006" May 13 23:59:02.929884 kubelet[3243]: I0513 23:59:02.929738 3243 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-84802b4006" May 13 23:59:02.947259 kubelet[3243]: I0513 23:59:02.947218 3243 kubelet_node_status.go:111] "Node was previously registered" node="ci-4284.0.0-n-84802b4006" May 13 23:59:02.947417 kubelet[3243]: I0513 23:59:02.947316 3243 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-n-84802b4006" May 
13 23:59:03.660334 kubelet[3243]: I0513 23:59:03.660294 3243 apiserver.go:52] "Watching apiserver" May 13 23:59:03.671329 systemd[1]: Created slice kubepods-besteffort-pod47858603_aa47_4fee_bee0_a76da9d32868.slice - libcontainer container kubepods-besteffort-pod47858603_aa47_4fee_bee0_a76da9d32868.slice. May 13 23:59:03.683151 kubelet[3243]: I0513 23:59:03.682882 3243 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-84802b4006" podStartSLOduration=1.682863986 podStartE2EDuration="1.682863986s" podCreationTimestamp="2025-05-13 23:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:03.682320673 +0000 UTC m=+1.104727742" watchObservedRunningTime="2025-05-13 23:59:03.682863986 +0000 UTC m=+1.105271055" May 13 23:59:03.685187 kubelet[3243]: I0513 23:59:03.685160 3243 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:59:03.689723 kubelet[3243]: I0513 23:59:03.689694 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47858603-aa47-4fee-bee0-a76da9d32868-lib-modules\") pod \"kube-proxy-728pb\" (UID: \"47858603-aa47-4fee-bee0-a76da9d32868\") " pod="kube-system/kube-proxy-728pb" May 13 23:59:03.689837 kubelet[3243]: I0513 23:59:03.689733 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47858603-aa47-4fee-bee0-a76da9d32868-kube-proxy\") pod \"kube-proxy-728pb\" (UID: \"47858603-aa47-4fee-bee0-a76da9d32868\") " pod="kube-system/kube-proxy-728pb" May 13 23:59:03.689837 kubelet[3243]: I0513 23:59:03.689756 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/47858603-aa47-4fee-bee0-a76da9d32868-xtables-lock\") pod \"kube-proxy-728pb\" (UID: \"47858603-aa47-4fee-bee0-a76da9d32868\") " pod="kube-system/kube-proxy-728pb" May 13 23:59:03.689837 kubelet[3243]: I0513 23:59:03.689776 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxfj\" (UniqueName: \"kubernetes.io/projected/47858603-aa47-4fee-bee0-a76da9d32868-kube-api-access-wjxfj\") pod \"kube-proxy-728pb\" (UID: \"47858603-aa47-4fee-bee0-a76da9d32868\") " pod="kube-system/kube-proxy-728pb" May 13 23:59:03.695873 kubelet[3243]: I0513 23:59:03.695804 3243 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-84802b4006" podStartSLOduration=1.6957875919999998 podStartE2EDuration="1.695787592s" podCreationTimestamp="2025-05-13 23:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:03.695561586 +0000 UTC m=+1.117968655" watchObservedRunningTime="2025-05-13 23:59:03.695787592 +0000 UTC m=+1.118194661" May 13 23:59:03.716426 kubelet[3243]: I0513 23:59:03.716359 3243 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-84802b4006" podStartSLOduration=1.716310577 podStartE2EDuration="1.716310577s" podCreationTimestamp="2025-05-13 23:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:03.716174574 +0000 UTC m=+1.138581643" watchObservedRunningTime="2025-05-13 23:59:03.716310577 +0000 UTC m=+1.138717646" May 13 23:59:03.979223 containerd[1733]: time="2025-05-13T23:59:03.978620384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-728pb,Uid:47858603-aa47-4fee-bee0-a76da9d32868,Namespace:kube-system,Attempt:0,}" May 13 
23:59:04.072530 containerd[1733]: time="2025-05-13T23:59:04.071805689Z" level=info msg="connecting to shim 48c370bd6c5dc432eb8388c3da0263473f2437a3d133023605e7e50d1e5718c0" address="unix:///run/containerd/s/9814c69cb2c87aae16fde1e69a64e28b171ccd90b088e627f4f528b3f87c2273" namespace=k8s.io protocol=ttrpc version=3 May 13 23:59:04.103244 systemd[1]: Started cri-containerd-48c370bd6c5dc432eb8388c3da0263473f2437a3d133023605e7e50d1e5718c0.scope - libcontainer container 48c370bd6c5dc432eb8388c3da0263473f2437a3d133023605e7e50d1e5718c0. May 13 23:59:04.138911 containerd[1733]: time="2025-05-13T23:59:04.138527468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-728pb,Uid:47858603-aa47-4fee-bee0-a76da9d32868,Namespace:kube-system,Attempt:0,} returns sandbox id \"48c370bd6c5dc432eb8388c3da0263473f2437a3d133023605e7e50d1e5718c0\"" May 13 23:59:04.142106 containerd[1733]: time="2025-05-13T23:59:04.141985650Z" level=info msg="CreateContainer within sandbox \"48c370bd6c5dc432eb8388c3da0263473f2437a3d133023605e7e50d1e5718c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:59:04.169100 containerd[1733]: time="2025-05-13T23:59:04.165259501Z" level=info msg="Container 70328da64f70e1dd0e846a033fe615ab82c60a1051257afc9887cb4d49eda916: CDI devices from CRI Config.CDIDevices: []" May 13 23:59:04.172956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016022768.mount: Deactivated successfully. 
May 13 23:59:04.191154 containerd[1733]: time="2025-05-13T23:59:04.191105912Z" level=info msg="CreateContainer within sandbox \"48c370bd6c5dc432eb8388c3da0263473f2437a3d133023605e7e50d1e5718c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70328da64f70e1dd0e846a033fe615ab82c60a1051257afc9887cb4d49eda916\"" May 13 23:59:04.193438 containerd[1733]: time="2025-05-13T23:59:04.193213062Z" level=info msg="StartContainer for \"70328da64f70e1dd0e846a033fe615ab82c60a1051257afc9887cb4d49eda916\"" May 13 23:59:04.195283 containerd[1733]: time="2025-05-13T23:59:04.195176109Z" level=info msg="connecting to shim 70328da64f70e1dd0e846a033fe615ab82c60a1051257afc9887cb4d49eda916" address="unix:///run/containerd/s/9814c69cb2c87aae16fde1e69a64e28b171ccd90b088e627f4f528b3f87c2273" protocol=ttrpc version=3 May 13 23:59:04.222271 systemd[1]: Started cri-containerd-70328da64f70e1dd0e846a033fe615ab82c60a1051257afc9887cb4d49eda916.scope - libcontainer container 70328da64f70e1dd0e846a033fe615ab82c60a1051257afc9887cb4d49eda916. May 13 23:59:04.289359 containerd[1733]: time="2025-05-13T23:59:04.289138532Z" level=info msg="StartContainer for \"70328da64f70e1dd0e846a033fe615ab82c60a1051257afc9887cb4d49eda916\" returns successfully" May 13 23:59:04.665946 systemd[1]: Created slice kubepods-burstable-pod2e42058e_fc3f_4b9d_a619_f57544568f12.slice - libcontainer container kubepods-burstable-pod2e42058e_fc3f_4b9d_a619_f57544568f12.slice. 
May 13 23:59:04.695876 kubelet[3243]: I0513 23:59:04.695818 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2e42058e-fc3f-4b9d-a619-f57544568f12-run\") pod \"kube-flannel-ds-gz9mf\" (UID: \"2e42058e-fc3f-4b9d-a619-f57544568f12\") " pod="kube-flannel/kube-flannel-ds-gz9mf"
May 13 23:59:04.696389 kubelet[3243]: I0513 23:59:04.695886 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzw2\" (UniqueName: \"kubernetes.io/projected/2e42058e-fc3f-4b9d-a619-f57544568f12-kube-api-access-dwzw2\") pod \"kube-flannel-ds-gz9mf\" (UID: \"2e42058e-fc3f-4b9d-a619-f57544568f12\") " pod="kube-flannel/kube-flannel-ds-gz9mf"
May 13 23:59:04.696389 kubelet[3243]: I0513 23:59:04.695922 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/2e42058e-fc3f-4b9d-a619-f57544568f12-cni-plugin\") pod \"kube-flannel-ds-gz9mf\" (UID: \"2e42058e-fc3f-4b9d-a619-f57544568f12\") " pod="kube-flannel/kube-flannel-ds-gz9mf"
May 13 23:59:04.696389 kubelet[3243]: I0513 23:59:04.695941 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/2e42058e-fc3f-4b9d-a619-f57544568f12-cni\") pod \"kube-flannel-ds-gz9mf\" (UID: \"2e42058e-fc3f-4b9d-a619-f57544568f12\") " pod="kube-flannel/kube-flannel-ds-gz9mf"
May 13 23:59:04.696389 kubelet[3243]: I0513 23:59:04.695959 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/2e42058e-fc3f-4b9d-a619-f57544568f12-flannel-cfg\") pod \"kube-flannel-ds-gz9mf\" (UID: \"2e42058e-fc3f-4b9d-a619-f57544568f12\") " pod="kube-flannel/kube-flannel-ds-gz9mf"
May 13 23:59:04.696389 kubelet[3243]: I0513 23:59:04.695980 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e42058e-fc3f-4b9d-a619-f57544568f12-xtables-lock\") pod \"kube-flannel-ds-gz9mf\" (UID: \"2e42058e-fc3f-4b9d-a619-f57544568f12\") " pod="kube-flannel/kube-flannel-ds-gz9mf"
May 13 23:59:04.779994 sudo[2157]: pam_unix(sudo:session): session closed for user root
May 13 23:59:04.881032 sshd[2156]: Connection closed by 10.200.16.10 port 41790
May 13 23:59:04.881433 sshd-session[2154]: pam_unix(sshd:session): session closed for user core
May 13 23:59:04.884847 systemd[1]: sshd@4-10.200.8.37:22-10.200.16.10:41790.service: Deactivated successfully.
May 13 23:59:04.887263 systemd[1]: session-7.scope: Deactivated successfully.
May 13 23:59:04.887513 systemd[1]: session-7.scope: Consumed 3.578s CPU time, 219.6M memory peak.
May 13 23:59:04.889941 systemd-logind[1711]: Session 7 logged out. Waiting for processes to exit.
May 13 23:59:04.891013 systemd-logind[1711]: Removed session 7.
May 13 23:59:04.975641 containerd[1733]: time="2025-05-13T23:59:04.975274768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gz9mf,Uid:2e42058e-fc3f-4b9d-a619-f57544568f12,Namespace:kube-flannel,Attempt:0,}"
May 13 23:59:05.034860 containerd[1733]: time="2025-05-13T23:59:05.034810076Z" level=info msg="connecting to shim 51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7" address="unix:///run/containerd/s/60c0618add374f8461d4d8110b5b4e6d2e161870b134020f7c7f2c94741de3a0" namespace=k8s.io protocol=ttrpc version=3
May 13 23:59:05.068250 systemd[1]: Started cri-containerd-51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7.scope - libcontainer container 51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7.
May 13 23:59:05.115729 containerd[1733]: time="2025-05-13T23:59:05.115685390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gz9mf,Uid:2e42058e-fc3f-4b9d-a619-f57544568f12,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7\""
May 13 23:59:05.117757 containerd[1733]: time="2025-05-13T23:59:05.117454332Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
May 13 23:59:05.315436 kubelet[3243]: I0513 23:59:05.315052 3243 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-728pb" podStartSLOduration=2.315029507 podStartE2EDuration="2.315029507s" podCreationTimestamp="2025-05-13 23:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:04.769474098 +0000 UTC m=+2.191881167" watchObservedRunningTime="2025-05-13 23:59:05.315029507 +0000 UTC m=+2.737436576"
May 13 23:59:06.949772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2189281233.mount: Deactivated successfully.
May 13 23:59:07.034797 containerd[1733]: time="2025-05-13T23:59:07.034740000Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:07.037382 containerd[1733]: time="2025-05-13T23:59:07.037306861Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936"
May 13 23:59:07.040431 containerd[1733]: time="2025-05-13T23:59:07.040371833Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:07.045556 containerd[1733]: time="2025-05-13T23:59:07.045520055Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:07.046650 containerd[1733]: time="2025-05-13T23:59:07.046091169Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.928577235s"
May 13 23:59:07.046650 containerd[1733]: time="2025-05-13T23:59:07.046130170Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
May 13 23:59:07.048563 containerd[1733]: time="2025-05-13T23:59:07.048534926Z" level=info msg="CreateContainer within sandbox \"51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
May 13 23:59:07.075551 containerd[1733]: time="2025-05-13T23:59:07.074739046Z" level=info msg="Container c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:07.143975 containerd[1733]: time="2025-05-13T23:59:07.143835881Z" level=info msg="CreateContainer within sandbox \"51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323\""
May 13 23:59:07.144919 containerd[1733]: time="2025-05-13T23:59:07.144868906Z" level=info msg="StartContainer for \"c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323\""
May 13 23:59:07.146550 containerd[1733]: time="2025-05-13T23:59:07.146515945Z" level=info msg="connecting to shim c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323" address="unix:///run/containerd/s/60c0618add374f8461d4d8110b5b4e6d2e161870b134020f7c7f2c94741de3a0" protocol=ttrpc version=3
May 13 23:59:07.167243 systemd[1]: Started cri-containerd-c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323.scope - libcontainer container c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323.
May 13 23:59:07.192392 systemd[1]: cri-containerd-c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323.scope: Deactivated successfully.
May 13 23:59:07.194781 containerd[1733]: time="2025-05-13T23:59:07.194742586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323\" id:\"c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323\" pid:3576 exited_at:{seconds:1747180747 nanos:194275775}"
May 13 23:59:07.196785 containerd[1733]: time="2025-05-13T23:59:07.196743633Z" level=info msg="received exit event container_id:\"c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323\" id:\"c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323\" pid:3576 exited_at:{seconds:1747180747 nanos:194275775}"
May 13 23:59:07.198782 containerd[1733]: time="2025-05-13T23:59:07.198134266Z" level=info msg="StartContainer for \"c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323\" returns successfully"
May 13 23:59:07.863787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0b10fb9db6883f07c9cefc94661fbe8d0771b0332307975cc77a0bc397d1323-rootfs.mount: Deactivated successfully.
May 13 23:59:10.772468 containerd[1733]: time="2025-05-13T23:59:10.772396771Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
May 13 23:59:13.078986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241895669.mount: Deactivated successfully.
May 13 23:59:16.145309 containerd[1733]: time="2025-05-13T23:59:16.145247235Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:16.192708 containerd[1733]: time="2025-05-13T23:59:16.192583865Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
May 13 23:59:16.243904 containerd[1733]: time="2025-05-13T23:59:16.242599459Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:16.288179 containerd[1733]: time="2025-05-13T23:59:16.288028443Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:59:16.290439 containerd[1733]: time="2025-05-13T23:59:16.289550879Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 5.517103207s"
May 13 23:59:16.290439 containerd[1733]: time="2025-05-13T23:59:16.289599680Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
May 13 23:59:16.292478 containerd[1733]: time="2025-05-13T23:59:16.292411947Z" level=info msg="CreateContainer within sandbox \"51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 13 23:59:16.508040 containerd[1733]: time="2025-05-13T23:59:16.507909990Z" level=info msg="Container 54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:16.700448 containerd[1733]: time="2025-05-13T23:59:16.700388884Z" level=info msg="CreateContainer within sandbox \"51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc\""
May 13 23:59:16.701196 containerd[1733]: time="2025-05-13T23:59:16.701133202Z" level=info msg="StartContainer for \"54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc\""
May 13 23:59:16.703204 containerd[1733]: time="2025-05-13T23:59:16.703158050Z" level=info msg="connecting to shim 54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc" address="unix:///run/containerd/s/60c0618add374f8461d4d8110b5b4e6d2e161870b134020f7c7f2c94741de3a0" protocol=ttrpc version=3
May 13 23:59:16.726220 systemd[1]: Started cri-containerd-54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc.scope - libcontainer container 54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc.
May 13 23:59:16.753380 systemd[1]: cri-containerd-54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc.scope: Deactivated successfully.
May 13 23:59:16.754651 containerd[1733]: time="2025-05-13T23:59:16.754599778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc\" id:\"54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc\" pid:3647 exited_at:{seconds:1747180756 nanos:754056165}"
May 13 23:59:16.757361 containerd[1733]: time="2025-05-13T23:59:16.757253041Z" level=info msg="received exit event container_id:\"54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc\" id:\"54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc\" pid:3647 exited_at:{seconds:1747180756 nanos:754056165}"
May 13 23:59:16.764472 containerd[1733]: time="2025-05-13T23:59:16.764308009Z" level=info msg="StartContainer for \"54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc\" returns successfully"
May 13 23:59:16.776915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54810a6956d47faee53515fb6458ba497dec65ed4dceeb921df1e235a676e7dc-rootfs.mount: Deactivated successfully.
May 13 23:59:16.834825 kubelet[3243]: I0513 23:59:16.834794 3243 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 13 23:59:17.301659 kubelet[3243]: I0513 23:59:16.879263 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62816060-b280-4dcd-aa68-526653ee8c94-config-volume\") pod \"coredns-6f6b679f8f-7ftgn\" (UID: \"62816060-b280-4dcd-aa68-526653ee8c94\") " pod="kube-system/coredns-6f6b679f8f-7ftgn"
May 13 23:59:17.301659 kubelet[3243]: I0513 23:59:16.879327 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7w58\" (UniqueName: \"kubernetes.io/projected/62816060-b280-4dcd-aa68-526653ee8c94-kube-api-access-p7w58\") pod \"coredns-6f6b679f8f-7ftgn\" (UID: \"62816060-b280-4dcd-aa68-526653ee8c94\") " pod="kube-system/coredns-6f6b679f8f-7ftgn"
May 13 23:59:17.301659 kubelet[3243]: I0513 23:59:16.980494 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsnqj\" (UniqueName: \"kubernetes.io/projected/6ee52719-6acd-4834-917b-af4b911361fe-kube-api-access-dsnqj\") pod \"coredns-6f6b679f8f-c8frs\" (UID: \"6ee52719-6acd-4834-917b-af4b911361fe\") " pod="kube-system/coredns-6f6b679f8f-c8frs"
May 13 23:59:17.301659 kubelet[3243]: I0513 23:59:16.980579 3243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ee52719-6acd-4834-917b-af4b911361fe-config-volume\") pod \"coredns-6f6b679f8f-c8frs\" (UID: \"6ee52719-6acd-4834-917b-af4b911361fe\") " pod="kube-system/coredns-6f6b679f8f-c8frs"
May 13 23:59:16.877327 systemd[1]: Created slice kubepods-burstable-pod62816060_b280_4dcd_aa68_526653ee8c94.slice - libcontainer container kubepods-burstable-pod62816060_b280_4dcd_aa68_526653ee8c94.slice.
May 13 23:59:16.884448 systemd[1]: Created slice kubepods-burstable-pod6ee52719_6acd_4834_917b_af4b911361fe.slice - libcontainer container kubepods-burstable-pod6ee52719_6acd_4834_917b_af4b911361fe.slice.
May 13 23:59:17.602546 containerd[1733]: time="2025-05-13T23:59:17.602499213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7ftgn,Uid:62816060-b280-4dcd-aa68-526653ee8c94,Namespace:kube-system,Attempt:0,}"
May 13 23:59:17.605248 containerd[1733]: time="2025-05-13T23:59:17.605204677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c8frs,Uid:6ee52719-6acd-4834-917b-af4b911361fe,Namespace:kube-system,Attempt:0,}"
May 13 23:59:20.864724 systemd[1]: run-netns-cni\x2d2a6faaac\x2d6cfb\x2d4f4a\x2d265a\x2da9614a9d278e.mount: Deactivated successfully.
May 13 23:59:20.867483 containerd[1733]: time="2025-05-13T23:59:20.867031220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c8frs,Uid:6ee52719-6acd-4834-917b-af4b911361fe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7374738b48a4e343281c9933d474b6b53869726fd9e322ab6aec6dbd64034905\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 13 23:59:20.868601 kubelet[3243]: E0513 23:59:20.868286 3243 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7374738b48a4e343281c9933d474b6b53869726fd9e322ab6aec6dbd64034905\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 13 23:59:20.868601 kubelet[3243]: E0513 23:59:20.868381 3243 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7374738b48a4e343281c9933d474b6b53869726fd9e322ab6aec6dbd64034905\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-c8frs"
May 13 23:59:20.868601 kubelet[3243]: E0513 23:59:20.868410 3243 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7374738b48a4e343281c9933d474b6b53869726fd9e322ab6aec6dbd64034905\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-c8frs"
May 13 23:59:20.868601 kubelet[3243]: E0513 23:59:20.868497 3243 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-c8frs_kube-system(6ee52719-6acd-4834-917b-af4b911361fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-c8frs_kube-system(6ee52719-6acd-4834-917b-af4b911361fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7374738b48a4e343281c9933d474b6b53869726fd9e322ab6aec6dbd64034905\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-c8frs" podUID="6ee52719-6acd-4834-917b-af4b911361fe"
May 13 23:59:20.875599 systemd[1]: run-netns-cni\x2dc1d70448\x2d471b\x2d390b\x2d881b\x2d42906a7a5350.mount: Deactivated successfully.
May 13 23:59:20.878484 containerd[1733]: time="2025-05-13T23:59:20.878441392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7ftgn,Uid:62816060-b280-4dcd-aa68-526653ee8c94,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bdce27f86e33b435e30f5b80c7945aa39530d17b558b9f549ae99c0c56aa0c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 13 23:59:20.878711 kubelet[3243]: E0513 23:59:20.878668 3243 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bdce27f86e33b435e30f5b80c7945aa39530d17b558b9f549ae99c0c56aa0c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
May 13 23:59:20.878804 kubelet[3243]: E0513 23:59:20.878735 3243 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bdce27f86e33b435e30f5b80c7945aa39530d17b558b9f549ae99c0c56aa0c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-7ftgn"
May 13 23:59:20.878804 kubelet[3243]: E0513 23:59:20.878759 3243 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bdce27f86e33b435e30f5b80c7945aa39530d17b558b9f549ae99c0c56aa0c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-7ftgn"
May 13 23:59:20.878885 kubelet[3243]: E0513 23:59:20.878812 3243 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7ftgn_kube-system(62816060-b280-4dcd-aa68-526653ee8c94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7ftgn_kube-system(62816060-b280-4dcd-aa68-526653ee8c94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8bdce27f86e33b435e30f5b80c7945aa39530d17b558b9f549ae99c0c56aa0c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-7ftgn" podUID="62816060-b280-4dcd-aa68-526653ee8c94"
May 13 23:59:21.798523 containerd[1733]: time="2025-05-13T23:59:21.797713531Z" level=info msg="CreateContainer within sandbox \"51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
May 13 23:59:21.833289 containerd[1733]: time="2025-05-13T23:59:21.833230378Z" level=info msg="Container b5b98173409fe31535f22f65032b54445f9ee6238edb523e16e35e76e4575d83: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:21.856105 containerd[1733]: time="2025-05-13T23:59:21.855546011Z" level=info msg="CreateContainer within sandbox \"51fb7dfb20da765d9f5744e31a85ceb28e93f6c347b4a08b814d8157572bf2e7\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"b5b98173409fe31535f22f65032b54445f9ee6238edb523e16e35e76e4575d83\""
May 13 23:59:21.859148 containerd[1733]: time="2025-05-13T23:59:21.857012246Z" level=info msg="StartContainer for \"b5b98173409fe31535f22f65032b54445f9ee6238edb523e16e35e76e4575d83\""
May 13 23:59:21.859148 containerd[1733]: time="2025-05-13T23:59:21.857989969Z" level=info msg="connecting to shim b5b98173409fe31535f22f65032b54445f9ee6238edb523e16e35e76e4575d83" address="unix:///run/containerd/s/60c0618add374f8461d4d8110b5b4e6d2e161870b134020f7c7f2c94741de3a0" protocol=ttrpc version=3
May 13 23:59:21.892324 systemd[1]: Started cri-containerd-b5b98173409fe31535f22f65032b54445f9ee6238edb523e16e35e76e4575d83.scope - libcontainer container b5b98173409fe31535f22f65032b54445f9ee6238edb523e16e35e76e4575d83.
May 13 23:59:21.925115 containerd[1733]: time="2025-05-13T23:59:21.925058870Z" level=info msg="StartContainer for \"b5b98173409fe31535f22f65032b54445f9ee6238edb523e16e35e76e4575d83\" returns successfully"
May 13 23:59:23.131667 systemd-networkd[1337]: flannel.1: Link UP
May 13 23:59:23.131680 systemd-networkd[1337]: flannel.1: Gained carrier
May 13 23:59:24.540228 systemd-networkd[1337]: flannel.1: Gained IPv6LL
May 13 23:59:31.724034 containerd[1733]: time="2025-05-13T23:59:31.723970898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c8frs,Uid:6ee52719-6acd-4834-917b-af4b911361fe,Namespace:kube-system,Attempt:0,}"
May 13 23:59:31.745992 systemd-networkd[1337]: cni0: Link UP
May 13 23:59:31.746001 systemd-networkd[1337]: cni0: Gained carrier
May 13 23:59:31.751283 systemd-networkd[1337]: cni0: Lost carrier
May 13 23:59:31.791629 kernel: cni0: port 1(veth4228b07a) entered blocking state
May 13 23:59:31.791717 kernel: cni0: port 1(veth4228b07a) entered disabled state
May 13 23:59:31.791779 kernel: veth4228b07a: entered allmulticast mode
May 13 23:59:31.791802 kernel: veth4228b07a: entered promiscuous mode
May 13 23:59:31.794853 kernel: cni0: port 1(veth4228b07a) entered blocking state
May 13 23:59:31.794922 kernel: cni0: port 1(veth4228b07a) entered forwarding state
May 13 23:59:31.799017 kernel: cni0: port 1(veth4228b07a) entered disabled state
May 13 23:59:31.797013 systemd-networkd[1337]: veth4228b07a: Link UP
May 13 23:59:31.810537 kernel: cni0: port 1(veth4228b07a) entered blocking state
May 13 23:59:31.810610 kernel: cni0: port 1(veth4228b07a) entered forwarding state
May 13 23:59:31.810475 systemd-networkd[1337]: veth4228b07a: Gained carrier
May 13 23:59:31.810962 systemd-networkd[1337]: cni0: Gained carrier
May 13 23:59:31.813148 containerd[1733]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"}
May 13 23:59:31.813148 containerd[1733]: delegateAdd: netconf sent to delegate plugin:
May 13 23:59:31.871189 containerd[1733]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T23:59:31.870583785Z" level=info msg="connecting to shim 8ef462769900e85f1d111ad6e9dcef0381b6cb1257a7f364b16ec9f93f81390d" address="unix:///run/containerd/s/635b4bb6e7ce4d4066a16335e877e059c97961dde12da0c7f1f8506ce6132f67" namespace=k8s.io protocol=ttrpc version=3
May 13 23:59:31.900222 systemd[1]: Started cri-containerd-8ef462769900e85f1d111ad6e9dcef0381b6cb1257a7f364b16ec9f93f81390d.scope - libcontainer container 8ef462769900e85f1d111ad6e9dcef0381b6cb1257a7f364b16ec9f93f81390d.
May 13 23:59:31.945420 containerd[1733]: time="2025-05-13T23:59:31.945291961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c8frs,Uid:6ee52719-6acd-4834-917b-af4b911361fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ef462769900e85f1d111ad6e9dcef0381b6cb1257a7f364b16ec9f93f81390d\""
May 13 23:59:31.949159 containerd[1733]: time="2025-05-13T23:59:31.949125547Z" level=info msg="CreateContainer within sandbox \"8ef462769900e85f1d111ad6e9dcef0381b6cb1257a7f364b16ec9f93f81390d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:59:31.970991 containerd[1733]: time="2025-05-13T23:59:31.970218620Z" level=info msg="Container d0557d6d8d2b2dce5aaf47d6f46a4827c1287561c1dc6b059d1b902fed74b30e: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:31.985687 containerd[1733]: time="2025-05-13T23:59:31.985573664Z" level=info msg="CreateContainer within sandbox \"8ef462769900e85f1d111ad6e9dcef0381b6cb1257a7f364b16ec9f93f81390d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0557d6d8d2b2dce5aaf47d6f46a4827c1287561c1dc6b059d1b902fed74b30e\""
May 13 23:59:31.987616 containerd[1733]: time="2025-05-13T23:59:31.987343304Z" level=info msg="StartContainer for \"d0557d6d8d2b2dce5aaf47d6f46a4827c1287561c1dc6b059d1b902fed74b30e\""
May 13 23:59:31.988331 containerd[1733]: time="2025-05-13T23:59:31.988299225Z" level=info msg="connecting to shim d0557d6d8d2b2dce5aaf47d6f46a4827c1287561c1dc6b059d1b902fed74b30e" address="unix:///run/containerd/s/635b4bb6e7ce4d4066a16335e877e059c97961dde12da0c7f1f8506ce6132f67" protocol=ttrpc version=3
May 13 23:59:32.005227 systemd[1]: Started cri-containerd-d0557d6d8d2b2dce5aaf47d6f46a4827c1287561c1dc6b059d1b902fed74b30e.scope - libcontainer container d0557d6d8d2b2dce5aaf47d6f46a4827c1287561c1dc6b059d1b902fed74b30e.
May 13 23:59:32.039782 containerd[1733]: time="2025-05-13T23:59:32.039733779Z" level=info msg="StartContainer for \"d0557d6d8d2b2dce5aaf47d6f46a4827c1287561c1dc6b059d1b902fed74b30e\" returns successfully"
May 13 23:59:32.724949 containerd[1733]: time="2025-05-13T23:59:32.724595036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7ftgn,Uid:62816060-b280-4dcd-aa68-526653ee8c94,Namespace:kube-system,Attempt:0,}"
May 13 23:59:32.742611 systemd-networkd[1337]: vethfb5117db: Link UP
May 13 23:59:32.746024 kernel: cni0: port 2(vethfb5117db) entered blocking state
May 13 23:59:32.746159 kernel: cni0: port 2(vethfb5117db) entered disabled state
May 13 23:59:32.746192 kernel: vethfb5117db: entered allmulticast mode
May 13 23:59:32.749041 kernel: vethfb5117db: entered promiscuous mode
May 13 23:59:32.749155 kernel: cni0: port 2(vethfb5117db) entered blocking state
May 13 23:59:32.752967 kernel: cni0: port 2(vethfb5117db) entered forwarding state
May 13 23:59:32.754989 kernel: cni0: port 2(vethfb5117db) entered disabled state
May 13 23:59:32.763054 kernel: cni0: port 2(vethfb5117db) entered blocking state
May 13 23:59:32.763172 kernel: cni0: port 2(vethfb5117db) entered forwarding state
May 13 23:59:32.763657 systemd-networkd[1337]: vethfb5117db: Gained carrier
May 13 23:59:32.768137 containerd[1733]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c678), "name":"cbr0", "type":"bridge"}
May 13 23:59:32.768137 containerd[1733]: delegateAdd: netconf sent to delegate plugin:
May 13 23:59:32.822927 containerd[1733]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T23:59:32.822693936Z" level=info msg="connecting to shim 24600aa77a9f5442968ed5cb9518bf8ad918076b5d9126606a3b13cfaa1f2721" address="unix:///run/containerd/s/505257088ba5b79c6665628b5dcdd0ad351729be56ab4d704514c522b1432799" namespace=k8s.io protocol=ttrpc version=3
May 13 23:59:32.851719 kubelet[3243]: I0513 23:59:32.851136 3243 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-gz9mf" podStartSLOduration=17.67752049 podStartE2EDuration="28.851112573s" podCreationTimestamp="2025-05-13 23:59:04 +0000 UTC" firstStartedPulling="2025-05-13 23:59:05.116912419 +0000 UTC m=+2.539319488" lastFinishedPulling="2025-05-13 23:59:16.290504402 +0000 UTC m=+13.712911571" observedRunningTime="2025-05-13 23:59:22.82010513 +0000 UTC m=+20.242512199" watchObservedRunningTime="2025-05-13 23:59:32.851112573 +0000 UTC m=+30.273519642"
May 13 23:59:32.851719 kubelet[3243]: I0513 23:59:32.851693 3243 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-c8frs" podStartSLOduration=29.851674486 podStartE2EDuration="29.851674486s" podCreationTimestamp="2025-05-13 23:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:32.850859268 +0000 UTC m=+30.273266337" watchObservedRunningTime="2025-05-13 23:59:32.851674486 +0000 UTC m=+30.274081555"
May 13 23:59:32.852329 systemd[1]: Started cri-containerd-24600aa77a9f5442968ed5cb9518bf8ad918076b5d9126606a3b13cfaa1f2721.scope - libcontainer container 24600aa77a9f5442968ed5cb9518bf8ad918076b5d9126606a3b13cfaa1f2721.
May 13 23:59:32.934932 containerd[1733]: time="2025-05-13T23:59:32.934853451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7ftgn,Uid:62816060-b280-4dcd-aa68-526653ee8c94,Namespace:kube-system,Attempt:0,} returns sandbox id \"24600aa77a9f5442968ed5cb9518bf8ad918076b5d9126606a3b13cfaa1f2721\""
May 13 23:59:32.937694 containerd[1733]: time="2025-05-13T23:59:32.937649814Z" level=info msg="CreateContainer within sandbox \"24600aa77a9f5442968ed5cb9518bf8ad918076b5d9126606a3b13cfaa1f2721\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:59:32.961093 containerd[1733]: time="2025-05-13T23:59:32.959025993Z" level=info msg="Container b45159e09315ffb75bb2cd9aff7e2a5113d605e63c65524fa7ec6f5df7470214: CDI devices from CRI Config.CDIDevices: []"
May 13 23:59:32.974180 containerd[1733]: time="2025-05-13T23:59:32.974136132Z" level=info msg="CreateContainer within sandbox \"24600aa77a9f5442968ed5cb9518bf8ad918076b5d9126606a3b13cfaa1f2721\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b45159e09315ffb75bb2cd9aff7e2a5113d605e63c65524fa7ec6f5df7470214\""
May 13 23:59:32.974710 containerd[1733]: time="2025-05-13T23:59:32.974610943Z" level=info msg="StartContainer for \"b45159e09315ffb75bb2cd9aff7e2a5113d605e63c65524fa7ec6f5df7470214\""
May 13 23:59:32.976165 containerd[1733]: time="2025-05-13T23:59:32.975580364Z" level=info msg="connecting to shim b45159e09315ffb75bb2cd9aff7e2a5113d605e63c65524fa7ec6f5df7470214" address="unix:///run/containerd/s/505257088ba5b79c6665628b5dcdd0ad351729be56ab4d704514c522b1432799" protocol=ttrpc version=3
May 13 23:59:32.998228 systemd[1]: Started cri-containerd-b45159e09315ffb75bb2cd9aff7e2a5113d605e63c65524fa7ec6f5df7470214.scope - libcontainer container b45159e09315ffb75bb2cd9aff7e2a5113d605e63c65524fa7ec6f5df7470214.
May 13 23:59:33.036193 containerd[1733]: time="2025-05-13T23:59:33.034865794Z" level=info msg="StartContainer for \"b45159e09315ffb75bb2cd9aff7e2a5113d605e63c65524fa7ec6f5df7470214\" returns successfully"
May 13 23:59:33.436374 systemd-networkd[1337]: veth4228b07a: Gained IPv6LL
May 13 23:59:33.692236 systemd-networkd[1337]: cni0: Gained IPv6LL
May 13 23:59:34.460221 systemd-networkd[1337]: vethfb5117db: Gained IPv6LL
May 13 23:59:37.616726 kubelet[3243]: I0513 23:59:37.616656 3243 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7ftgn" podStartSLOduration=34.616634438 podStartE2EDuration="34.616634438s" podCreationTimestamp="2025-05-13 23:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:59:33.839053227 +0000 UTC m=+31.261460396" watchObservedRunningTime="2025-05-13 23:59:37.616634438 +0000 UTC m=+35.039041507"
May 14 00:00:02.776040 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
May 14 00:00:02.817721 systemd[1]: logrotate.service: Deactivated successfully.
May 14 00:00:14.223777 systemd[1]: Started sshd@5-10.200.8.37:22-10.200.16.10:53950.service - OpenSSH per-connection server daemon (10.200.16.10:53950).
May 14 00:00:14.863040 sshd[4284]: Accepted publickey for core from 10.200.16.10 port 53950 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:00:14.864734 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:14.870775 systemd-logind[1711]: New session 8 of user core.
May 14 00:00:14.876223 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 00:00:15.384659 sshd[4286]: Connection closed by 10.200.16.10 port 53950
May 14 00:00:15.385114 sshd-session[4284]: pam_unix(sshd:session): session closed for user core
May 14 00:00:15.388836 systemd[1]: sshd@5-10.200.8.37:22-10.200.16.10:53950.service: Deactivated successfully.
May 14 00:00:15.391228 systemd[1]: session-8.scope: Deactivated successfully.
May 14 00:00:15.393586 systemd-logind[1711]: Session 8 logged out. Waiting for processes to exit.
May 14 00:00:15.395473 systemd-logind[1711]: Removed session 8.
May 14 00:00:20.498337 systemd[1]: Started sshd@6-10.200.8.37:22-10.200.16.10:32868.service - OpenSSH per-connection server daemon (10.200.16.10:32868).
May 14 00:00:21.135216 sshd[4320]: Accepted publickey for core from 10.200.16.10 port 32868 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:00:21.136878 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:21.142713 systemd-logind[1711]: New session 9 of user core.
May 14 00:00:21.149234 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 00:00:21.634730 sshd[4322]: Connection closed by 10.200.16.10 port 32868
May 14 00:00:21.635601 sshd-session[4320]: pam_unix(sshd:session): session closed for user core
May 14 00:00:21.639259 systemd[1]: sshd@6-10.200.8.37:22-10.200.16.10:32868.service: Deactivated successfully.
May 14 00:00:21.642006 systemd[1]: session-9.scope: Deactivated successfully.
May 14 00:00:21.644220 systemd-logind[1711]: Session 9 logged out. Waiting for processes to exit.
May 14 00:00:21.645334 systemd-logind[1711]: Removed session 9.
May 14 00:00:26.748426 systemd[1]: Started sshd@7-10.200.8.37:22-10.200.16.10:32878.service - OpenSSH per-connection server daemon (10.200.16.10:32878).
May 14 00:00:27.381940 sshd[4355]: Accepted publickey for core from 10.200.16.10 port 32878 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:00:27.383777 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:27.388964 systemd-logind[1711]: New session 10 of user core.
May 14 00:00:27.394230 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 00:00:27.885176 sshd[4357]: Connection closed by 10.200.16.10 port 32878
May 14 00:00:27.886029 sshd-session[4355]: pam_unix(sshd:session): session closed for user core
May 14 00:00:27.889588 systemd[1]: sshd@7-10.200.8.37:22-10.200.16.10:32878.service: Deactivated successfully.
May 14 00:00:27.891939 systemd[1]: session-10.scope: Deactivated successfully.
May 14 00:00:27.893655 systemd-logind[1711]: Session 10 logged out. Waiting for processes to exit.
May 14 00:00:27.894798 systemd-logind[1711]: Removed session 10.
May 14 00:00:32.997436 systemd[1]: Started sshd@8-10.200.8.37:22-10.200.16.10:43216.service - OpenSSH per-connection server daemon (10.200.16.10:43216).
May 14 00:00:33.635241 sshd[4391]: Accepted publickey for core from 10.200.16.10 port 43216 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:00:33.636983 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:33.641420 systemd-logind[1711]: New session 11 of user core.
May 14 00:00:33.650228 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 00:00:34.138036 sshd[4414]: Connection closed by 10.200.16.10 port 43216
May 14 00:00:34.138901 sshd-session[4391]: pam_unix(sshd:session): session closed for user core
May 14 00:00:34.143494 systemd[1]: sshd@8-10.200.8.37:22-10.200.16.10:43216.service: Deactivated successfully.
May 14 00:00:34.145981 systemd[1]: session-11.scope: Deactivated successfully.
May 14 00:00:34.147016 systemd-logind[1711]: Session 11 logged out. Waiting for processes to exit.
May 14 00:00:34.148311 systemd-logind[1711]: Removed session 11.
May 14 00:00:34.251573 systemd[1]: Started sshd@9-10.200.8.37:22-10.200.16.10:43224.service - OpenSSH per-connection server daemon (10.200.16.10:43224).
May 14 00:00:34.888363 sshd[4429]: Accepted publickey for core from 10.200.16.10 port 43224 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:00:34.890155 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:34.895141 systemd-logind[1711]: New session 12 of user core.
May 14 00:00:34.901277 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 00:00:35.430432 sshd[4433]: Connection closed by 10.200.16.10 port 43224
May 14 00:00:35.431353 sshd-session[4429]: pam_unix(sshd:session): session closed for user core
May 14 00:00:35.436163 systemd[1]: sshd@9-10.200.8.37:22-10.200.16.10:43224.service: Deactivated successfully.
May 14 00:00:35.438622 systemd[1]: session-12.scope: Deactivated successfully.
May 14 00:00:35.439669 systemd-logind[1711]: Session 12 logged out. Waiting for processes to exit.
May 14 00:00:35.440750 systemd-logind[1711]: Removed session 12.
May 14 00:00:35.542730 systemd[1]: Started sshd@10-10.200.8.37:22-10.200.16.10:43232.service - OpenSSH per-connection server daemon (10.200.16.10:43232).
May 14 00:00:36.182132 sshd[4443]: Accepted publickey for core from 10.200.16.10 port 43232 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:00:36.183616 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:36.187972 systemd-logind[1711]: New session 13 of user core.
May 14 00:00:36.192252 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 00:00:36.683972 sshd[4445]: Connection closed by 10.200.16.10 port 43232
May 14 00:00:36.684838 sshd-session[4443]: pam_unix(sshd:session): session closed for user core
May 14 00:00:36.689588 systemd[1]: sshd@10-10.200.8.37:22-10.200.16.10:43232.service: Deactivated successfully.
May 14 00:00:36.692024 systemd[1]: session-13.scope: Deactivated successfully.
May 14 00:00:36.693083 systemd-logind[1711]: Session 13 logged out. Waiting for processes to exit.
May 14 00:00:36.694063 systemd-logind[1711]: Removed session 13.
May 14 00:00:41.796004 systemd[1]: Started sshd@11-10.200.8.37:22-10.200.16.10:55646.service - OpenSSH per-connection server daemon (10.200.16.10:55646).
May 14 00:00:42.429477 sshd[4478]: Accepted publickey for core from 10.200.16.10 port 55646 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:00:42.431191 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:42.436335 systemd-logind[1711]: New session 14 of user core.
May 14 00:00:42.443240 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 00:00:42.940910 sshd[4480]: Connection closed by 10.200.16.10 port 55646
May 14 00:00:42.941841 sshd-session[4478]: pam_unix(sshd:session): session closed for user core
May 14 00:00:42.946320 systemd[1]: sshd@11-10.200.8.37:22-10.200.16.10:55646.service: Deactivated successfully.
May 14 00:00:42.948666 systemd[1]: session-14.scope: Deactivated successfully.
May 14 00:00:42.949596 systemd-logind[1711]: Session 14 logged out. Waiting for processes to exit.
May 14 00:00:42.950656 systemd-logind[1711]: Removed session 14.
May 14 00:00:48.055165 systemd[1]: Started sshd@12-10.200.8.37:22-10.200.16.10:55650.service - OpenSSH per-connection server daemon (10.200.16.10:55650).
May 14 00:00:48.689526 sshd[4513]: Accepted publickey for core from 10.200.16.10 port 55650 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:00:48.691335 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:48.696600 systemd-logind[1711]: New session 15 of user core.
May 14 00:00:48.701214 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 00:00:49.189893 sshd[4536]: Connection closed by 10.200.16.10 port 55650
May 14 00:00:49.190752 sshd-session[4513]: pam_unix(sshd:session): session closed for user core
May 14 00:00:49.194469 systemd[1]: sshd@12-10.200.8.37:22-10.200.16.10:55650.service: Deactivated successfully.
May 14 00:00:49.197145 systemd[1]: session-15.scope: Deactivated successfully.
May 14 00:00:49.198779 systemd-logind[1711]: Session 15 logged out. Waiting for processes to exit.
May 14 00:00:49.199854 systemd-logind[1711]: Removed session 15.
May 14 00:00:54.303181 systemd[1]: Started sshd@13-10.200.8.37:22-10.200.16.10:35394.service - OpenSSH per-connection server daemon (10.200.16.10:35394).
May 14 00:00:54.935285 sshd[4569]: Accepted publickey for core from 10.200.16.10 port 35394 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:00:54.936769 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:54.941375 systemd-logind[1711]: New session 16 of user core.
May 14 00:00:54.949207 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 00:00:55.434632 sshd[4571]: Connection closed by 10.200.16.10 port 35394
May 14 00:00:55.435508 sshd-session[4569]: pam_unix(sshd:session): session closed for user core
May 14 00:00:55.439540 systemd[1]: sshd@13-10.200.8.37:22-10.200.16.10:35394.service: Deactivated successfully.
May 14 00:00:55.441671 systemd[1]: session-16.scope: Deactivated successfully.
May 14 00:00:55.442707 systemd-logind[1711]: Session 16 logged out. Waiting for processes to exit.
May 14 00:00:55.443737 systemd-logind[1711]: Removed session 16.
May 14 00:01:00.547288 systemd[1]: Started sshd@14-10.200.8.37:22-10.200.16.10:44334.service - OpenSSH per-connection server daemon (10.200.16.10:44334).
May 14 00:01:01.181193 sshd[4604]: Accepted publickey for core from 10.200.16.10 port 44334 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:01.182928 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:01.189300 systemd-logind[1711]: New session 17 of user core.
May 14 00:01:01.194250 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 00:01:01.683559 sshd[4606]: Connection closed by 10.200.16.10 port 44334
May 14 00:01:01.684465 sshd-session[4604]: pam_unix(sshd:session): session closed for user core
May 14 00:01:01.689610 systemd[1]: sshd@14-10.200.8.37:22-10.200.16.10:44334.service: Deactivated successfully.
May 14 00:01:01.691877 systemd[1]: session-17.scope: Deactivated successfully.
May 14 00:01:01.692833 systemd-logind[1711]: Session 17 logged out. Waiting for processes to exit.
May 14 00:01:01.693753 systemd-logind[1711]: Removed session 17.
May 14 00:01:01.795217 systemd[1]: Started sshd@15-10.200.8.37:22-10.200.16.10:44340.service - OpenSSH per-connection server daemon (10.200.16.10:44340).
May 14 00:01:02.437322 sshd[4618]: Accepted publickey for core from 10.200.16.10 port 44340 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:02.438802 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:02.443652 systemd-logind[1711]: New session 18 of user core.
May 14 00:01:02.454242 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 00:01:03.094195 sshd[4620]: Connection closed by 10.200.16.10 port 44340
May 14 00:01:03.094948 sshd-session[4618]: pam_unix(sshd:session): session closed for user core
May 14 00:01:03.097993 systemd[1]: sshd@15-10.200.8.37:22-10.200.16.10:44340.service: Deactivated successfully.
May 14 00:01:03.100576 systemd[1]: session-18.scope: Deactivated successfully.
May 14 00:01:03.103719 systemd-logind[1711]: Session 18 logged out. Waiting for processes to exit.
May 14 00:01:03.105331 systemd-logind[1711]: Removed session 18.
May 14 00:01:03.207548 systemd[1]: Started sshd@16-10.200.8.37:22-10.200.16.10:44342.service - OpenSSH per-connection server daemon (10.200.16.10:44342).
May 14 00:01:03.844700 sshd[4632]: Accepted publickey for core from 10.200.16.10 port 44342 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:03.846428 sshd-session[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:03.850822 systemd-logind[1711]: New session 19 of user core.
May 14 00:01:03.859236 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 00:01:05.685454 sshd[4655]: Connection closed by 10.200.16.10 port 44342
May 14 00:01:05.686303 sshd-session[4632]: pam_unix(sshd:session): session closed for user core
May 14 00:01:05.689624 systemd[1]: sshd@16-10.200.8.37:22-10.200.16.10:44342.service: Deactivated successfully.
May 14 00:01:05.691974 systemd[1]: session-19.scope: Deactivated successfully.
May 14 00:01:05.693884 systemd-logind[1711]: Session 19 logged out. Waiting for processes to exit.
May 14 00:01:05.695176 systemd-logind[1711]: Removed session 19.
May 14 00:01:05.795413 systemd[1]: Started sshd@17-10.200.8.37:22-10.200.16.10:44354.service - OpenSSH per-connection server daemon (10.200.16.10:44354).
May 14 00:01:06.428482 sshd[4674]: Accepted publickey for core from 10.200.16.10 port 44354 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:06.430263 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:06.435452 systemd-logind[1711]: New session 20 of user core.
May 14 00:01:06.443211 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 00:01:07.033650 sshd[4676]: Connection closed by 10.200.16.10 port 44354
May 14 00:01:07.034497 sshd-session[4674]: pam_unix(sshd:session): session closed for user core
May 14 00:01:07.037583 systemd[1]: sshd@17-10.200.8.37:22-10.200.16.10:44354.service: Deactivated successfully.
May 14 00:01:07.039953 systemd[1]: session-20.scope: Deactivated successfully.
May 14 00:01:07.041884 systemd-logind[1711]: Session 20 logged out. Waiting for processes to exit.
May 14 00:01:07.043152 systemd-logind[1711]: Removed session 20.
May 14 00:01:07.145292 systemd[1]: Started sshd@18-10.200.8.37:22-10.200.16.10:44364.service - OpenSSH per-connection server daemon (10.200.16.10:44364).
May 14 00:01:07.779175 sshd[4686]: Accepted publickey for core from 10.200.16.10 port 44364 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:07.780866 sshd-session[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:07.785587 systemd-logind[1711]: New session 21 of user core.
May 14 00:01:07.791243 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 00:01:08.279828 sshd[4688]: Connection closed by 10.200.16.10 port 44364
May 14 00:01:08.280714 sshd-session[4686]: pam_unix(sshd:session): session closed for user core
May 14 00:01:08.285174 systemd[1]: sshd@18-10.200.8.37:22-10.200.16.10:44364.service: Deactivated successfully.
May 14 00:01:08.287279 systemd[1]: session-21.scope: Deactivated successfully.
May 14 00:01:08.288422 systemd-logind[1711]: Session 21 logged out. Waiting for processes to exit.
May 14 00:01:08.289495 systemd-logind[1711]: Removed session 21.
May 14 00:01:13.396972 systemd[1]: Started sshd@19-10.200.8.37:22-10.200.16.10:36536.service - OpenSSH per-connection server daemon (10.200.16.10:36536).
May 14 00:01:14.033911 sshd[4721]: Accepted publickey for core from 10.200.16.10 port 36536 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:14.035420 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:14.041093 systemd-logind[1711]: New session 22 of user core.
May 14 00:01:14.046246 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 00:01:14.543422 sshd[4744]: Connection closed by 10.200.16.10 port 36536
May 14 00:01:14.544380 sshd-session[4721]: pam_unix(sshd:session): session closed for user core
May 14 00:01:14.547715 systemd[1]: sshd@19-10.200.8.37:22-10.200.16.10:36536.service: Deactivated successfully.
May 14 00:01:14.549961 systemd[1]: session-22.scope: Deactivated successfully.
May 14 00:01:14.551793 systemd-logind[1711]: Session 22 logged out. Waiting for processes to exit.
May 14 00:01:14.552708 systemd-logind[1711]: Removed session 22.
May 14 00:01:19.656090 systemd[1]: Started sshd@20-10.200.8.37:22-10.200.16.10:37996.service - OpenSSH per-connection server daemon (10.200.16.10:37996).
May 14 00:01:20.291705 sshd[4776]: Accepted publickey for core from 10.200.16.10 port 37996 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:20.293349 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:20.298097 systemd-logind[1711]: New session 23 of user core.
May 14 00:01:20.304250 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 00:01:20.790598 sshd[4778]: Connection closed by 10.200.16.10 port 37996
May 14 00:01:20.791383 sshd-session[4776]: pam_unix(sshd:session): session closed for user core
May 14 00:01:20.794394 systemd[1]: sshd@20-10.200.8.37:22-10.200.16.10:37996.service: Deactivated successfully.
May 14 00:01:20.796675 systemd[1]: session-23.scope: Deactivated successfully.
May 14 00:01:20.798242 systemd-logind[1711]: Session 23 logged out. Waiting for processes to exit.
May 14 00:01:20.799369 systemd-logind[1711]: Removed session 23.
May 14 00:01:25.915176 systemd[1]: Started sshd@21-10.200.8.37:22-10.200.16.10:38004.service - OpenSSH per-connection server daemon (10.200.16.10:38004).
May 14 00:01:26.548994 sshd[4814]: Accepted publickey for core from 10.200.16.10 port 38004 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:26.550593 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:26.554876 systemd-logind[1711]: New session 24 of user core.
May 14 00:01:26.564217 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 00:01:27.044303 sshd[4816]: Connection closed by 10.200.16.10 port 38004
May 14 00:01:27.045126 sshd-session[4814]: pam_unix(sshd:session): session closed for user core
May 14 00:01:27.050855 systemd[1]: sshd@21-10.200.8.37:22-10.200.16.10:38004.service: Deactivated successfully.
May 14 00:01:27.053388 systemd[1]: session-24.scope: Deactivated successfully.
May 14 00:01:27.054323 systemd-logind[1711]: Session 24 logged out. Waiting for processes to exit.
May 14 00:01:27.055394 systemd-logind[1711]: Removed session 24.
May 14 00:01:32.156532 systemd[1]: Started sshd@22-10.200.8.37:22-10.200.16.10:55210.service - OpenSSH per-connection server daemon (10.200.16.10:55210).
May 14 00:01:32.793146 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 55210 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:32.794556 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:32.799181 systemd-logind[1711]: New session 25 of user core.
May 14 00:01:32.804231 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 00:01:33.291370 sshd[4850]: Connection closed by 10.200.16.10 port 55210
May 14 00:01:33.292260 sshd-session[4848]: pam_unix(sshd:session): session closed for user core
May 14 00:01:33.295667 systemd[1]: sshd@22-10.200.8.37:22-10.200.16.10:55210.service: Deactivated successfully.
May 14 00:01:33.297752 systemd[1]: session-25.scope: Deactivated successfully.
May 14 00:01:33.299394 systemd-logind[1711]: Session 25 logged out. Waiting for processes to exit.
May 14 00:01:33.300506 systemd-logind[1711]: Removed session 25.
May 14 00:01:38.404304 systemd[1]: Started sshd@23-10.200.8.37:22-10.200.16.10:55226.service - OpenSSH per-connection server daemon (10.200.16.10:55226).
May 14 00:01:39.042374 sshd[4885]: Accepted publickey for core from 10.200.16.10 port 55226 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:39.043947 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:39.048464 systemd-logind[1711]: New session 26 of user core.
May 14 00:01:39.052246 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 00:01:39.541153 sshd[4908]: Connection closed by 10.200.16.10 port 55226
May 14 00:01:39.542015 sshd-session[4885]: pam_unix(sshd:session): session closed for user core
May 14 00:01:39.546650 systemd[1]: sshd@23-10.200.8.37:22-10.200.16.10:55226.service: Deactivated successfully.
May 14 00:01:39.548829 systemd[1]: session-26.scope: Deactivated successfully.
May 14 00:01:39.549677 systemd-logind[1711]: Session 26 logged out. Waiting for processes to exit.
May 14 00:01:39.550826 systemd-logind[1711]: Removed session 26.