Apr 30 12:50:16.039900 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025 Apr 30 12:50:16.039930 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:50:16.039944 kernel: BIOS-provided physical RAM map: Apr 30 12:50:16.039957 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 30 12:50:16.039973 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Apr 30 12:50:16.039989 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Apr 30 12:50:16.040005 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Apr 30 12:50:16.040020 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Apr 30 12:50:16.040042 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Apr 30 12:50:16.040054 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Apr 30 12:50:16.040070 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Apr 30 12:50:16.040085 kernel: printk: bootconsole [earlyser0] enabled Apr 30 12:50:16.040099 kernel: NX (Execute Disable) protection: active Apr 30 12:50:16.040112 kernel: APIC: Static calls initialized Apr 30 12:50:16.040130 kernel: efi: EFI v2.7 by Microsoft Apr 30 12:50:16.040162 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 RNG=0x3ffd1018 Apr 30 12:50:16.040178 kernel: random: crng init done Apr 30 12:50:16.040194 kernel: secureboot: Secure boot disabled Apr 30 12:50:16.040210 kernel: SMBIOS 3.1.0 present. 
Apr 30 12:50:16.040227 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Apr 30 12:50:16.040243 kernel: Hypervisor detected: Microsoft Hyper-V Apr 30 12:50:16.040258 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Apr 30 12:50:16.040273 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 Apr 30 12:50:16.040287 kernel: Hyper-V: Nested features: 0x1e0101 Apr 30 12:50:16.040305 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Apr 30 12:50:16.040318 kernel: Hyper-V: Using hypercall for remote TLB flush Apr 30 12:50:16.040335 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 30 12:50:16.040352 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Apr 30 12:50:16.040364 kernel: tsc: Marking TSC unstable due to running on Hyper-V Apr 30 12:50:16.040377 kernel: tsc: Detected 2593.905 MHz processor Apr 30 12:50:16.040393 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 12:50:16.040408 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 12:50:16.040422 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Apr 30 12:50:16.040442 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 30 12:50:16.040458 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 12:50:16.040474 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Apr 30 12:50:16.040487 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Apr 30 12:50:16.040502 kernel: Using GB pages for direct mapping Apr 30 12:50:16.040516 kernel: ACPI: Early table checksum verification disabled Apr 30 12:50:16.040532 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Apr 30 12:50:16.040558 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 12:50:16.040581 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 12:50:16.040597 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Apr 30 12:50:16.040612 kernel: ACPI: FACS 0x000000003FFFE000 000040 Apr 30 12:50:16.040627 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 12:50:16.040641 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 12:50:16.040653 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 12:50:16.040673 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 12:50:16.040689 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 12:50:16.040705 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 12:50:16.040723 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 30 12:50:16.040740 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Apr 30 12:50:16.040757 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Apr 30 12:50:16.040778 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Apr 30 12:50:16.040795 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Apr 30 12:50:16.040810 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Apr 30 12:50:16.040827 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Apr 30 12:50:16.040843 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Apr 30 12:50:16.040859 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Apr 30 12:50:16.040876 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Apr 30 12:50:16.040894 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Apr 30 12:50:16.040913 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 12:50:16.040930 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 12:50:16.040946 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Apr 30 12:50:16.040964 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Apr 30 12:50:16.040985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Apr 30 12:50:16.041000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Apr 30 12:50:16.041015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Apr 30 12:50:16.041032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Apr 30 12:50:16.041047 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Apr 30 12:50:16.041062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Apr 30 12:50:16.041076 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Apr 30 12:50:16.041092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Apr 30 12:50:16.041112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Apr 30 12:50:16.041126 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Apr 30 12:50:16.041141 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Apr 30 12:50:16.041171 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Apr 30 12:50:16.041186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Apr 30 12:50:16.041204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Apr 30 12:50:16.041219 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Apr 30 12:50:16.041234 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Apr 30 12:50:16.041251 kernel: Zone ranges: Apr 30 12:50:16.041270 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 12:50:16.041287 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 30 12:50:16.041305 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Apr 30 12:50:16.041321 kernel: Movable zone start for each node Apr 30 12:50:16.041336 kernel: Early memory node ranges Apr 30 12:50:16.041351 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 30 12:50:16.041363 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Apr 30 12:50:16.041374 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Apr 30 12:50:16.041386 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Apr 30 12:50:16.041400 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Apr 30 12:50:16.041413 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 12:50:16.041426 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 30 12:50:16.041438 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Apr 30 12:50:16.041452 kernel: ACPI: 
PM-Timer IO Port: 0x408 Apr 30 12:50:16.041466 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Apr 30 12:50:16.041481 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Apr 30 12:50:16.041496 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 12:50:16.041511 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 12:50:16.041529 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Apr 30 12:50:16.041542 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 12:50:16.041556 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Apr 30 12:50:16.041569 kernel: Booting paravirtualized kernel on Hyper-V Apr 30 12:50:16.041590 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 12:50:16.041603 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 12:50:16.041616 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 12:50:16.041628 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 12:50:16.041640 kernel: pcpu-alloc: [0] 0 1 Apr 30 12:50:16.041657 kernel: Hyper-V: PV spinlocks enabled Apr 30 12:50:16.041670 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 12:50:16.041686 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:50:16.041700 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 12:50:16.041714 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 30 12:50:16.041728 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 12:50:16.041742 kernel: Fallback order for Node 0: 0 Apr 30 12:50:16.041756 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Apr 30 12:50:16.041773 kernel: Policy zone: Normal Apr 30 12:50:16.041797 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 12:50:16.041811 kernel: software IO TLB: area num 2. Apr 30 12:50:16.041828 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 312164K reserved, 0K cma-reserved) Apr 30 12:50:16.041842 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 12:50:16.041857 kernel: ftrace: allocating 37918 entries in 149 pages Apr 30 12:50:16.041871 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 12:50:16.041885 kernel: Dynamic Preempt: voluntary Apr 30 12:50:16.041898 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 12:50:16.041912 kernel: rcu: RCU event tracing is enabled. Apr 30 12:50:16.041927 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 12:50:16.041943 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 12:50:16.041958 kernel: Rude variant of Tasks RCU enabled. Apr 30 12:50:16.041971 kernel: Tracing variant of Tasks RCU enabled. Apr 30 12:50:16.041985 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 30 12:50:16.041999 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 12:50:16.042012 kernel: Using NULL legacy PIC Apr 30 12:50:16.042030 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Apr 30 12:50:16.042044 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 12:50:16.042057 kernel: Console: colour dummy device 80x25 Apr 30 12:50:16.042072 kernel: printk: console [tty1] enabled Apr 30 12:50:16.042086 kernel: printk: console [ttyS0] enabled Apr 30 12:50:16.042101 kernel: printk: bootconsole [earlyser0] disabled Apr 30 12:50:16.042115 kernel: ACPI: Core revision 20230628 Apr 30 12:50:16.042130 kernel: Failed to register legacy timer interrupt Apr 30 12:50:16.042158 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 12:50:16.042187 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 30 12:50:16.042203 kernel: Hyper-V: Using IPI hypercalls Apr 30 12:50:16.042217 kernel: APIC: send_IPI() replaced with hv_send_ipi() Apr 30 12:50:16.042232 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Apr 30 12:50:16.042247 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Apr 30 12:50:16.042261 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Apr 30 12:50:16.042276 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Apr 30 12:50:16.042291 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Apr 30 12:50:16.042306 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Apr 30 12:50:16.042326 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 12:50:16.042341 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 12:50:16.042356 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 12:50:16.042371 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 12:50:16.042386 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 12:50:16.042400 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 12:50:16.042415 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 30 12:50:16.042430 kernel: RETBleed: Vulnerable Apr 30 12:50:16.042444 kernel: Speculative Store Bypass: Vulnerable Apr 30 12:50:16.042458 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 12:50:16.042477 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 12:50:16.042491 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 12:50:16.042505 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 12:50:16.042519 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 12:50:16.042534 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 30 12:50:16.042549 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 30 12:50:16.042563 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 30 12:50:16.042578 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 12:50:16.042593 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 30 12:50:16.042608 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 30 12:50:16.042622 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 30 12:50:16.042644 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 30 12:50:16.042659 kernel: Freeing SMP alternatives memory: 32K Apr 30 12:50:16.042674 kernel: pid_max: default: 32768 minimum: 301 Apr 30 12:50:16.042689 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 12:50:16.042703 kernel: landlock: Up and running. Apr 30 12:50:16.042718 kernel: SELinux: Initializing. Apr 30 12:50:16.042733 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 12:50:16.042748 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 12:50:16.042763 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 30 12:50:16.042778 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:50:16.042793 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:50:16.042812 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:50:16.042827 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 30 12:50:16.042843 kernel: signal: max sigframe size: 3632 Apr 30 12:50:16.042857 kernel: rcu: Hierarchical SRCU implementation. Apr 30 12:50:16.042873 kernel: rcu: Max phase no-delay instances is 400. Apr 30 12:50:16.042887 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 12:50:16.042901 kernel: smp: Bringing up secondary CPUs ... Apr 30 12:50:16.042916 kernel: smpboot: x86: Booting SMP configuration: Apr 30 12:50:16.042930 kernel: .... node #0, CPUs: #1 Apr 30 12:50:16.042949 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Apr 30 12:50:16.042965 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Apr 30 12:50:16.042979 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 12:50:16.042993 kernel: smpboot: Max logical packages: 1 Apr 30 12:50:16.043008 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Apr 30 12:50:16.043021 kernel: devtmpfs: initialized Apr 30 12:50:16.043035 kernel: x86/mm: Memory block size: 128MB Apr 30 12:50:16.043049 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Apr 30 12:50:16.043067 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 12:50:16.043081 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 12:50:16.043095 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 12:50:16.043109 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 12:50:16.043123 kernel: audit: initializing netlink subsys (disabled) Apr 30 12:50:16.043136 kernel: audit: type=2000 audit(1746017415.027:1): state=initialized audit_enabled=0 res=1 Apr 30 12:50:16.043247 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 12:50:16.043263 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 12:50:16.043279 kernel: cpuidle: using governor menu Apr 30 12:50:16.043298 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 12:50:16.043313 kernel: dca service started, version 1.12.1 Apr 30 12:50:16.043328 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Apr 30 12:50:16.043343 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 30 12:50:16.043359 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 12:50:16.043375 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 12:50:16.043390 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 12:50:16.043404 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 12:50:16.043420 kernel: ACPI: Added _OSI(Module Device) Apr 30 12:50:16.043438 kernel: ACPI: Added _OSI(Processor Device) Apr 30 12:50:16.043454 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 12:50:16.043468 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 12:50:16.043482 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 12:50:16.043497 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 12:50:16.043510 kernel: ACPI: Interpreter enabled Apr 30 12:50:16.043525 kernel: ACPI: PM: (supports S0 S5) Apr 30 12:50:16.043538 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 12:50:16.043552 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 12:50:16.043568 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 30 12:50:16.043582 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Apr 30 12:50:16.043595 kernel: iommu: Default domain type: Translated Apr 30 12:50:16.043609 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 12:50:16.043624 kernel: efivars: Registered efivars operations Apr 30 12:50:16.043639 kernel: PCI: Using ACPI for IRQ routing Apr 30 12:50:16.043653 kernel: PCI: System does not support PCI Apr 30 12:50:16.043668 kernel: vgaarb: loaded Apr 30 12:50:16.043683 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Apr 30 12:50:16.043702 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 12:50:16.043716 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 12:50:16.043730 kernel: 
pnp: PnP ACPI init Apr 30 12:50:16.043744 kernel: pnp: PnP ACPI: found 3 devices Apr 30 12:50:16.043759 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 12:50:16.043773 kernel: NET: Registered PF_INET protocol family Apr 30 12:50:16.043787 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 12:50:16.043802 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 30 12:50:16.043816 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 12:50:16.043833 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 12:50:16.043847 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 12:50:16.043862 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 30 12:50:16.043875 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 12:50:16.043890 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 30 12:50:16.043904 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 12:50:16.043919 kernel: NET: Registered PF_XDP protocol family Apr 30 12:50:16.043933 kernel: PCI: CLS 0 bytes, default 64 Apr 30 12:50:16.043947 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 12:50:16.043964 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Apr 30 12:50:16.043979 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 12:50:16.043993 kernel: Initialise system trusted keyrings Apr 30 12:50:16.044007 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 30 12:50:16.044021 kernel: Key type asymmetric registered Apr 30 12:50:16.044035 kernel: Asymmetric key parser 'x509' registered Apr 30 12:50:16.044049 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 12:50:16.044063 kernel: io scheduler mq-deadline registered Apr 30 12:50:16.044077 kernel: io scheduler kyber registered Apr 30 12:50:16.044095 kernel: io scheduler bfq registered Apr 30 12:50:16.044109 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 12:50:16.044124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 12:50:16.044138 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 12:50:16.044172 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 12:50:16.044187 kernel: i8042: PNP: No PS/2 controller found. 
Apr 30 12:50:16.044366 kernel: rtc_cmos 00:02: registered as rtc0 Apr 30 12:50:16.044490 kernel: rtc_cmos 00:02: setting system clock to 2025-04-30T12:50:15 UTC (1746017415) Apr 30 12:50:16.044622 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Apr 30 12:50:16.044640 kernel: intel_pstate: CPU model not supported Apr 30 12:50:16.044656 kernel: efifb: probing for efifb Apr 30 12:50:16.044670 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 30 12:50:16.044684 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 30 12:50:16.044698 kernel: efifb: scrolling: redraw Apr 30 12:50:16.044712 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 12:50:16.044727 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 12:50:16.044742 kernel: fb0: EFI VGA frame buffer device Apr 30 12:50:16.044762 kernel: pstore: Using crash dump compression: deflate Apr 30 12:50:16.044777 kernel: pstore: Registered efi_pstore as persistent store backend Apr 30 12:50:16.044790 kernel: NET: Registered PF_INET6 protocol family Apr 30 12:50:16.044805 kernel: Segment Routing with IPv6 Apr 30 12:50:16.044820 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 12:50:16.044835 kernel: NET: Registered PF_PACKET protocol family Apr 30 12:50:16.044849 kernel: Key type dns_resolver registered Apr 30 12:50:16.044863 kernel: IPI shorthand broadcast: enabled Apr 30 12:50:16.044877 kernel: sched_clock: Marking stable (816003200, 41896500)->(1050232300, -192332600) Apr 30 12:50:16.044896 kernel: registered taskstats version 1 Apr 30 12:50:16.044910 kernel: Loading compiled-in X.509 certificates Apr 30 12:50:16.044924 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f' Apr 30 12:50:16.044939 kernel: Key type .fscrypt registered Apr 30 12:50:16.044954 kernel: Key type fscrypt-provisioning registered Apr 30 12:50:16.044967 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 12:50:16.044981 kernel: ima: Allocated hash algorithm: sha1 Apr 30 12:50:16.044995 kernel: ima: No architecture policies found Apr 30 12:50:16.045010 kernel: clk: Disabling unused clocks Apr 30 12:50:16.045028 kernel: Freeing unused kernel image (initmem) memory: 43484K Apr 30 12:50:16.045043 kernel: Write protecting the kernel read-only data: 38912k Apr 30 12:50:16.045057 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K Apr 30 12:50:16.045072 kernel: Run /init as init process Apr 30 12:50:16.045086 kernel: with arguments: Apr 30 12:50:16.045101 kernel: /init Apr 30 12:50:16.045115 kernel: with environment: Apr 30 12:50:16.045129 kernel: HOME=/ Apr 30 12:50:16.045157 kernel: TERM=linux Apr 30 12:50:16.045182 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 12:50:16.045198 systemd[1]: Successfully made /usr/ read-only. Apr 30 12:50:16.045215 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:50:16.045230 systemd[1]: Detected virtualization microsoft. Apr 30 12:50:16.045245 systemd[1]: Detected architecture x86-64. Apr 30 12:50:16.045258 systemd[1]: Running in initrd. Apr 30 12:50:16.045272 systemd[1]: No hostname configured, using default hostname. Apr 30 12:50:16.045290 systemd[1]: Hostname set to . 
Apr 30 12:50:16.045304 systemd[1]: Initializing machine ID from random generator. Apr 30 12:50:16.045319 systemd[1]: Queued start job for default target initrd.target. Apr 30 12:50:16.045334 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:50:16.045349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:50:16.045365 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 12:50:16.045380 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 12:50:16.045395 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 12:50:16.045414 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 12:50:16.045431 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 12:50:16.045446 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 12:50:16.045460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:50:16.045474 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:50:16.045488 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:50:16.045503 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:50:16.045523 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:50:16.045540 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:50:16.045555 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:50:16.045571 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:50:16.045588 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 12:50:16.045604 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 12:50:16.045622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:50:16.045641 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:50:16.045658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:50:16.045678 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:50:16.045694 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 12:50:16.045711 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 12:50:16.045726 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 12:50:16.045743 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 12:50:16.045762 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:50:16.045811 systemd-journald[177]: Collecting audit messages is disabled. Apr 30 12:50:16.045848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:50:16.045865 systemd-journald[177]: Journal started Apr 30 12:50:16.045896 systemd-journald[177]: Runtime Journal (/run/log/journal/28f124ab482c48e984aff085a20581ae) is 8M, max 158.8M, 150.8M free. Apr 30 12:50:16.059895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:50:16.065160 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 30 12:50:16.065316 systemd-modules-load[179]: Inserted module 'overlay' Apr 30 12:50:16.076676 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 12:50:16.079404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:50:16.082675 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 12:50:16.101586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 12:50:16.105701 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 30 12:50:16.108050 kernel: Bridge firewalling registered Apr 30 12:50:16.108344 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:50:16.112187 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:50:16.120712 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:50:16.123826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:50:16.136359 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:50:16.143720 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:50:16.152699 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:50:16.162331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:50:16.165793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:50:16.176082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:50:16.183339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:50:16.189035 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 12:50:16.201337 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:50:16.204767 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:50:16.219816 dracut-cmdline[212]: dracut-dracut-053 Apr 30 12:50:16.223197 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:50:16.264375 systemd-resolved[213]: Positive Trust Anchors: Apr 30 12:50:16.266856 systemd-resolved[213]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:50:16.270386 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:50:16.289042 systemd-resolved[213]: Defaulting to hostname 'linux'. Apr 30 12:50:16.290394 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:50:16.295531 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:50:16.309167 kernel: SCSI subsystem initialized Apr 30 12:50:16.319165 kernel: Loading iSCSI transport class v2.0-870. Apr 30 12:50:16.331184 kernel: iscsi: registered transport (tcp) Apr 30 12:50:16.352189 kernel: iscsi: registered transport (qla4xxx) Apr 30 12:50:16.352287 kernel: QLogic iSCSI HBA Driver Apr 30 12:50:16.388944 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 12:50:16.398287 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 12:50:16.426346 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 12:50:16.426448 kernel: device-mapper: uevent: version 1.0.3 Apr 30 12:50:16.429310 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 12:50:16.470174 kernel: raid6: avx512x4 gen() 18418 MB/s Apr 30 12:50:16.490163 kernel: raid6: avx512x2 gen() 18401 MB/s Apr 30 12:50:16.508161 kernel: raid6: avx512x1 gen() 18087 MB/s Apr 30 12:50:16.527157 kernel: raid6: avx2x4 gen() 18228 MB/s Apr 30 12:50:16.546163 kernel: raid6: avx2x2 gen() 18146 MB/s Apr 30 12:50:16.565927 kernel: raid6: avx2x1 gen() 13472 MB/s Apr 30 12:50:16.565973 kernel: raid6: using algorithm avx512x4 gen() 18418 MB/s Apr 30 12:50:16.587038 kernel: raid6: .... xor() 6613 MB/s, rmw enabled Apr 30 12:50:16.587073 kernel: raid6: using avx512x2 recovery algorithm Apr 30 12:50:16.610179 kernel: xor: automatically using best checksumming function avx Apr 30 12:50:16.752176 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 12:50:16.762100 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:50:16.770420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:50:16.788386 systemd-udevd[397]: Using default interface naming scheme 'v255'. Apr 30 12:50:16.793651 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:50:16.806320 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 12:50:16.819229 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Apr 30 12:50:16.845802 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:50:16.854304 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:50:16.895351 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:50:16.910341 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 30 12:50:16.939558 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 12:50:16.945899 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:50:16.949626 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:50:16.958923 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:50:16.969397 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 12:50:16.990269 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 12:50:17.009165 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 12:50:17.009699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:50:17.012510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:50:17.018653 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:50:17.024295 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:50:17.024626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:50:17.032389 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:50:17.042197 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 12:50:17.042251 kernel: hv_vmbus: Vmbus version:5.2 Apr 30 12:50:17.046670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:50:17.055027 kernel: AES CTR mode by8 optimization enabled Apr 30 12:50:17.062806 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:50:17.075184 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 12:50:17.076161 kernel: hv_vmbus: registering driver hid_hyperv Apr 30 12:50:17.080082 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Apr 30 12:50:17.080120 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 30 12:50:17.094678 kernel: hv_vmbus: registering driver hv_netvsc Apr 30 12:50:17.094737 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 30 12:50:17.095930 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Apr 30 12:50:17.125679 kernel: hv_vmbus: registering driver hv_storvsc Apr 30 12:50:17.125762 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 12:50:17.125783 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 12:50:17.131392 kernel: scsi host1: storvsc_host_t Apr 30 12:50:17.131657 kernel: scsi host0: storvsc_host_t Apr 30 12:50:17.139237 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 30 12:50:17.141603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:50:17.146398 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 30 12:50:17.158592 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 30 12:50:17.177192 kernel: PTP clock support registered Apr 30 12:50:17.188259 kernel: hv_utils: Registering HyperV Utility Driver Apr 30 12:50:17.188349 kernel: hv_vmbus: registering driver hv_utils Apr 30 12:50:17.195941 kernel: hv_utils: Shutdown IC version 3.2 Apr 30 12:50:17.196005 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 30 12:50:17.690241 kernel: hv_utils: Heartbeat IC version 3.0 Apr 30 12:50:17.690269 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 12:50:17.690289 kernel: hv_utils: TimeSync IC version 4.0 Apr 30 12:50:17.690311 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 30 12:50:17.680731 systemd-resolved[213]: Clock change detected. Flushing caches. Apr 30 12:50:17.687176 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:50:17.708629 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 30 12:50:17.722543 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 12:50:17.722794 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 12:50:17.723077 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 30 12:50:17.723276 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 30 12:50:17.723487 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:50:17.723514 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 12:50:17.820225 kernel: hv_netvsc 6045bd0e-99d3-6045-bd0e-99d36045bd0e eth0: VF slot 1 added Apr 30 12:50:17.828965 kernel: hv_vmbus: registering driver hv_pci Apr 30 12:50:17.833413 kernel: hv_pci 450b5a8b-f132-43e7-b1eb-93905fb05a35: PCI VMBus probing: Using version 0x10004 Apr 30 12:50:17.871765 kernel: hv_pci 450b5a8b-f132-43e7-b1eb-93905fb05a35: PCI host bridge to bus f132:00 Apr 30 12:50:17.872246 kernel: pci_bus f132:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Apr 30 12:50:17.872446 kernel: pci_bus f132:00: No busn resource found for root bus, will use [bus 00-ff] Apr 30 12:50:17.872599 kernel: pci f132:00:02.0: [15b3:1016] type 00 class 0x020000 Apr 30 12:50:17.872787 kernel: pci f132:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 30 12:50:17.872991 kernel: pci f132:00:02.0: enabling Extended Tags Apr 30 12:50:17.873173 kernel: pci f132:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f132:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Apr 30 12:50:17.873345 kernel: pci_bus f132:00: busn_res: [bus 00-ff] end is updated to 00 Apr 30 12:50:17.873505 kernel: pci f132:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Apr 30 12:50:18.034745 kernel: mlx5_core f132:00:02.0: enabling device (0000 -> 0002) Apr 30 12:50:18.266188 kernel: mlx5_core f132:00:02.0: firmware version: 14.30.5000 Apr 30 12:50:18.266422 kernel: hv_netvsc 6045bd0e-99d3-6045-bd0e-99d36045bd0e eth0: VF registering: eth1 Apr 30 12:50:18.266599 kernel: mlx5_core f132:00:02.0 eth1: joined to eth0 Apr 30 12:50:18.266788 kernel: mlx5_core f132:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 12:50:18.233504 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Apr 30 12:50:18.273947 kernel: mlx5_core f132:00:02.0 enP61746s1: renamed from eth1 Apr 30 12:50:18.324855 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (446) Apr 30 12:50:18.334921 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (464) Apr 30 12:50:18.355032 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 30 12:50:18.358882 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 30 12:50:18.370422 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 30 12:50:18.386881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 12:50:18.397063 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 12:50:18.407918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:50:18.416983 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:50:19.425938 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:50:19.427777 disk-uuid[602]: The operation has completed successfully. Apr 30 12:50:19.508809 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 12:50:19.508933 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 12:50:19.563068 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 12:50:19.570985 sh[688]: Success Apr 30 12:50:19.600033 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 12:50:19.827147 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 12:50:19.846035 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 12:50:19.850346 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 12:50:19.864916 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 Apr 30 12:50:19.864962 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:50:19.870273 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 12:50:19.872865 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 12:50:19.875253 kernel: BTRFS info (device dm-0): using free space tree Apr 30 12:50:20.204315 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 12:50:20.209629 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 12:50:20.218088 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 12:50:20.225042 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 12:50:20.251779 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:50:20.251848 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:50:20.251869 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:50:20.268925 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:50:20.275995 kernel: BTRFS info (device sda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:50:20.278805 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 30 12:50:20.288131 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 12:50:20.319647 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:50:20.330071 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:50:20.355331 systemd-networkd[869]: lo: Link UP Apr 30 12:50:20.355341 systemd-networkd[869]: lo: Gained carrier Apr 30 12:50:20.357638 systemd-networkd[869]: Enumeration completed Apr 30 12:50:20.357876 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:50:20.360054 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:50:20.360058 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:50:20.362195 systemd[1]: Reached target network.target - Network. Apr 30 12:50:20.427931 kernel: mlx5_core f132:00:02.0 enP61746s1: Link up Apr 30 12:50:20.461932 kernel: hv_netvsc 6045bd0e-99d3-6045-bd0e-99d36045bd0e eth0: Data path switched to VF: enP61746s1 Apr 30 12:50:20.462961 systemd-networkd[869]: enP61746s1: Link UP Apr 30 12:50:20.463124 systemd-networkd[869]: eth0: Link UP Apr 30 12:50:20.463382 systemd-networkd[869]: eth0: Gained carrier Apr 30 12:50:20.463402 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:50:20.474791 systemd-networkd[869]: enP61746s1: Gained carrier Apr 30 12:50:20.497955 systemd-networkd[869]: eth0: DHCPv4 address 10.200.4.14/24, gateway 10.200.4.1 acquired from 168.63.129.16 Apr 30 12:50:21.158003 ignition[819]: Ignition 2.20.0 Apr 30 12:50:21.158015 ignition[819]: Stage: fetch-offline Apr 30 12:50:21.158062 ignition[819]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:50:21.158072 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:50:21.158191 ignition[819]: parsed url from cmdline: "" Apr 30 12:50:21.158196 ignition[819]: no config URL provided Apr 30 12:50:21.158203 ignition[819]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:50:21.158214 ignition[819]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:50:21.158221 ignition[819]: failed to fetch config: resource requires networking Apr 30 12:50:21.159804 ignition[819]: Ignition finished successfully Apr 30 12:50:21.176937 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:50:21.184087 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 12:50:21.197998 ignition[879]: Ignition 2.20.0 Apr 30 12:50:21.198008 ignition[879]: Stage: fetch Apr 30 12:50:21.198234 ignition[879]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:50:21.198248 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:50:21.198350 ignition[879]: parsed url from cmdline: "" Apr 30 12:50:21.198354 ignition[879]: no config URL provided Apr 30 12:50:21.198359 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:50:21.198368 ignition[879]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:50:21.198397 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 30 12:50:21.287228 ignition[879]: GET result: OK Apr 30 12:50:21.287336 ignition[879]: config has been read from IMDS userdata Apr 30 12:50:21.287366 ignition[879]: parsing config with SHA512: ac0cc173f56b197cad05e915007bfa97ea9f1ea47c572709b27241e745cf3e063487c155122e5e14e3a21903acfc76b34eb55f881c1cf1b69c45c04fd6f95b18 Apr 30 12:50:21.293363 unknown[879]: fetched base config from "system" Apr 30 12:50:21.293384 unknown[879]: fetched base config from "system" Apr 30 12:50:21.293964 ignition[879]: fetch: fetch complete Apr 30 12:50:21.293394 unknown[879]: fetched user config from "azure" Apr 30 12:50:21.293971 ignition[879]: fetch: fetch passed Apr 30 12:50:21.295946 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 12:50:21.294031 ignition[879]: Ignition finished successfully Apr 30 12:50:21.314120 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 12:50:21.331809 ignition[886]: Ignition 2.20.0 Apr 30 12:50:21.331822 ignition[886]: Stage: kargs Apr 30 12:50:21.332076 ignition[886]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:50:21.332090 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:50:21.332957 ignition[886]: kargs: kargs passed Apr 30 12:50:21.337716 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 12:50:21.333007 ignition[886]: Ignition finished successfully Apr 30 12:50:21.349140 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 12:50:21.363347 ignition[892]: Ignition 2.20.0 Apr 30 12:50:21.363358 ignition[892]: Stage: disks Apr 30 12:50:21.363586 ignition[892]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:50:21.365456 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 12:50:21.363601 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:50:21.364506 ignition[892]: disks: disks passed Apr 30 12:50:21.364553 ignition[892]: Ignition finished successfully Apr 30 12:50:21.378771 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 12:50:21.383795 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 12:50:21.389329 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:50:21.391773 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:50:21.396369 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:50:21.405063 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 12:50:21.454608 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 30 12:50:21.458922 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Apr 30 12:50:21.469995 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 12:50:21.561917 kernel: EXT4-fs (sda9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none. Apr 30 12:50:21.562575 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 12:50:21.565383 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 12:50:21.602045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:50:21.606371 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 12:50:21.614061 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 12:50:21.616169 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (911) Apr 30 12:50:21.621006 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:50:21.626845 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:50:21.626874 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:50:21.629547 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 12:50:21.638692 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:50:21.629592 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:50:21.640610 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 12:50:21.643664 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 12:50:21.657055 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 12:50:22.039093 systemd-networkd[869]: eth0: Gained IPv6LL Apr 30 12:50:22.235469 coreos-metadata[913]: Apr 30 12:50:22.235 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 12:50:22.241564 coreos-metadata[913]: Apr 30 12:50:22.241 INFO Fetch successful Apr 30 12:50:22.244013 coreos-metadata[913]: Apr 30 12:50:22.241 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 30 12:50:22.253188 coreos-metadata[913]: Apr 30 12:50:22.253 INFO Fetch successful Apr 30 12:50:22.272664 coreos-metadata[913]: Apr 30 12:50:22.272 INFO wrote hostname ci-4230.1.1-a-af46bb47a4 to /sysroot/etc/hostname Apr 30 12:50:22.276332 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 12:50:22.280506 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:50:22.292139 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory Apr 30 12:50:22.295075 systemd-networkd[869]: enP61746s1: Gained IPv6LL Apr 30 12:50:22.311391 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 12:50:22.316329 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 12:50:23.138578 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 12:50:23.147072 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 12:50:23.155106 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 12:50:23.162252 kernel: BTRFS info (device sda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:50:23.164579 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 30 12:50:23.191336 ignition[1029]: INFO : Ignition 2.20.0 Apr 30 12:50:23.191336 ignition[1029]: INFO : Stage: mount Apr 30 12:50:23.198671 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:50:23.198671 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:50:23.198671 ignition[1029]: INFO : mount: mount passed Apr 30 12:50:23.198671 ignition[1029]: INFO : Ignition finished successfully Apr 30 12:50:23.192039 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 12:50:23.205565 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 12:50:23.219019 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 12:50:23.233156 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:50:23.247412 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1041) Apr 30 12:50:23.247471 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:50:23.252927 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:50:23.252973 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:50:23.257922 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:50:23.259821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 12:50:23.283640 ignition[1058]: INFO : Ignition 2.20.0 Apr 30 12:50:23.283640 ignition[1058]: INFO : Stage: files Apr 30 12:50:23.287917 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:50:23.287917 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:50:23.287917 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping Apr 30 12:50:23.300271 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 12:50:23.300271 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 12:50:23.367120 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 12:50:23.371253 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 12:50:23.371253 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 12:50:23.367787 unknown[1058]: wrote ssh authorized keys file for user: core Apr 30 12:50:23.382233 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 12:50:23.386678 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 12:50:23.426845 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 12:50:23.540174 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 12:50:23.545701 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 12:50:23.545701 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 12:50:23.553892 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:50:23.553892 
ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:50:23.553892 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:50:23.553892 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:50:23.553892 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:50:23.574159 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:50:23.574159 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:50:23.574159 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:50:23.574159 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 12:50:23.574159 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 12:50:23.574159 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 12:50:23.574159 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 12:50:24.110022 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 12:50:24.299229 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 12:50:24.299229 ignition[1058]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 12:50:24.314294 ignition[1058]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:50:24.318764 ignition[1058]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:50:24.318764 ignition[1058]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 12:50:24.325810 ignition[1058]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 12:50:24.325810 ignition[1058]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 12:50:24.332088 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 12:50:24.332088 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 12:50:24.332088 ignition[1058]: INFO : files: files passed Apr 30 12:50:24.332088 ignition[1058]: INFO : Ignition finished successfully Apr 30 12:50:24.344343 systemd[1]: Finished ignition-files.service - Ignition (files). 
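The files stage above downloads the helm tarball and the kubernetes sysext image, drops several YAML files and update.conf, creates the /etc/extensions/kubernetes.raw symlink, and enables prepare-helm.service. For orientation, here is a sketch of the shape of an Ignition config that would produce operations like these, built as a Python dict and dumped to JSON. Paths and URLs are taken from the log entries (minus the /sysroot prefix Ignition adds at run time); the spec version, the omitted files, and the unit contents are assumptions, not the config this machine actually received.

    import json

    # Illustrative shape only; not the config fetched from IMDS on this boot.
    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
            ],
        },
    }

    print(json.dumps(config, indent=2))

Each entry in the dict corresponds to one op(...) line in the log: files become createFiles operations, links become the symlink write, and the systemd unit accounts for the "setting preset to enabled" step.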
Apr 30 12:50:24.355107 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 12:50:24.366062 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 12:50:24.370787 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 12:50:24.370893 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 12:50:24.393621 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:50:24.393621 initrd-setup-root-after-ignition[1086]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:50:24.402192 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:50:24.406053 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:50:24.412592 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 12:50:24.421097 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 12:50:24.443538 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 12:50:24.443647 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 12:50:24.449762 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 12:50:24.457135 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 12:50:24.459532 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 12:50:24.467300 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 12:50:24.481805 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:50:24.492359 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 12:50:24.505553 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:50:24.510766 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:50:24.516122 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 12:50:24.516302 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 12:50:24.516412 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:50:24.517059 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 12:50:24.517403 systemd[1]: Stopped target basic.target - Basic System. Apr 30 12:50:24.517751 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 12:50:24.518487 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:50:24.518832 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 12:50:24.519698 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 12:50:24.520397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:50:24.520751 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 12:50:24.521189 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 12:50:24.521525 systemd[1]: Stopped target swap.target - Swaps. Apr 30 12:50:24.521896 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 12:50:24.522043 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Apr 30 12:50:24.522610 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:50:24.609979 ignition[1110]: INFO : Ignition 2.20.0 Apr 30 12:50:24.609979 ignition[1110]: INFO : Stage: umount Apr 30 12:50:24.609979 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:50:24.609979 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 30 12:50:24.522996 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:50:24.622748 ignition[1110]: INFO : umount: umount passed Apr 30 12:50:24.622748 ignition[1110]: INFO : Ignition finished successfully Apr 30 12:50:24.523288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 12:50:24.548939 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:50:24.549392 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 12:50:24.549513 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 12:50:24.558871 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 12:50:24.559050 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:50:24.563710 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 12:50:24.563861 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 12:50:24.568387 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 12:50:24.568533 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:50:24.581172 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 12:50:24.589083 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 12:50:24.592561 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 12:50:24.592753 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:50:24.604065 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 12:50:24.604191 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:50:24.615811 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 12:50:24.617146 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 12:50:24.624867 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 12:50:24.624970 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 12:50:24.627476 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 12:50:24.627531 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 12:50:24.636043 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 12:50:24.636111 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 12:50:24.688006 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 12:50:24.688097 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 12:50:24.692466 systemd[1]: Stopped target network.target - Network. Apr 30 12:50:24.698442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 12:50:24.698532 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:50:24.706710 systemd[1]: Stopped target paths.target - Path Units. Apr 30 12:50:24.708706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 30 12:50:24.710499 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:50:24.713446 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 12:50:24.715544 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 12:50:24.717861 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 12:50:24.717913 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:50:24.722419 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 12:50:24.722471 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:50:24.728441 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 12:50:24.728510 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 12:50:24.732446 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 12:50:24.732508 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 12:50:24.738837 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 12:50:24.743154 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 12:50:24.750738 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 12:50:24.751404 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 12:50:24.751494 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 12:50:24.754407 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 12:50:24.754510 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 12:50:24.770895 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 12:50:24.771027 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 12:50:24.777876 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 30 12:50:24.778135 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 12:50:24.778179 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:50:24.785245 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:50:24.801062 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 12:50:24.801180 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 12:50:24.806812 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 30 12:50:24.807392 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 12:50:24.807468 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:50:24.825025 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 12:50:24.828149 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 12:50:24.828209 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:50:24.831985 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:50:24.832042 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:50:24.843597 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 12:50:24.843660 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 12:50:24.848183 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 30 12:50:24.849343 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 30 12:50:24.866492 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 12:50:24.866658 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:50:24.871798 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 12:50:24.871844 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 12:50:24.876961 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 12:50:24.877010 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:50:24.883637 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 12:50:24.885830 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:50:24.890427 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 12:50:24.890479 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 12:50:24.894812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:50:24.894864 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:50:24.914073 kernel: hv_netvsc 6045bd0e-99d3-6045-bd0e-99d36045bd0e eth0: Data path switched from VF: enP61746s1 Apr 30 12:50:24.916173 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 12:50:24.918588 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 12:50:24.918662 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:50:24.923942 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 12:50:24.923996 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:50:24.932306 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 12:50:24.932365 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:50:24.938090 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:50:24.938149 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:50:24.944454 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 12:50:24.944579 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 12:50:24.949689 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 12:50:24.949775 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 12:50:24.955564 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 12:50:24.978074 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 12:50:24.986291 systemd[1]: Switching root. Apr 30 12:50:25.045441 systemd-journald[177]: Journal stopped Apr 30 12:50:29.838763 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Apr 30 12:50:29.838794 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 12:50:29.838807 kernel: SELinux: policy capability open_perms=1 Apr 30 12:50:29.838818 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 12:50:29.838826 kernel: SELinux: policy capability always_check_network=0 Apr 30 12:50:29.838837 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 12:50:29.838847 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 12:50:29.838860 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 12:50:29.838868 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 12:50:29.838879 kernel: audit: type=1403 audit(1746017426.376:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 12:50:29.838889 systemd[1]: Successfully loaded SELinux policy in 117.278ms. Apr 30 12:50:29.838911 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.719ms. Apr 30 12:50:29.838923 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:50:29.838936 systemd[1]: Detected virtualization microsoft. Apr 30 12:50:29.838949 systemd[1]: Detected architecture x86-64. Apr 30 12:50:29.838961 systemd[1]: Detected first boot. Apr 30 12:50:29.838975 systemd[1]: Hostname set to . Apr 30 12:50:29.838986 systemd[1]: Initializing machine ID from random generator. Apr 30 12:50:29.838997 zram_generator::config[1156]: No configuration found. Apr 30 12:50:29.839012 kernel: Guest personality initialized and is inactive Apr 30 12:50:29.839021 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Apr 30 12:50:29.839030 kernel: Initialized host personality Apr 30 12:50:29.839041 kernel: NET: Registered PF_VSOCK protocol family Apr 30 12:50:29.839061 systemd[1]: Populated /etc with preset unit settings. Apr 30 12:50:29.839074 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 30 12:50:29.839084 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 12:50:29.839097 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 12:50:29.839111 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 12:50:29.839121 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 12:50:29.839131 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 12:50:29.839140 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 12:50:29.839150 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 12:50:29.839160 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 12:50:29.839172 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 12:50:29.839187 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 12:50:29.839199 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 12:50:29.839212 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:50:29.839226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 30 12:50:29.839236 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 12:50:29.839246 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 12:50:29.839259 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 12:50:29.839269 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 12:50:29.839279 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 12:50:29.839291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:50:29.839302 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 12:50:29.839314 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 12:50:29.839325 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 12:50:29.839336 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 12:50:29.839349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:50:29.839360 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:50:29.839376 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:50:29.839388 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:50:29.839400 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 12:50:29.839411 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 12:50:29.839424 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 30 12:50:29.839435 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:50:29.839450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:50:29.839463 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:50:29.839477 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 12:50:29.839490 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 12:50:29.839508 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 12:50:29.839527 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 12:50:29.839550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:50:29.839579 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 12:50:29.839601 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 12:50:29.839624 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 12:50:29.839647 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 12:50:29.839671 systemd[1]: Reached target machines.target - Containers. Apr 30 12:50:29.839693 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 12:50:29.839715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:50:29.839735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Apr 30 12:50:29.839762 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 12:50:29.839782 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:50:29.839806 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 12:50:29.839829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:50:29.839851 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 12:50:29.839871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:50:29.839908 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 12:50:29.839933 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 12:50:29.839965 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 12:50:29.839990 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 12:50:29.840014 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 12:50:29.840037 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:50:29.840061 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:50:29.840079 kernel: loop: module loaded Apr 30 12:50:29.840099 kernel: fuse: init (API version 7.39) Apr 30 12:50:29.840120 kernel: ACPI: bus type drm_connector registered Apr 30 12:50:29.840148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:50:29.840171 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 12:50:29.840192 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 12:50:29.840218 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 30 12:50:29.840238 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:50:29.840259 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 12:50:29.840286 systemd[1]: Stopped verity-setup.service. Apr 30 12:50:29.840309 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:50:29.840365 systemd-journald[1263]: Collecting audit messages is disabled. Apr 30 12:50:29.840395 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 12:50:29.840412 systemd-journald[1263]: Journal started Apr 30 12:50:29.840449 systemd-journald[1263]: Runtime Journal (/run/log/journal/528fb09b136847999d58434aac8ba4ac) is 8M, max 158.8M, 150.8M free. Apr 30 12:50:29.214592 systemd[1]: Queued start job for default target multi-user.target. Apr 30 12:50:29.225968 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 12:50:29.226392 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 12:50:29.850480 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 12:50:29.851123 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 12:50:29.853990 systemd[1]: Mounted media.mount - External Media Directory. 
Apr 30 12:50:29.856679 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 12:50:29.859292 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 12:50:29.862144 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 12:50:29.864651 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 12:50:29.867604 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:50:29.870660 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 12:50:29.870855 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 12:50:29.873743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:50:29.873942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:50:29.876783 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:50:29.877114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:50:29.880037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:50:29.880223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:50:29.883194 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 12:50:29.883367 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 12:50:29.886234 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:50:29.886464 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:50:29.889552 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:50:29.892476 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 12:50:29.895700 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 12:50:29.908111 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 12:50:29.919783 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 12:50:29.927959 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 12:50:29.930627 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 12:50:29.930676 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:50:29.934862 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 30 12:50:29.946155 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 12:50:29.955032 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 12:50:29.957925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:50:29.978060 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 12:50:29.990494 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 12:50:29.993541 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:50:29.994787 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Apr 30 12:50:30.000027 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:50:30.006782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:50:30.019927 systemd-journald[1263]: Time spent on flushing to /var/log/journal/528fb09b136847999d58434aac8ba4ac is 22.523ms for 963 entries. Apr 30 12:50:30.019927 systemd-journald[1263]: System Journal (/var/log/journal/528fb09b136847999d58434aac8ba4ac) is 8M, max 2.6G, 2.6G free. Apr 30 12:50:30.069150 systemd-journald[1263]: Received client request to flush runtime journal. Apr 30 12:50:30.016052 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 12:50:30.024653 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:50:30.030739 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 30 12:50:30.035811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:50:30.042043 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 12:50:30.045153 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 12:50:30.053565 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 12:50:30.061026 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 12:50:30.072502 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 12:50:30.082097 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 12:50:30.092566 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 30 12:50:30.101554 kernel: loop0: detected capacity change from 0 to 28272 Apr 30 12:50:30.100039 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 12:50:30.104098 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:50:30.118360 udevadm[1311]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 12:50:30.187128 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 12:50:30.188099 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 30 12:50:30.188161 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Apr 30 12:50:30.188178 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Apr 30 12:50:30.194479 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:50:30.204137 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 12:50:30.382668 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 12:50:30.395111 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:50:30.417293 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Apr 30 12:50:30.417457 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Apr 30 12:50:30.422961 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 30 12:50:30.449926 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 12:50:30.554934 kernel: loop1: detected capacity change from 0 to 210664 Apr 30 12:50:30.613064 kernel: loop2: detected capacity change from 0 to 147912 Apr 30 12:50:31.194998 kernel: loop3: detected capacity change from 0 to 138176 Apr 30 12:50:31.616337 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 12:50:31.624124 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:50:31.653761 kernel: loop4: detected capacity change from 0 to 28272 Apr 30 12:50:31.662160 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Apr 30 12:50:31.664925 kernel: loop5: detected capacity change from 0 to 210664 Apr 30 12:50:31.674921 kernel: loop6: detected capacity change from 0 to 147912 Apr 30 12:50:31.687924 kernel: loop7: detected capacity change from 0 to 138176 Apr 30 12:50:31.700123 (sd-merge)[1328]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Apr 30 12:50:31.700708 (sd-merge)[1328]: Merged extensions into '/usr'. Apr 30 12:50:31.705963 systemd[1]: Reload requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 12:50:31.705978 systemd[1]: Reloading... Apr 30 12:50:31.771928 zram_generator::config[1352]: No configuration found. Apr 30 12:50:31.984586 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:50:32.100187 systemd[1]: Reloading finished in 393 ms. Apr 30 12:50:32.122737 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:50:32.141396 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 12:50:32.172708 systemd[1]: Starting ensure-sysext.service... Apr 30 12:50:32.179347 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:50:32.187152 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:50:32.232354 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 12:50:32.234519 systemd[1]: Reload requested from client PID 1435 ('systemctl') (unit ensure-sysext.service)... Apr 30 12:50:32.234540 systemd[1]: Reloading... Apr 30 12:50:32.247943 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 12:50:32.300968 systemd-tmpfiles[1437]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 12:50:32.301776 systemd-tmpfiles[1437]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 12:50:32.309026 systemd-tmpfiles[1437]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 12:50:32.309609 systemd-tmpfiles[1437]: ACLs are not supported, ignoring. Apr 30 12:50:32.309695 systemd-tmpfiles[1437]: ACLs are not supported, ignoring. Apr 30 12:50:32.339118 zram_generator::config[1471]: No configuration found. Apr 30 12:50:32.340390 systemd-tmpfiles[1437]: Detected autofs mount point /boot during canonicalization of boot. 
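The loop device messages and the (sd-merge) lines above are systemd-sysext attaching the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extension images and overlaying them onto /usr. A small sketch of how the same state can be inspected after boot; the /etc/extensions directory and the systemd-sysext verbs are standard, but this is not how Flatcar itself drives the merge (that happens inside systemd-sysext.service):

    import glob
    import subprocess

    # List the extension images (and symlinks, like kubernetes.raw) visible
    # to systemd-sysext in one of its search directories.
    for image in sorted(glob.glob("/etc/extensions/*.raw")):
        print("extension image:", image)

    # Show which extensions are currently merged into the /usr overlay.
    subprocess.run(["systemd-sysext", "status"], check=True)

    # Re-scan and re-merge after adding or removing an image.
    subprocess.run(["systemd-sysext", "refresh"], check=True)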
Apr 30 12:50:32.342321 systemd-tmpfiles[1437]: Skipping /boot Apr 30 12:50:32.351018 kernel: hv_vmbus: registering driver hv_balloon Apr 30 12:50:32.351108 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 30 12:50:32.383716 kernel: hv_vmbus: registering driver hyperv_fb Apr 30 12:50:32.383803 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 30 12:50:32.387309 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 30 12:50:32.400517 systemd-tmpfiles[1437]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:50:32.400686 systemd-tmpfiles[1437]: Skipping /boot Apr 30 12:50:32.409077 kernel: Console: switching to colour dummy device 80x25 Apr 30 12:50:32.411931 kernel: Console: switching to colour frame buffer device 128x48 Apr 30 12:50:32.737921 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1423) Apr 30 12:50:32.842352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:50:32.931941 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Apr 30 12:50:33.001993 systemd[1]: Reloading finished in 766 ms. Apr 30 12:50:33.026510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:50:33.061751 systemd[1]: Finished ensure-sysext.service. Apr 30 12:50:33.091675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 30 12:50:33.104644 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:50:33.113539 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:50:33.120104 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 12:50:33.125251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:50:33.128987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:50:33.133125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 12:50:33.138168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:50:33.143151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:50:33.148136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:50:33.150138 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 12:50:33.156032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:50:33.160893 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 12:50:33.167137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:50:33.173241 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 12:50:33.182975 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 12:50:33.203102 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Apr 30 12:50:33.212093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:50:33.217896 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:50:33.219543 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 12:50:33.223563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:50:33.223815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:50:33.227233 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:50:33.227494 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:50:33.231444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:50:33.231695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:50:33.235295 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:50:33.235510 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:50:33.240975 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 12:50:33.260184 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 12:50:33.263403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:50:33.263629 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:50:33.266151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 12:50:33.281222 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 12:50:33.302457 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 12:50:33.337720 lvm[1626]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:50:33.383225 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 12:50:33.388263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:50:33.403210 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 12:50:33.413971 lvm[1646]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:50:33.457506 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 12:50:33.475431 systemd-networkd[1436]: lo: Link UP Apr 30 12:50:33.475711 systemd-resolved[1614]: Positive Trust Anchors: Apr 30 12:50:33.475720 systemd-resolved[1614]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:50:33.475771 systemd-resolved[1614]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:50:33.475940 systemd-networkd[1436]: lo: Gained carrier Apr 30 12:50:33.479699 systemd-networkd[1436]: Enumeration completed Apr 30 12:50:33.480026 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:50:33.480146 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:50:33.480151 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:50:33.481835 systemd-resolved[1614]: Using system hostname 'ci-4230.1.1-a-af46bb47a4'. Apr 30 12:50:33.496145 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 30 12:50:33.498892 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 12:50:33.507112 augenrules[1661]: No rules Apr 30 12:50:33.508941 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:50:33.509235 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:50:33.535584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:50:33.542921 kernel: mlx5_core f132:00:02.0 enP61746s1: Link up Apr 30 12:50:33.564925 kernel: hv_netvsc 6045bd0e-99d3-6045-bd0e-99d36045bd0e eth0: Data path switched to VF: enP61746s1 Apr 30 12:50:33.565693 systemd-networkd[1436]: enP61746s1: Link UP Apr 30 12:50:33.565841 systemd-networkd[1436]: eth0: Link UP Apr 30 12:50:33.565847 systemd-networkd[1436]: eth0: Gained carrier Apr 30 12:50:33.565885 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:50:33.569342 systemd-networkd[1436]: enP61746s1: Gained carrier Apr 30 12:50:33.570235 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:50:33.573476 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 30 12:50:33.576606 systemd[1]: Reached target network.target - Network. Apr 30 12:50:33.578815 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:50:33.605957 systemd-networkd[1436]: eth0: DHCPv4 address 10.200.4.14/24, gateway 10.200.4.1 acquired from 168.63.129.16 Apr 30 12:50:33.777458 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 12:50:33.781464 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 30 12:50:34.647175 systemd-networkd[1436]: eth0: Gained IPv6LL Apr 30 12:50:34.649998 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 12:50:34.653329 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 12:50:35.223207 systemd-networkd[1436]: enP61746s1: Gained IPv6LL Apr 30 12:50:36.806307 ldconfig[1292]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 12:50:36.816643 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 12:50:36.824096 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 12:50:36.845371 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 12:50:36.848324 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:50:36.850883 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 12:50:36.854401 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 12:50:36.857742 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 12:50:36.860273 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 12:50:36.863169 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 12:50:36.865948 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 12:50:36.865993 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:50:36.868236 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:50:36.873116 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 12:50:36.877307 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 12:50:36.884399 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 12:50:36.887670 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 30 12:50:36.890485 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 30 12:50:36.903475 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 12:50:36.906356 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 12:50:36.909774 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 12:50:36.912344 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:50:36.915007 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:50:36.917494 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:50:36.917529 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:50:36.926014 systemd[1]: Starting chronyd.service - NTP client/server... Apr 30 12:50:36.931043 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 12:50:36.941098 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 12:50:36.950152 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 12:50:36.955859 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 30 12:50:36.961174 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 12:50:36.966378 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 12:50:36.966433 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Apr 30 12:50:36.973137 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Apr 30 12:50:36.975792 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Apr 30 12:50:36.977862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:36.983181 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 12:50:36.986924 jq[1685]: false Apr 30 12:50:36.989068 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 12:50:36.993805 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 12:50:36.993845 KVP[1687]: KVP starting; pid is:1687 Apr 30 12:50:37.002086 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 12:50:37.009180 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 12:50:37.022148 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 12:50:37.025750 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 12:50:37.026367 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 12:50:37.036926 kernel: hv_utils: KVP IC version 4.0 Apr 30 12:50:37.035135 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 12:50:37.038197 KVP[1687]: KVP LIC Version: 3.1 Apr 30 12:50:37.041013 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 12:50:37.050517 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 12:50:37.051846 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 12:50:37.058527 (chronyd)[1678]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Apr 30 12:50:37.062418 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 12:50:37.062696 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 30 12:50:37.071675 jq[1698]: true Apr 30 12:50:37.119528 chronyd[1722]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Apr 30 12:50:37.132160 extend-filesystems[1686]: Found loop4 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found loop5 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found loop6 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found loop7 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found sda Apr 30 12:50:37.132160 extend-filesystems[1686]: Found sda1 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found sda2 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found sda3 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found usr Apr 30 12:50:37.132160 extend-filesystems[1686]: Found sda4 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found sda6 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found sda7 Apr 30 12:50:37.132160 extend-filesystems[1686]: Found sda9 Apr 30 12:50:37.132160 extend-filesystems[1686]: Checking size of /dev/sda9 Apr 30 12:50:37.218617 jq[1704]: true Apr 30 12:50:37.125303 (ntainerd)[1715]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 12:50:37.172734 chronyd[1722]: Timezone right/UTC failed leap second check, ignoring Apr 30 12:50:37.221107 update_engine[1696]: I20250430 12:50:37.153452 1696 main.cc:92] Flatcar Update Engine starting Apr 30 12:50:37.221798 tar[1703]: linux-amd64/helm Apr 30 12:50:37.168731 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 12:50:37.177081 chronyd[1722]: Loaded seccomp filter (level 2) Apr 30 12:50:37.184117 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 12:50:37.222658 dbus-daemon[1681]: [system] SELinux support is enabled Apr 30 12:50:37.184442 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 12:50:37.194715 systemd[1]: Started chronyd.service - NTP client/server. Apr 30 12:50:37.222826 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 12:50:37.236136 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 12:50:37.236176 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 12:50:37.242368 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 12:50:37.242403 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 12:50:37.252954 extend-filesystems[1686]: Old size kept for /dev/sda9 Apr 30 12:50:37.252954 extend-filesystems[1686]: Found sr0 Apr 30 12:50:37.269789 update_engine[1696]: I20250430 12:50:37.247843 1696 update_check_scheduler.cc:74] Next update check in 9m5s Apr 30 12:50:37.255293 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 12:50:37.255984 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 12:50:37.262651 systemd[1]: Started update-engine.service - Update Engine. Apr 30 12:50:37.276111 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 12:50:37.289936 systemd-logind[1694]: New seat seat0. 
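extend-filesystems enumerates the block devices above, checks /dev/sda9 and concludes "Old size kept" because the filesystem already fills its partition. A rough sketch of that decision, comparing the partition size from sysfs with the mounted filesystem size; the real service shells out to filesystem-specific tools, and both the mountpoint and the 512-byte-sector assumption below are mine, not from the log:

    import os

    DEVICE = "sda9"      # ROOT partition, per the log above
    MOUNTPOINT = "/"     # assumption: sda9 is mounted as the root filesystem

    # Partition size in bytes: /sys/class/block/<dev>/size is in 512-byte sectors.
    with open(f"/sys/class/block/{DEVICE}/size") as f:
        part_bytes = int(f.read().strip()) * 512

    st = os.statvfs(MOUNTPOINT)
    fs_bytes = st.f_frsize * st.f_blocks

    # Allow a little slack for filesystem metadata before deciding to grow.
    if fs_bytes < part_bytes * 0.98:
        print(f"{DEVICE}: filesystem smaller than partition, would grow it")
    else:
        print(f"{DEVICE}: old size kept")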
Apr 30 12:50:37.302589 systemd-logind[1694]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Apr 30 12:50:37.302822 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 12:50:37.430585 bash[1754]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:50:37.432993 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 12:50:37.438896 coreos-metadata[1680]: Apr 30 12:50:37.438 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 30 12:50:37.442127 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 12:50:37.446921 coreos-metadata[1680]: Apr 30 12:50:37.444 INFO Fetch successful Apr 30 12:50:37.446921 coreos-metadata[1680]: Apr 30 12:50:37.444 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Apr 30 12:50:37.452868 coreos-metadata[1680]: Apr 30 12:50:37.452 INFO Fetch successful Apr 30 12:50:37.453785 coreos-metadata[1680]: Apr 30 12:50:37.453 INFO Fetching http://168.63.129.16/machine/aa8e736e-a8fd-4d7e-b1d7-6b470ffcec66/94221ebe%2D2c35%2D4f9c%2Dae5f%2D7c60b29118fb.%5Fci%2D4230.1.1%2Da%2Daf46bb47a4?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Apr 30 12:50:37.458660 coreos-metadata[1680]: Apr 30 12:50:37.458 INFO Fetch successful Apr 30 12:50:37.458988 coreos-metadata[1680]: Apr 30 12:50:37.458 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Apr 30 12:50:37.470798 coreos-metadata[1680]: Apr 30 12:50:37.470 INFO Fetch successful Apr 30 12:50:37.525573 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1755) Apr 30 12:50:37.586314 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 12:50:37.589680 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 12:50:37.822297 locksmithd[1746]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 12:50:37.979861 sshd_keygen[1712]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 12:50:38.046325 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 12:50:38.062953 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 12:50:38.078190 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Apr 30 12:50:38.099408 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 12:50:38.099744 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 12:50:38.113072 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 12:50:38.119288 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Apr 30 12:50:38.138056 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 12:50:38.153373 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 12:50:38.165924 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 12:50:38.171414 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 12:50:38.289712 tar[1703]: linux-amd64/LICENSE Apr 30 12:50:38.289712 tar[1703]: linux-amd64/README.md Apr 30 12:50:38.302523 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
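The coreos-metadata fetches above hit two distinct endpoints: the Azure WireServer at 168.63.129.16 (versions, goal state, shared config) and the Instance Metadata Service at 169.254.169.254 for the VM size. A small sketch of the IMDS call shown in the log; IMDS requires the "Metadata: true" header and is only reachable from inside the VM, and the api-version/format parameters are the ones logged above:

    import urllib.request

    IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    # Times out anywhere other than an Azure VM.
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("vmSize:", resp.read().decode())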
Apr 30 12:50:38.688400 containerd[1715]: time="2025-04-30T12:50:38.687010900Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 12:50:38.703077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:38.720738 containerd[1715]: time="2025-04-30T12:50:38.720687900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:50:38.722257 containerd[1715]: time="2025-04-30T12:50:38.722176600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:50:38.722257 containerd[1715]: time="2025-04-30T12:50:38.722213200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 12:50:38.722257 containerd[1715]: time="2025-04-30T12:50:38.722233000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 12:50:38.722539 containerd[1715]: time="2025-04-30T12:50:38.722415900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 12:50:38.722539 containerd[1715]: time="2025-04-30T12:50:38.722440900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 12:50:38.722926 containerd[1715]: time="2025-04-30T12:50:38.722523100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:50:38.722926 containerd[1715]: time="2025-04-30T12:50:38.722648700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:50:38.722844 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:50:38.723440 containerd[1715]: time="2025-04-30T12:50:38.723404300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:50:38.723440 containerd[1715]: time="2025-04-30T12:50:38.723433600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 12:50:38.723543 containerd[1715]: time="2025-04-30T12:50:38.723453800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:50:38.723543 containerd[1715]: time="2025-04-30T12:50:38.723467600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 12:50:38.723619 containerd[1715]: time="2025-04-30T12:50:38.723585500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:50:38.723816 containerd[1715]: time="2025-04-30T12:50:38.723787200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 12:50:38.724014 containerd[1715]: time="2025-04-30T12:50:38.723990600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:50:38.724014 containerd[1715]: time="2025-04-30T12:50:38.724010500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 12:50:38.724329 containerd[1715]: time="2025-04-30T12:50:38.724113800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 12:50:38.724329 containerd[1715]: time="2025-04-30T12:50:38.724171500Z" level=info msg="metadata content store policy set" policy=shared Apr 30 12:50:38.775651 containerd[1715]: time="2025-04-30T12:50:38.775433600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 12:50:38.775651 containerd[1715]: time="2025-04-30T12:50:38.775516500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 12:50:38.775651 containerd[1715]: time="2025-04-30T12:50:38.775541100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 12:50:38.775651 containerd[1715]: time="2025-04-30T12:50:38.775564600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 12:50:38.775651 containerd[1715]: time="2025-04-30T12:50:38.775597300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 12:50:38.775973 containerd[1715]: time="2025-04-30T12:50:38.775818600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776234600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776392300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776414100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776436200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776456200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776477900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776494900Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776514900Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776536200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776554500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776571700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776588700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776617000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777115 containerd[1715]: time="2025-04-30T12:50:38.776636500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776653700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776673600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776689300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776721500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776742200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776759900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776780200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776800800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776816900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776834300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776851800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776872100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776919200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776939500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.777694 containerd[1715]: time="2025-04-30T12:50:38.776954500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 12:50:38.778237 containerd[1715]: time="2025-04-30T12:50:38.777009900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 12:50:38.778237 containerd[1715]: time="2025-04-30T12:50:38.777035400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 12:50:38.778237 containerd[1715]: time="2025-04-30T12:50:38.777049700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 12:50:38.778237 containerd[1715]: time="2025-04-30T12:50:38.777068600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 12:50:38.778237 containerd[1715]: time="2025-04-30T12:50:38.777118000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.778237 containerd[1715]: time="2025-04-30T12:50:38.777138000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 12:50:38.778237 containerd[1715]: time="2025-04-30T12:50:38.777153500Z" level=info msg="NRI interface is disabled by configuration." Apr 30 12:50:38.778237 containerd[1715]: time="2025-04-30T12:50:38.777167100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 12:50:38.779545 systemd[1]: Started containerd.service - containerd container runtime. 
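containerd's plugin-loading messages above show it probing every snapshotter and keeping only what the host can back: aufs has no kernel module, btrfs and zfs are skipped because /var/lib/containerd sits on ext4, devmapper is unconfigured, so overlayfs is used. The check it performs can be reproduced by looking up the filesystem type behind /var/lib/containerd in /proc/self/mounts (longest-matching mountpoint), as sketched here:

    import os

    def fs_type(path):
        """Return the filesystem type of the mount that backs `path`."""
        path = os.path.realpath(path)
        best, best_type = "", "unknown"
        with open("/proc/self/mounts") as f:
            for line in f:
                _dev, mountpoint, fstype, *_rest = line.split()
                mountpoint = mountpoint.replace("\\040", " ")  # unescape spaces
                if path == mountpoint or path.startswith(mountpoint.rstrip("/") + "/"):
                    if len(mountpoint) > len(best):
                        best, best_type = mountpoint, fstype
        return best_type

    fstype = fs_type("/var/lib/containerd")
    print("backing filesystem:", fstype)
    print("btrfs snapshotter usable:", fstype == "btrfs")
    print("zfs snapshotter usable:", fstype == "zfs")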
Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.777580700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.777650300Z" level=info msg="Connect containerd service" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.777708100Z" level=info msg="using legacy CRI server" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.777718100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.777880400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.778637000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 12:50:38.785694 containerd[1715]: 
time="2025-04-30T12:50:38.779028900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.779087200Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.779192100Z" level=info msg="Start subscribing containerd event" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.779241800Z" level=info msg="Start recovering state" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.779315800Z" level=info msg="Start event monitor" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.779328300Z" level=info msg="Start snapshots syncer" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.779339500Z" level=info msg="Start cni network conf syncer for default" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.779349100Z" level=info msg="Start streaming server" Apr 30 12:50:38.785694 containerd[1715]: time="2025-04-30T12:50:38.779754700Z" level=info msg="containerd successfully booted in 0.094043s" Apr 30 12:50:38.782511 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 12:50:38.785017 systemd[1]: Startup finished in 872ms (firmware) + 26.859s (loader) + 958ms (kernel) + 10.081s (initrd) + 12.524s (userspace) = 51.295s. Apr 30 12:50:39.050271 login[1850]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 12:50:39.052888 login[1851]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 12:50:39.062184 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 12:50:39.070257 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 12:50:39.078407 systemd-logind[1694]: New session 1 of user core. Apr 30 12:50:39.082111 systemd-logind[1694]: New session 2 of user core. Apr 30 12:50:39.142483 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 12:50:39.151972 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 12:50:39.168457 (systemd)[1874]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 12:50:39.172094 systemd-logind[1694]: New session c1 of user core. Apr 30 12:50:39.423107 systemd[1874]: Queued start job for default target default.target. Apr 30 12:50:39.428504 systemd[1874]: Created slice app.slice - User Application Slice. Apr 30 12:50:39.428542 systemd[1874]: Reached target paths.target - Paths. Apr 30 12:50:39.428596 systemd[1874]: Reached target timers.target - Timers. Apr 30 12:50:39.430514 systemd[1874]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 12:50:39.447738 systemd[1874]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 12:50:39.447952 systemd[1874]: Reached target sockets.target - Sockets. Apr 30 12:50:39.448010 systemd[1874]: Reached target basic.target - Basic System. Apr 30 12:50:39.448059 systemd[1874]: Reached target default.target - Main User Target. Apr 30 12:50:39.448096 systemd[1874]: Startup finished in 265ms. Apr 30 12:50:39.448843 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 12:50:39.455079 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 12:50:39.456155 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 30 12:50:39.609534 kubelet[1862]: E0430 12:50:39.609479 1862 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:50:39.612220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:50:39.612416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:50:39.612833 systemd[1]: kubelet.service: Consumed 970ms CPU time, 246.9M memory peak. Apr 30 12:50:40.010300 waagent[1847]: 2025-04-30T12:50:40.010179Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.010676Z INFO Daemon Daemon OS: flatcar 4230.1.1 Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.011210Z INFO Daemon Daemon Python: 3.11.11 Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.012171Z INFO Daemon Daemon Run daemon Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.012802Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.1.1' Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.013525Z INFO Daemon Daemon Using waagent for provisioning Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.014430Z INFO Daemon Daemon Activate resource disk Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.015063Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.021147Z INFO Daemon Daemon Found device: None Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.022089Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.022831Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.024159Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 12:50:40.041435 waagent[1847]: 2025-04-30T12:50:40.024933Z INFO Daemon Daemon Running default provisioning handler Apr 30 12:50:40.045059 waagent[1847]: 2025-04-30T12:50:40.044924Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Apr 30 12:50:40.051609 waagent[1847]: 2025-04-30T12:50:40.051533Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 30 12:50:40.057978 waagent[1847]: 2025-04-30T12:50:40.057850Z INFO Daemon Daemon cloud-init is enabled: False Apr 30 12:50:40.102080 waagent[1847]: 2025-04-30T12:50:40.058919Z INFO Daemon Daemon Copying ovf-env.xml Apr 30 12:50:40.203554 waagent[1847]: 2025-04-30T12:50:40.201196Z INFO Daemon Daemon Successfully mounted dvd Apr 30 12:50:40.231317 waagent[1847]: 2025-04-30T12:50:40.231224Z INFO Daemon Daemon Detect protocol endpoint Apr 30 12:50:40.231348 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
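The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") repeats until the node is provisioned: on a kubeadm-managed machine that file is written by kubeadm init/join, so the unit simply crash-loops until then. For illustration only, a minimal KubeletConfiguration of the kind that ends up at that path could be written as below; the field values are placeholders chosen by me, not taken from the log:

    import os

    # Hypothetical minimal config; kubeadm normally generates the real one.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    failSwapOn: false
    authentication:
      anonymous:
        enabled: false
    """

    path = "/var/lib/kubelet/config.yaml"
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        # Strip the indentation used for display before writing.
        f.write("\n".join(line[4:] if line.startswith("    ") else line
                          for line in KUBELET_CONFIG.splitlines()) + "\n")
    print("wrote", path)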
Apr 30 12:50:40.234043 waagent[1847]: 2025-04-30T12:50:40.233965Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 30 12:50:40.245187 waagent[1847]: 2025-04-30T12:50:40.234248Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 30 12:50:40.245187 waagent[1847]: 2025-04-30T12:50:40.234943Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 30 12:50:40.245187 waagent[1847]: 2025-04-30T12:50:40.235921Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 30 12:50:40.245187 waagent[1847]: 2025-04-30T12:50:40.236660Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 30 12:50:40.332578 waagent[1847]: 2025-04-30T12:50:40.332434Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 30 12:50:40.340632 waagent[1847]: 2025-04-30T12:50:40.332998Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 30 12:50:40.340632 waagent[1847]: 2025-04-30T12:50:40.333924Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 30 12:50:40.507249 waagent[1847]: 2025-04-30T12:50:40.507132Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 30 12:50:40.511102 waagent[1847]: 2025-04-30T12:50:40.511015Z INFO Daemon Daemon Forcing an update of the goal state. Apr 30 12:50:40.517766 waagent[1847]: 2025-04-30T12:50:40.517702Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 12:50:40.533932 waagent[1847]: 2025-04-30T12:50:40.533870Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.166 Apr 30 12:50:40.547365 waagent[1847]: 2025-04-30T12:50:40.534575Z INFO Daemon Apr 30 12:50:40.547365 waagent[1847]: 2025-04-30T12:50:40.535283Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4e3db797-e6ed-49ed-b91b-9293b89858dc eTag: 14846537485060172055 source: Fabric] Apr 30 12:50:40.547365 waagent[1847]: 2025-04-30T12:50:40.536240Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 30 12:50:40.547365 waagent[1847]: 2025-04-30T12:50:40.537209Z INFO Daemon Apr 30 12:50:40.547365 waagent[1847]: 2025-04-30T12:50:40.537802Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 30 12:50:40.550562 waagent[1847]: 2025-04-30T12:50:40.550515Z INFO Daemon Daemon Downloading artifacts profile blob Apr 30 12:50:40.697310 waagent[1847]: 2025-04-30T12:50:40.697211Z INFO Daemon Downloaded certificate {'thumbprint': '0B5A7BBF7F4C52BB5AADC0ED89E0EC7ECE7A31A1', 'hasPrivateKey': True} Apr 30 12:50:40.703988 waagent[1847]: 2025-04-30T12:50:40.698050Z INFO Daemon Fetch goal state completed Apr 30 12:50:40.747107 waagent[1847]: 2025-04-30T12:50:40.746976Z INFO Daemon Daemon Starting provisioning Apr 30 12:50:40.754175 waagent[1847]: 2025-04-30T12:50:40.747458Z INFO Daemon Daemon Handle ovf-env.xml. Apr 30 12:50:40.754175 waagent[1847]: 2025-04-30T12:50:40.748413Z INFO Daemon Daemon Set hostname [ci-4230.1.1-a-af46bb47a4] Apr 30 12:50:40.762997 waagent[1847]: 2025-04-30T12:50:40.762882Z INFO Daemon Daemon Publish hostname [ci-4230.1.1-a-af46bb47a4] Apr 30 12:50:40.770734 waagent[1847]: 2025-04-30T12:50:40.763422Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 30 12:50:40.770734 waagent[1847]: 2025-04-30T12:50:40.763924Z INFO Daemon Daemon Primary interface is [eth0] Apr 30 12:50:40.773637 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
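waagent's protocol detection above boils down to: confirm there is a route to the WireServer address 168.63.129.16, then talk to it over plain HTTP (the "Fabric preferred wire protocol version" exchange). A reachability probe equivalent to the "Test for route to 168.63.129.16" step can be as simple as a TCP connect, followed by the same ?comp=versions query the metadata agent fetched earlier; this is only a sketch of those two steps, not the agent's actual code:

    import socket
    import urllib.request

    WIRESERVER = "168.63.129.16"

    # Route/reachability test: can we open TCP port 80 on the wire server?
    try:
        with socket.create_connection((WIRESERVER, 80), timeout=5):
            print("route to wire server exists")
    except OSError as exc:
        raise SystemExit(f"wire server unreachable: {exc}")

    # Same endpoint the agents query first; only reachable from inside the VM.
    with urllib.request.urlopen(f"http://{WIRESERVER}/?comp=versions", timeout=5) as r:
        print(r.read().decode()[:200])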
Apr 30 12:50:40.773647 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:50:40.773703 systemd-networkd[1436]: eth0: DHCP lease lost Apr 30 12:50:40.775082 waagent[1847]: 2025-04-30T12:50:40.775014Z INFO Daemon Daemon Create user account if not exists Apr 30 12:50:40.789224 waagent[1847]: 2025-04-30T12:50:40.775393Z INFO Daemon Daemon User core already exists, skip useradd Apr 30 12:50:40.789224 waagent[1847]: 2025-04-30T12:50:40.776470Z INFO Daemon Daemon Configure sudoer Apr 30 12:50:40.789224 waagent[1847]: 2025-04-30T12:50:40.777499Z INFO Daemon Daemon Configure sshd Apr 30 12:50:40.789224 waagent[1847]: 2025-04-30T12:50:40.778197Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 30 12:50:40.789224 waagent[1847]: 2025-04-30T12:50:40.778722Z INFO Daemon Daemon Deploy ssh public key. Apr 30 12:50:40.816980 systemd-networkd[1436]: eth0: DHCPv4 address 10.200.4.14/24, gateway 10.200.4.1 acquired from 168.63.129.16 Apr 30 12:50:41.943851 waagent[1847]: 2025-04-30T12:50:41.943742Z INFO Daemon Daemon Provisioning complete Apr 30 12:50:41.957519 waagent[1847]: 2025-04-30T12:50:41.957450Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 30 12:50:41.965011 waagent[1847]: 2025-04-30T12:50:41.957801Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 30 12:50:41.965011 waagent[1847]: 2025-04-30T12:50:41.958851Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 30 12:50:42.085613 waagent[1926]: 2025-04-30T12:50:42.085501Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 30 12:50:42.086115 waagent[1926]: 2025-04-30T12:50:42.085673Z INFO ExtHandler ExtHandler OS: flatcar 4230.1.1 Apr 30 12:50:42.086115 waagent[1926]: 2025-04-30T12:50:42.085755Z INFO ExtHandler ExtHandler Python: 3.11.11 Apr 30 12:50:42.117377 waagent[1926]: 2025-04-30T12:50:42.117263Z INFO ExtHandler ExtHandler Distro: flatcar-4230.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 30 12:50:42.117658 waagent[1926]: 2025-04-30T12:50:42.117590Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 12:50:42.117774 waagent[1926]: 2025-04-30T12:50:42.117722Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 12:50:42.126826 waagent[1926]: 2025-04-30T12:50:42.126753Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 30 12:50:42.133658 waagent[1926]: 2025-04-30T12:50:42.133601Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.166 Apr 30 12:50:42.134156 waagent[1926]: 2025-04-30T12:50:42.134099Z INFO ExtHandler Apr 30 12:50:42.134239 waagent[1926]: 2025-04-30T12:50:42.134194Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5b93a312-8f10-4852-8bf9-d99c46de9e17 eTag: 14846537485060172055 source: Fabric] Apr 30 12:50:42.134561 waagent[1926]: 2025-04-30T12:50:42.134509Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
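"Deploy ssh public key" here and the earlier "Updated /home/core/.ssh/authorized_keys" line describe the same provisioning step: the public key from the goal state is appended to the core user's authorized_keys with SSH-strict permissions. A sketch of that file handling, with a placeholder key string:

    import os
    import pwd

    USER = "core"
    PUBKEY = "ssh-ed25519 AAAA...example user@example"  # placeholder key

    pw = pwd.getpwnam(USER)
    ssh_dir = os.path.join(pw.pw_dir, ".ssh")
    auth_keys = os.path.join(ssh_dir, "authorized_keys")

    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    with open(auth_keys, "a") as f:
        f.write(PUBKEY + "\n")
    os.chmod(auth_keys, 0o600)
    os.chown(ssh_dir, pw.pw_uid, pw.pw_gid)
    os.chown(auth_keys, pw.pw_uid, pw.pw_gid)
    print("installed key for", USER)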
Apr 30 12:50:42.135172 waagent[1926]: 2025-04-30T12:50:42.135115Z INFO ExtHandler Apr 30 12:50:42.135237 waagent[1926]: 2025-04-30T12:50:42.135202Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 30 12:50:42.139031 waagent[1926]: 2025-04-30T12:50:42.138989Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 30 12:50:42.218997 waagent[1926]: 2025-04-30T12:50:42.217255Z INFO ExtHandler Downloaded certificate {'thumbprint': '0B5A7BBF7F4C52BB5AADC0ED89E0EC7ECE7A31A1', 'hasPrivateKey': True} Apr 30 12:50:42.218997 waagent[1926]: 2025-04-30T12:50:42.218062Z INFO ExtHandler Fetch goal state completed Apr 30 12:50:42.235091 waagent[1926]: 2025-04-30T12:50:42.235015Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1926 Apr 30 12:50:42.235268 waagent[1926]: 2025-04-30T12:50:42.235198Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 30 12:50:42.236890 waagent[1926]: 2025-04-30T12:50:42.236830Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.1.1', '', 'Flatcar Container Linux by Kinvolk'] Apr 30 12:50:42.237281 waagent[1926]: 2025-04-30T12:50:42.237232Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 30 12:50:43.256537 waagent[1926]: 2025-04-30T12:50:43.256480Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 30 12:50:43.257031 waagent[1926]: 2025-04-30T12:50:43.256749Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 30 12:50:43.263785 waagent[1926]: 2025-04-30T12:50:43.263586Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 30 12:50:43.270607 systemd[1]: Reload requested from client PID 1939 ('systemctl') (unit waagent.service)... Apr 30 12:50:43.270626 systemd[1]: Reloading... Apr 30 12:50:43.352964 zram_generator::config[1974]: No configuration found. Apr 30 12:50:43.494598 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:50:43.608366 systemd[1]: Reloading finished in 337 ms. Apr 30 12:50:43.626398 waagent[1926]: 2025-04-30T12:50:43.625871Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 30 12:50:43.635290 systemd[1]: Reload requested from client PID 2035 ('systemctl') (unit waagent.service)... Apr 30 12:50:43.635312 systemd[1]: Reloading... Apr 30 12:50:43.718932 zram_generator::config[2070]: No configuration found. Apr 30 12:50:43.863329 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:50:43.977920 systemd[1]: Reloading finished in 342 ms. Apr 30 12:50:43.999391 waagent[1926]: 2025-04-30T12:50:43.999144Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 30 12:50:43.999531 waagent[1926]: 2025-04-30T12:50:43.999384Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 30 12:50:44.362993 waagent[1926]: 2025-04-30T12:50:44.362862Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Apr 30 12:50:44.363726 waagent[1926]: 2025-04-30T12:50:44.363646Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 30 12:50:44.364594 waagent[1926]: 2025-04-30T12:50:44.364527Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 30 12:50:44.365526 waagent[1926]: 2025-04-30T12:50:44.365470Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 30 12:50:44.365652 waagent[1926]: 2025-04-30T12:50:44.365594Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 12:50:44.365744 waagent[1926]: 2025-04-30T12:50:44.365698Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 30 12:50:44.366169 waagent[1926]: 2025-04-30T12:50:44.366118Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 12:50:44.366257 waagent[1926]: 2025-04-30T12:50:44.366195Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 30 12:50:44.366426 waagent[1926]: 2025-04-30T12:50:44.366380Z INFO EnvHandler ExtHandler Configure routes Apr 30 12:50:44.366546 waagent[1926]: 2025-04-30T12:50:44.366489Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 30 12:50:44.366808 waagent[1926]: 2025-04-30T12:50:44.366761Z INFO EnvHandler ExtHandler Gateway:None Apr 30 12:50:44.366866 waagent[1926]: 2025-04-30T12:50:44.366810Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 30 12:50:44.367171 waagent[1926]: 2025-04-30T12:50:44.367111Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 30 12:50:44.367432 waagent[1926]: 2025-04-30T12:50:44.367389Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Apr 30 12:50:44.367561 waagent[1926]: 2025-04-30T12:50:44.367512Z INFO EnvHandler ExtHandler Routes:None Apr 30 12:50:44.367970 waagent[1926]: 2025-04-30T12:50:44.367893Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Apr 30 12:50:44.368564 waagent[1926]: 2025-04-30T12:50:44.368410Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 30 12:50:44.371363 waagent[1926]: 2025-04-30T12:50:44.371313Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 30 12:50:44.371363 waagent[1926]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 30 12:50:44.371363 waagent[1926]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Apr 30 12:50:44.371363 waagent[1926]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 30 12:50:44.371363 waagent[1926]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 30 12:50:44.371363 waagent[1926]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 12:50:44.371363 waagent[1926]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 30 12:50:44.381927 waagent[1926]: 2025-04-30T12:50:44.380041Z INFO ExtHandler ExtHandler Apr 30 12:50:44.381927 waagent[1926]: 2025-04-30T12:50:44.380156Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6db309ca-22ae-4fcb-8a15-85ebf4b9d3c7 correlation 806c725f-7aba-4e8c-9639-231dc1753657 created: 2025-04-30T12:49:34.540581Z] Apr 30 12:50:44.381927 waagent[1926]: 2025-04-30T12:50:44.380629Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 30 12:50:44.381927 waagent[1926]: 2025-04-30T12:50:44.381536Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Apr 30 12:50:44.415263 waagent[1926]: 2025-04-30T12:50:44.415171Z INFO MonitorHandler ExtHandler Network interfaces: Apr 30 12:50:44.415263 waagent[1926]: Executing ['ip', '-a', '-o', 'link']: Apr 30 12:50:44.415263 waagent[1926]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 30 12:50:44.415263 waagent[1926]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:0e:99:d3 brd ff:ff:ff:ff:ff:ff Apr 30 12:50:44.415263 waagent[1926]: 3: enP61746s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:0e:99:d3 brd ff:ff:ff:ff:ff:ff\ altname enP61746p0s2 Apr 30 12:50:44.415263 waagent[1926]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 30 12:50:44.415263 waagent[1926]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 30 12:50:44.415263 waagent[1926]: 2: eth0 inet 10.200.4.14/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 30 12:50:44.415263 waagent[1926]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 30 12:50:44.415263 waagent[1926]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 30 12:50:44.415263 waagent[1926]: 2: eth0 inet6 fe80::6245:bdff:fe0e:99d3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 30 12:50:44.415263 waagent[1926]: 3: enP61746s1 inet6 fe80::6245:bdff:fe0e:99d3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 30 12:50:44.430931 waagent[1926]: 2025-04-30T12:50:44.429764Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E1830AD4-EE56-4239-9DBC-B682E4FEBBE8;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 30 12:50:44.474482 waagent[1926]: 2025-04-30T12:50:44.474401Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Apr 30 12:50:44.474482 waagent[1926]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 12:50:44.474482 waagent[1926]: pkts bytes target prot opt in out source destination Apr 30 12:50:44.474482 waagent[1926]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 12:50:44.474482 waagent[1926]: pkts bytes target prot opt in out source destination Apr 30 12:50:44.474482 waagent[1926]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 12:50:44.474482 waagent[1926]: pkts bytes target prot opt in out source destination Apr 30 12:50:44.474482 waagent[1926]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 12:50:44.474482 waagent[1926]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 12:50:44.474482 waagent[1926]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 30 12:50:44.477839 waagent[1926]: 2025-04-30T12:50:44.477773Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 30 12:50:44.477839 waagent[1926]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 12:50:44.477839 waagent[1926]: pkts bytes target prot opt in out source destination Apr 30 12:50:44.477839 waagent[1926]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 30 12:50:44.477839 waagent[1926]: pkts bytes target prot opt in out source destination Apr 30 12:50:44.477839 waagent[1926]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 30 12:50:44.477839 waagent[1926]: pkts bytes target prot opt in out source destination Apr 30 12:50:44.477839 waagent[1926]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 30 12:50:44.477839 waagent[1926]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 30 12:50:44.477839 waagent[1926]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 30 12:50:44.478296 waagent[1926]: 2025-04-30T12:50:44.478136Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 30 12:50:49.619686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 12:50:49.625153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:49.735485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:49.746255 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:50:50.314674 kubelet[2170]: E0430 12:50:50.314612 2170 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:50:50.318667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:50:50.318866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:50:50.319397 systemd[1]: kubelet.service: Consumed 141ms CPU time, 96M memory peak. Apr 30 12:50:51.867476 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 12:50:51.875238 systemd[1]: Started sshd@0-10.200.4.14:22-10.200.16.10:58708.service - OpenSSH per-connection server daemon (10.200.16.10:58708). 
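The routing table and firewall dumps above both revolve around the wire server: the hex columns from /proc/net/route are IPv4 addresses in host (little-endian on this x86_64 VM) byte order, so 0104C80A is the 10.200.4.1 gateway and 10813FA8 is 168.63.129.16 itself, and the iptables rules pin TCP traffic to that same address. Decoding the hex columns is a one-liner with the socket helpers:

    import socket
    import struct

    def decode(hex_addr):
        """Turn a /proc/net/route hex field (host byte order) into a dotted quad."""
        return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header line
        for line in f:
            iface, dest, gateway, flags, *_ = line.split()
            print(f"{iface}: dst {decode(dest)} via {decode(gateway)} flags {flags}")

    # Spot check against the log: 10813FA8 -> 168.63.129.16, 0104C80A -> 10.200.4.1
    assert decode("10813FA8") == "168.63.129.16"
    assert decode("0104C80A") == "10.200.4.1"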
Apr 30 12:50:52.552714 sshd[2179]: Accepted publickey for core from 10.200.16.10 port 58708 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:50:52.554272 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:50:52.558724 systemd-logind[1694]: New session 3 of user core. Apr 30 12:50:52.569084 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 12:50:53.115248 systemd[1]: Started sshd@1-10.200.4.14:22-10.200.16.10:58722.service - OpenSSH per-connection server daemon (10.200.16.10:58722). Apr 30 12:50:53.715358 sshd[2184]: Accepted publickey for core from 10.200.16.10 port 58722 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:50:53.717151 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:50:53.722614 systemd-logind[1694]: New session 4 of user core. Apr 30 12:50:53.730055 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 12:50:54.150452 sshd[2186]: Connection closed by 10.200.16.10 port 58722 Apr 30 12:50:54.151374 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Apr 30 12:50:54.155938 systemd[1]: sshd@1-10.200.4.14:22-10.200.16.10:58722.service: Deactivated successfully. Apr 30 12:50:54.158258 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 12:50:54.159056 systemd-logind[1694]: Session 4 logged out. Waiting for processes to exit. Apr 30 12:50:54.159967 systemd-logind[1694]: Removed session 4. Apr 30 12:50:54.262215 systemd[1]: Started sshd@2-10.200.4.14:22-10.200.16.10:58724.service - OpenSSH per-connection server daemon (10.200.16.10:58724). Apr 30 12:50:54.865712 sshd[2192]: Accepted publickey for core from 10.200.16.10 port 58724 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:50:54.867278 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:50:54.872121 systemd-logind[1694]: New session 5 of user core. Apr 30 12:50:54.879107 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 12:50:55.319418 sshd[2194]: Connection closed by 10.200.16.10 port 58724 Apr 30 12:50:55.320221 sshd-session[2192]: pam_unix(sshd:session): session closed for user core Apr 30 12:50:55.323225 systemd[1]: sshd@2-10.200.4.14:22-10.200.16.10:58724.service: Deactivated successfully. Apr 30 12:50:55.325374 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 12:50:55.326860 systemd-logind[1694]: Session 5 logged out. Waiting for processes to exit. Apr 30 12:50:55.327952 systemd-logind[1694]: Removed session 5. Apr 30 12:50:55.432220 systemd[1]: Started sshd@3-10.200.4.14:22-10.200.16.10:58740.service - OpenSSH per-connection server daemon (10.200.16.10:58740). Apr 30 12:50:56.037130 sshd[2200]: Accepted publickey for core from 10.200.16.10 port 58740 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:50:56.038895 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:50:56.044395 systemd-logind[1694]: New session 6 of user core. Apr 30 12:50:56.053098 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 12:50:56.497077 sshd[2202]: Connection closed by 10.200.16.10 port 58740 Apr 30 12:50:56.498084 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Apr 30 12:50:56.501466 systemd[1]: sshd@3-10.200.4.14:22-10.200.16.10:58740.service: Deactivated successfully. 
Apr 30 12:50:56.503621 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 12:50:56.505215 systemd-logind[1694]: Session 6 logged out. Waiting for processes to exit. Apr 30 12:50:56.506244 systemd-logind[1694]: Removed session 6. Apr 30 12:50:56.610218 systemd[1]: Started sshd@4-10.200.4.14:22-10.200.16.10:58754.service - OpenSSH per-connection server daemon (10.200.16.10:58754). Apr 30 12:50:57.210754 sshd[2208]: Accepted publickey for core from 10.200.16.10 port 58754 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:50:57.212529 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:50:57.218094 systemd-logind[1694]: New session 7 of user core. Apr 30 12:50:57.227090 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 12:50:57.728222 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 12:50:57.728615 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:50:59.197644 (dockerd)[2228]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 12:50:59.197938 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 12:51:00.369691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 12:51:00.376136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:51:00.535086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:51:00.541066 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:51:00.980175 chronyd[1722]: Selected source PHC0 Apr 30 12:51:01.037987 kubelet[2240]: E0430 12:51:01.037932 2240 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:51:01.040858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:51:01.041072 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:51:01.041476 systemd[1]: kubelet.service: Consumed 144ms CPU time, 97.8M memory peak. Apr 30 12:51:01.838020 dockerd[2228]: time="2025-04-30T12:51:01.837953324Z" level=info msg="Starting up" Apr 30 12:51:02.187296 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport967475254-merged.mount: Deactivated successfully. Apr 30 12:51:02.252206 dockerd[2228]: time="2025-04-30T12:51:02.252142309Z" level=info msg="Loading containers: start." Apr 30 12:51:02.540932 kernel: Initializing XFRM netlink socket Apr 30 12:51:02.704056 systemd-networkd[1436]: docker0: Link UP Apr 30 12:51:02.741062 dockerd[2228]: time="2025-04-30T12:51:02.741014520Z" level=info msg="Loading containers: done." 
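The kubelet keeps failing on the missing config file and systemd keeps scheduling a restart; the gap between each failure and the next "Scheduled restart job" line is about ten seconds, consistent with a RestartSec of roughly 10 in the unit (an assumption on my part, since the unit file itself is not in the log). The intervals can be checked directly from the timestamps above:

    from datetime import datetime

    # Failure time and the following "Scheduled restart job" time, from the log.
    pairs = [
        ("12:50:39.612416", "12:50:49.619686"),  # restart counter 1
        ("12:50:50.318866", "12:51:00.369691"),  # restart counter 2
    ]

    for failed, restarted in pairs:
        t0 = datetime.strptime(failed, "%H:%M:%S.%f")
        t1 = datetime.strptime(restarted, "%H:%M:%S.%f")
        print(f"restart scheduled {(t1 - t0).total_seconds():.1f}s after failure")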
Apr 30 12:51:02.764419 dockerd[2228]: time="2025-04-30T12:51:02.764376315Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 12:51:02.764625 dockerd[2228]: time="2025-04-30T12:51:02.764504015Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Apr 30 12:51:02.764678 dockerd[2228]: time="2025-04-30T12:51:02.764643115Z" level=info msg="Daemon has completed initialization" Apr 30 12:51:02.816021 dockerd[2228]: time="2025-04-30T12:51:02.815558906Z" level=info msg="API listen on /run/docker.sock" Apr 30 12:51:02.815976 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 12:51:04.752117 containerd[1715]: time="2025-04-30T12:51:04.752040576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 12:51:05.580310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723993352.mount: Deactivated successfully. Apr 30 12:51:07.284329 containerd[1715]: time="2025-04-30T12:51:07.284265576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:07.287155 containerd[1715]: time="2025-04-30T12:51:07.287099776Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881" Apr 30 12:51:07.290494 containerd[1715]: time="2025-04-30T12:51:07.290437976Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:07.295724 containerd[1715]: time="2025-04-30T12:51:07.295689676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:07.296986 containerd[1715]: time="2025-04-30T12:51:07.296751476Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.544661s" Apr 30 12:51:07.296986 containerd[1715]: time="2025-04-30T12:51:07.296793676Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 12:51:07.320013 containerd[1715]: time="2025-04-30T12:51:07.319973076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 12:51:09.327605 containerd[1715]: time="2025-04-30T12:51:09.327540510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:09.334804 containerd[1715]: time="2025-04-30T12:51:09.334747417Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542" Apr 30 12:51:09.337310 containerd[1715]: time="2025-04-30T12:51:09.337252524Z" level=info msg="ImageCreate event 
name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:09.342776 containerd[1715]: time="2025-04-30T12:51:09.342740658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:09.344033 containerd[1715]: time="2025-04-30T12:51:09.343744800Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.023732124s" Apr 30 12:51:09.344033 containerd[1715]: time="2025-04-30T12:51:09.343783302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 12:51:09.365774 containerd[1715]: time="2025-04-30T12:51:09.365730837Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 12:51:10.849188 containerd[1715]: time="2025-04-30T12:51:10.849130819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:10.851498 containerd[1715]: time="2025-04-30T12:51:10.851438617Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690" Apr 30 12:51:10.855345 containerd[1715]: time="2025-04-30T12:51:10.855289081Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:10.860824 containerd[1715]: time="2025-04-30T12:51:10.860765615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:10.861926 containerd[1715]: time="2025-04-30T12:51:10.861769857Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.495996319s" Apr 30 12:51:10.861926 containerd[1715]: time="2025-04-30T12:51:10.861808759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 12:51:10.885491 containerd[1715]: time="2025-04-30T12:51:10.885453466Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 12:51:11.119894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 12:51:11.125466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:51:11.223801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:51:11.228617 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:51:11.825516 kubelet[2520]: E0430 12:51:11.825416 2520 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:51:11.828050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:51:11.828433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:51:11.828850 systemd[1]: kubelet.service: Consumed 158ms CPU time, 94.1M memory peak. Apr 30 12:51:12.795424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820393046.mount: Deactivated successfully. Apr 30 12:51:13.303292 containerd[1715]: time="2025-04-30T12:51:13.303223240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:13.305140 containerd[1715]: time="2025-04-30T12:51:13.305083664Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825" Apr 30 12:51:13.307797 containerd[1715]: time="2025-04-30T12:51:13.307733698Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:13.311614 containerd[1715]: time="2025-04-30T12:51:13.311552046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:13.312450 containerd[1715]: time="2025-04-30T12:51:13.312225655Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.426728387s" Apr 30 12:51:13.312450 containerd[1715]: time="2025-04-30T12:51:13.312270855Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 12:51:13.335461 containerd[1715]: time="2025-04-30T12:51:13.335408648Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 12:51:13.993839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188868043.mount: Deactivated successfully. 
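The kubelet has now failed twice with the same error: /var/lib/kubelet/config.yaml does not exist yet (it is normally written later in provisioning, e.g. by kubeadm during init/join), so each start exits with status 1 and systemd schedules another restart. A minimal sketch that tallies these crash-loop iterations from a saved journal excerpt (hypothetical helper; assumes the journal has been exported to a plain-text file such as journal.txt):

    import re

    # Count kubelet restart jobs and config-file load failures in a plain-text
    # journal export (the file name is an assumption for illustration only).
    RESTART = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
    CONFIG_FAILURE = re.compile(r"failed to load kubelet config file")

    def summarize(path: str = "journal.txt") -> None:
        counters, failures = [], 0
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                if (m := RESTART.search(line)):
                    counters.append(int(m.group(1)))
                if CONFIG_FAILURE.search(line):
                    failures += 1
        print("restart counters seen:", counters)   # e.g. [2, 3, ...]
        print("config-file load failures:", failures)

    if __name__ == "__main__":
        summarize()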
Apr 30 12:51:15.177893 containerd[1715]: time="2025-04-30T12:51:15.177821802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:15.180983 containerd[1715]: time="2025-04-30T12:51:15.180925041Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Apr 30 12:51:15.184296 containerd[1715]: time="2025-04-30T12:51:15.184240083Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:15.189945 containerd[1715]: time="2025-04-30T12:51:15.189890155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:15.191031 containerd[1715]: time="2025-04-30T12:51:15.190994269Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.85554382s" Apr 30 12:51:15.191031 containerd[1715]: time="2025-04-30T12:51:15.191028269Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 12:51:15.212559 containerd[1715]: time="2025-04-30T12:51:15.212511342Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 12:51:15.814339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527911180.mount: Deactivated successfully. 
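Each completed pull pairs the bytes read with the elapsed wall time, so effective throughput can be read straight off the log: the coredns pull above moved 18185769 bytes in roughly 1.85554382 s (about 9.8 MB/s), and the earlier kube-apiserver pull moved 32674881 bytes in roughly 2.544661 s (about 12.8 MB/s). The same arithmetic as a short sketch, with the values copied from the log lines in this excerpt:

    # Approximate image-pull throughput from the "bytes read" counters and the
    # "in <time>" durations reported by containerd above.
    pulls = {
        "registry.k8s.io/kube-apiserver:v1.30.12": (32674881, 2.544661),
        "registry.k8s.io/coredns/coredns:v1.11.1": (18185769, 1.85554382),
    }

    for image, (bytes_read, seconds) in pulls.items():
        print(f"{image}: {bytes_read / seconds / 1e6:.1f} MB/s")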
Apr 30 12:51:15.835856 containerd[1715]: time="2025-04-30T12:51:15.835810842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:15.838438 containerd[1715]: time="2025-04-30T12:51:15.838366275Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Apr 30 12:51:15.842348 containerd[1715]: time="2025-04-30T12:51:15.842295924Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:15.846514 containerd[1715]: time="2025-04-30T12:51:15.846463277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:15.847345 containerd[1715]: time="2025-04-30T12:51:15.847179886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 634.626044ms" Apr 30 12:51:15.847345 containerd[1715]: time="2025-04-30T12:51:15.847218087Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 12:51:15.870207 containerd[1715]: time="2025-04-30T12:51:15.870164878Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 12:51:16.461669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831227453.mount: Deactivated successfully. Apr 30 12:51:18.781914 containerd[1715]: time="2025-04-30T12:51:18.781853935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:18.783895 containerd[1715]: time="2025-04-30T12:51:18.783832162Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Apr 30 12:51:18.786365 containerd[1715]: time="2025-04-30T12:51:18.786309695Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:18.791050 containerd[1715]: time="2025-04-30T12:51:18.790988258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:18.792238 containerd[1715]: time="2025-04-30T12:51:18.792061972Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.921860394s" Apr 30 12:51:18.792238 containerd[1715]: time="2025-04-30T12:51:18.792104173Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 12:51:20.455184 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Apr 30 12:51:21.860051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 12:51:21.866222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:51:21.884523 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 12:51:21.884643 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 12:51:21.885036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:51:21.892224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:51:21.924774 systemd[1]: Reload requested from client PID 2710 ('systemctl') (unit session-7.scope)... Apr 30 12:51:21.924794 systemd[1]: Reloading... Apr 30 12:51:22.057990 zram_generator::config[2757]: No configuration found. Apr 30 12:51:22.189389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:51:22.223765 update_engine[1696]: I20250430 12:51:22.222954 1696 update_attempter.cc:509] Updating boot flags... Apr 30 12:51:22.309499 systemd[1]: Reloading finished in 384 ms. Apr 30 12:51:22.578677 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 12:51:22.578848 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 12:51:22.579305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:51:22.579381 systemd[1]: kubelet.service: Consumed 104ms CPU time, 83.2M memory peak. Apr 30 12:51:22.586314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:51:22.635033 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2835) Apr 30 12:51:22.831107 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2838) Apr 30 12:51:23.010178 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2838) Apr 30 12:51:23.167414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:51:23.174065 (kubelet)[2967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:51:23.391075 kubelet[2967]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:51:23.391075 kubelet[2967]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:51:23.391075 kubelet[2967]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 12:51:23.391582 kubelet[2967]: I0430 12:51:23.391175 2967 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:51:23.791395 kubelet[2967]: I0430 12:51:23.791115 2967 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 12:51:23.791395 kubelet[2967]: I0430 12:51:23.791160 2967 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:51:23.791590 kubelet[2967]: I0430 12:51:23.791459 2967 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 12:51:23.809021 kubelet[2967]: I0430 12:51:23.808470 2967 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:51:23.809152 kubelet[2967]: E0430 12:51:23.809041 2967 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:23.820286 kubelet[2967]: I0430 12:51:23.820253 2967 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 12:51:23.822466 kubelet[2967]: I0430 12:51:23.822409 2967 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:51:23.822665 kubelet[2967]: I0430 12:51:23.822458 2967 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-af46bb47a4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 12:51:23.823083 kubelet[2967]: I0430 12:51:23.823060 2967 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:51:23.823083 kubelet[2967]: I0430 12:51:23.823086 2967 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 12:51:23.823237 kubelet[2967]: I0430 12:51:23.823220 2967 state_mem.go:36] "Initialized new in-memory 
state store" Apr 30 12:51:23.824333 kubelet[2967]: I0430 12:51:23.824313 2967 kubelet.go:400] "Attempting to sync node with API server" Apr 30 12:51:23.824333 kubelet[2967]: I0430 12:51:23.824335 2967 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:51:23.824453 kubelet[2967]: I0430 12:51:23.824367 2967 kubelet.go:312] "Adding apiserver pod source" Apr 30 12:51:23.824453 kubelet[2967]: I0430 12:51:23.824386 2967 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:51:23.831425 kubelet[2967]: W0430 12:51:23.831368 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:23.831425 kubelet[2967]: E0430 12:51:23.831423 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:23.831547 kubelet[2967]: I0430 12:51:23.831523 2967 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:51:23.834647 kubelet[2967]: I0430 12:51:23.833840 2967 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:51:23.834647 kubelet[2967]: W0430 12:51:23.833935 2967 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 12:51:23.835946 kubelet[2967]: I0430 12:51:23.835925 2967 server.go:1264] "Started kubelet" Apr 30 12:51:23.837607 kubelet[2967]: W0430 12:51:23.837564 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-af46bb47a4&limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:23.837693 kubelet[2967]: E0430 12:51:23.837621 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-af46bb47a4&limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:23.837693 kubelet[2967]: I0430 12:51:23.837663 2967 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:51:23.839932 kubelet[2967]: I0430 12:51:23.839745 2967 server.go:455] "Adding debug handlers to kubelet server" Apr 30 12:51:23.844931 kubelet[2967]: I0430 12:51:23.844366 2967 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:51:23.844931 kubelet[2967]: I0430 12:51:23.844839 2967 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:51:23.845589 kubelet[2967]: E0430 12:51:23.845460 2967 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-a-af46bb47a4.183b19a697a8aa3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-a-af46bb47a4,UID:ci-4230.1.1-a-af46bb47a4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-a-af46bb47a4,},FirstTimestamp:2025-04-30 12:51:23.835877948 +0000 UTC m=+0.658274207,LastTimestamp:2025-04-30 12:51:23.835877948 +0000 UTC m=+0.658274207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-a-af46bb47a4,}" Apr 30 12:51:23.846815 kubelet[2967]: I0430 12:51:23.846789 2967 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:51:23.851610 kubelet[2967]: E0430 12:51:23.850327 2967 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-af46bb47a4\" not found" Apr 30 12:51:23.851610 kubelet[2967]: I0430 12:51:23.850381 2967 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 12:51:23.851610 kubelet[2967]: I0430 12:51:23.850484 2967 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:51:23.851610 kubelet[2967]: I0430 12:51:23.850538 2967 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:51:23.851610 kubelet[2967]: W0430 12:51:23.850844 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:23.851610 kubelet[2967]: E0430 12:51:23.850893 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:23.851610 kubelet[2967]: E0430 12:51:23.851013 2967 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:51:23.851610 kubelet[2967]: E0430 12:51:23.851453 2967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-af46bb47a4?timeout=10s\": dial tcp 10.200.4.14:6443: connect: connection refused" interval="200ms" Apr 30 12:51:23.852345 kubelet[2967]: I0430 12:51:23.852321 2967 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:51:23.852523 kubelet[2967]: I0430 12:51:23.852506 2967 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:51:23.853971 kubelet[2967]: I0430 12:51:23.853954 2967 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:51:23.877311 kubelet[2967]: I0430 12:51:23.877286 2967 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:51:23.877311 kubelet[2967]: I0430 12:51:23.877304 2967 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:51:23.877481 kubelet[2967]: I0430 12:51:23.877328 2967 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:51:23.881537 kubelet[2967]: I0430 12:51:23.881515 2967 policy_none.go:49] "None policy: Start" Apr 30 12:51:23.882515 kubelet[2967]: I0430 12:51:23.882493 2967 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:51:23.882606 kubelet[2967]: I0430 12:51:23.882524 2967 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:51:23.890235 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 12:51:23.903840 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 12:51:23.907042 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 12:51:23.918663 kubelet[2967]: I0430 12:51:23.918635 2967 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:51:23.919012 kubelet[2967]: I0430 12:51:23.918864 2967 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:51:23.919246 kubelet[2967]: I0430 12:51:23.919137 2967 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:51:23.921375 kubelet[2967]: E0430 12:51:23.921217 2967 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-a-af46bb47a4\" not found" Apr 30 12:51:23.952748 kubelet[2967]: I0430 12:51:23.952683 2967 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:23.953202 kubelet[2967]: E0430 12:51:23.953165 2967 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.14:6443/api/v1/nodes\": dial tcp 10.200.4.14:6443: connect: connection refused" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:23.961679 kubelet[2967]: I0430 12:51:23.961645 2967 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:51:23.963402 kubelet[2967]: I0430 12:51:23.963087 2967 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:51:23.963402 kubelet[2967]: I0430 12:51:23.963112 2967 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:51:23.963402 kubelet[2967]: I0430 12:51:23.963142 2967 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 12:51:23.963402 kubelet[2967]: E0430 12:51:23.963193 2967 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 30 12:51:23.964601 kubelet[2967]: W0430 12:51:23.964547 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:23.966001 kubelet[2967]: E0430 12:51:23.965950 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:24.053043 kubelet[2967]: E0430 12:51:24.052848 2967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-af46bb47a4?timeout=10s\": dial tcp 10.200.4.14:6443: connect: connection refused" interval="400ms" Apr 30 12:51:24.064219 kubelet[2967]: I0430 12:51:24.064082 2967 topology_manager.go:215] "Topology Admit Handler" podUID="cbfeec3eeff1b14f5ba2a6c165899a94" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.066439 kubelet[2967]: I0430 12:51:24.066397 2967 topology_manager.go:215] "Topology Admit Handler" podUID="7b9dd61b94004695e516ac68482ea8a5" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.068413 kubelet[2967]: I0430 12:51:24.068156 2967 topology_manager.go:215] "Topology Admit Handler" podUID="bab729e493d131a326fc9d935ca468a4" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.075665 systemd[1]: Created slice kubepods-burstable-podcbfeec3eeff1b14f5ba2a6c165899a94.slice - libcontainer container kubepods-burstable-podcbfeec3eeff1b14f5ba2a6c165899a94.slice. Apr 30 12:51:24.097794 systemd[1]: Created slice kubepods-burstable-pod7b9dd61b94004695e516ac68482ea8a5.slice - libcontainer container kubepods-burstable-pod7b9dd61b94004695e516ac68482ea8a5.slice. Apr 30 12:51:24.110873 systemd[1]: Created slice kubepods-burstable-podbab729e493d131a326fc9d935ca468a4.slice - libcontainer container kubepods-burstable-podbab729e493d131a326fc9d935ca468a4.slice. 
Apr 30 12:51:24.155896 kubelet[2967]: I0430 12:51:24.155863 2967 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.156328 kubelet[2967]: E0430 12:51:24.156287 2967 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.14:6443/api/v1/nodes\": dial tcp 10.200.4.14:6443: connect: connection refused" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.252870 kubelet[2967]: I0430 12:51:24.252720 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.252870 kubelet[2967]: I0430 12:51:24.252779 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.252870 kubelet[2967]: I0430 12:51:24.252850 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.253375 kubelet[2967]: I0430 12:51:24.252931 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.253375 kubelet[2967]: I0430 12:51:24.252977 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bab729e493d131a326fc9d935ca468a4-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-af46bb47a4\" (UID: \"bab729e493d131a326fc9d935ca468a4\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.253375 kubelet[2967]: I0430 12:51:24.253002 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbfeec3eeff1b14f5ba2a6c165899a94-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-af46bb47a4\" (UID: \"cbfeec3eeff1b14f5ba2a6c165899a94\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.253375 kubelet[2967]: I0430 12:51:24.253027 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbfeec3eeff1b14f5ba2a6c165899a94-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-af46bb47a4\" (UID: \"cbfeec3eeff1b14f5ba2a6c165899a94\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.253375 kubelet[2967]: I0430 12:51:24.253054 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbfeec3eeff1b14f5ba2a6c165899a94-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-af46bb47a4\" (UID: \"cbfeec3eeff1b14f5ba2a6c165899a94\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.253535 kubelet[2967]: I0430 12:51:24.253099 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.395706 containerd[1715]: time="2025-04-30T12:51:24.395651637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-af46bb47a4,Uid:cbfeec3eeff1b14f5ba2a6c165899a94,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:24.409359 containerd[1715]: time="2025-04-30T12:51:24.409317519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-af46bb47a4,Uid:7b9dd61b94004695e516ac68482ea8a5,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:24.414871 containerd[1715]: time="2025-04-30T12:51:24.414787993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-af46bb47a4,Uid:bab729e493d131a326fc9d935ca468a4,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:24.453793 kubelet[2967]: E0430 12:51:24.453725 2967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-af46bb47a4?timeout=10s\": dial tcp 10.200.4.14:6443: connect: connection refused" interval="800ms" Apr 30 12:51:24.558954 kubelet[2967]: I0430 12:51:24.558917 2967 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.559341 kubelet[2967]: E0430 12:51:24.559307 2967 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.14:6443/api/v1/nodes\": dial tcp 10.200.4.14:6443: connect: connection refused" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:24.797261 kubelet[2967]: W0430 12:51:24.797123 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:24.797261 kubelet[2967]: E0430 12:51:24.797179 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:24.982371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523003009.mount: Deactivated successfully. 
Apr 30 12:51:25.004208 containerd[1715]: time="2025-04-30T12:51:25.004159177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:51:25.016532 containerd[1715]: time="2025-04-30T12:51:25.016376041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 30 12:51:25.019564 containerd[1715]: time="2025-04-30T12:51:25.019522583Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:51:25.023466 containerd[1715]: time="2025-04-30T12:51:25.023423435Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:51:25.032508 containerd[1715]: time="2025-04-30T12:51:25.032438674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:51:25.038364 containerd[1715]: time="2025-04-30T12:51:25.038319667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:51:25.039138 containerd[1715]: time="2025-04-30T12:51:25.039100580Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.313441ms" Apr 30 12:51:25.041878 containerd[1715]: time="2025-04-30T12:51:25.041839823Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:51:25.053045 containerd[1715]: time="2025-04-30T12:51:25.052579792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:51:25.061344 containerd[1715]: time="2025-04-30T12:51:25.061304030Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.880209ms" Apr 30 12:51:25.061555 containerd[1715]: time="2025-04-30T12:51:25.061525834Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 646.63844ms" Apr 30 12:51:25.114651 kubelet[2967]: W0430 12:51:25.114582 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-af46bb47a4&limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 
12:51:25.114651 kubelet[2967]: E0430 12:51:25.114640 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-af46bb47a4&limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:25.255050 kubelet[2967]: E0430 12:51:25.254997 2967 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-af46bb47a4?timeout=10s\": dial tcp 10.200.4.14:6443: connect: connection refused" interval="1.6s" Apr 30 12:51:25.362525 kubelet[2967]: I0430 12:51:25.362373 2967 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:25.363148 kubelet[2967]: E0430 12:51:25.363083 2967 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.14:6443/api/v1/nodes\": dial tcp 10.200.4.14:6443: connect: connection refused" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:25.416742 kubelet[2967]: W0430 12:51:25.416671 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:25.416742 kubelet[2967]: E0430 12:51:25.416741 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.14:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:25.490276 kubelet[2967]: W0430 12:51:25.489877 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:25.490276 kubelet[2967]: E0430 12:51:25.489977 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:25.997486 kubelet[2967]: E0430 12:51:25.997410 2967 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:26.786865 kubelet[2967]: W0430 12:51:26.786809 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-af46bb47a4&limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:26.786865 kubelet[2967]: E0430 12:51:26.786868 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-af46bb47a4&limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:26.856324 kubelet[2967]: E0430 12:51:26.856267 2967 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.200.4.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-af46bb47a4?timeout=10s\": dial tcp 10.200.4.14:6443: connect: connection refused" interval="3.2s" Apr 30 12:51:26.877234 containerd[1715]: time="2025-04-30T12:51:26.875548783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:26.877234 containerd[1715]: time="2025-04-30T12:51:26.876308095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:26.877234 containerd[1715]: time="2025-04-30T12:51:26.876356795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:26.877234 containerd[1715]: time="2025-04-30T12:51:26.876457797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:26.888202 containerd[1715]: time="2025-04-30T12:51:26.885003732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:26.888202 containerd[1715]: time="2025-04-30T12:51:26.885052233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:26.888202 containerd[1715]: time="2025-04-30T12:51:26.885067033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:26.888202 containerd[1715]: time="2025-04-30T12:51:26.885140334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:26.888202 containerd[1715]: time="2025-04-30T12:51:26.875521482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:26.888202 containerd[1715]: time="2025-04-30T12:51:26.880807866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:26.888202 containerd[1715]: time="2025-04-30T12:51:26.880826366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:26.888202 containerd[1715]: time="2025-04-30T12:51:26.880929368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:26.925770 systemd[1]: Started cri-containerd-7c6361c77c7661f954b8b7a39813cc1b2a019917ff5b5668bd44c241384205ec.scope - libcontainer container 7c6361c77c7661f954b8b7a39813cc1b2a019917ff5b5668bd44c241384205ec. Apr 30 12:51:26.933811 systemd[1]: Started cri-containerd-54511c3811b5f50c2b880f31e8f36992d885471b5dd7aabbbee4a90344a1bfef.scope - libcontainer container 54511c3811b5f50c2b880f31e8f36992d885471b5dd7aabbbee4a90344a1bfef. Apr 30 12:51:26.965081 systemd[1]: Started cri-containerd-f953f7e0a9bc7862a2110c1b89ec7868c318edd4662bbbb591371830928c44ce.scope - libcontainer container f953f7e0a9bc7862a2110c1b89ec7868c318edd4662bbbb591371830928c44ce. 
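The "Failed to ensure lease exists, will retry" interval doubles on every failed attempt: 200 ms, 400 ms, 800 ms, 1.6 s, and now 3.2 s in this excerpt. A minimal sketch of that doubling schedule (the kubelet eventually caps the interval, but the cap is not visible here, so it is left as an optional parameter rather than a claimed value):

    from itertools import count
    from typing import Iterator, Optional

    def lease_retry_intervals(start_s: float = 0.2, cap_s: Optional[float] = None) -> Iterator[float]:
        # Doubling retry schedule matching the intervals logged above:
        # 0.2 s, 0.4 s, 0.8 s, 1.6 s, 3.2 s, ...
        interval = start_s
        for _ in count():
            yield interval
            interval = interval * 2 if cap_s is None else min(interval * 2, cap_s)

    schedule = lease_retry_intervals()
    print([next(schedule) for _ in range(5)])  # [0.2, 0.4, 0.8, 1.6, 3.2]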
Apr 30 12:51:26.970594 kubelet[2967]: W0430 12:51:26.970518 2967 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:26.970594 kubelet[2967]: E0430 12:51:26.970598 2967 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.14:6443: connect: connection refused Apr 30 12:51:26.974386 kubelet[2967]: I0430 12:51:26.974357 2967 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:26.975425 kubelet[2967]: E0430 12:51:26.975389 2967 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.14:6443/api/v1/nodes\": dial tcp 10.200.4.14:6443: connect: connection refused" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:27.021926 containerd[1715]: time="2025-04-30T12:51:27.021809092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-af46bb47a4,Uid:cbfeec3eeff1b14f5ba2a6c165899a94,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c6361c77c7661f954b8b7a39813cc1b2a019917ff5b5668bd44c241384205ec\"" Apr 30 12:51:27.031124 containerd[1715]: time="2025-04-30T12:51:27.031058839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-af46bb47a4,Uid:bab729e493d131a326fc9d935ca468a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"54511c3811b5f50c2b880f31e8f36992d885471b5dd7aabbbee4a90344a1bfef\"" Apr 30 12:51:27.035028 containerd[1715]: time="2025-04-30T12:51:27.034710096Z" level=info msg="CreateContainer within sandbox \"54511c3811b5f50c2b880f31e8f36992d885471b5dd7aabbbee4a90344a1bfef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:51:27.035290 containerd[1715]: time="2025-04-30T12:51:27.035263805Z" level=info msg="CreateContainer within sandbox \"7c6361c77c7661f954b8b7a39813cc1b2a019917ff5b5668bd44c241384205ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:51:27.047617 containerd[1715]: time="2025-04-30T12:51:27.047457998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-af46bb47a4,Uid:7b9dd61b94004695e516ac68482ea8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f953f7e0a9bc7862a2110c1b89ec7868c318edd4662bbbb591371830928c44ce\"" Apr 30 12:51:27.051536 containerd[1715]: time="2025-04-30T12:51:27.051502561Z" level=info msg="CreateContainer within sandbox \"f953f7e0a9bc7862a2110c1b89ec7868c318edd4662bbbb591371830928c44ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:51:27.126827 containerd[1715]: time="2025-04-30T12:51:27.126769650Z" level=info msg="CreateContainer within sandbox \"7c6361c77c7661f954b8b7a39813cc1b2a019917ff5b5668bd44c241384205ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3d70c09736ec2b8d618bc80ccae7f5b53334f6cd2516fb678561c57a8159924\"" Apr 30 12:51:27.127658 containerd[1715]: time="2025-04-30T12:51:27.127616163Z" level=info msg="StartContainer for \"b3d70c09736ec2b8d618bc80ccae7f5b53334f6cd2516fb678561c57a8159924\"" Apr 30 12:51:27.153103 systemd[1]: Started cri-containerd-b3d70c09736ec2b8d618bc80ccae7f5b53334f6cd2516fb678561c57a8159924.scope - libcontainer 
container b3d70c09736ec2b8d618bc80ccae7f5b53334f6cd2516fb678561c57a8159924. Apr 30 12:51:27.173318 containerd[1715]: time="2025-04-30T12:51:27.173264884Z" level=info msg="CreateContainer within sandbox \"f953f7e0a9bc7862a2110c1b89ec7868c318edd4662bbbb591371830928c44ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d47cb4edaed32d7e637fed0ec94f154b19f5f81fefcce67270acb398285b4b8e\"" Apr 30 12:51:27.174127 containerd[1715]: time="2025-04-30T12:51:27.174076197Z" level=info msg="StartContainer for \"d47cb4edaed32d7e637fed0ec94f154b19f5f81fefcce67270acb398285b4b8e\"" Apr 30 12:51:27.179919 containerd[1715]: time="2025-04-30T12:51:27.177732155Z" level=info msg="CreateContainer within sandbox \"54511c3811b5f50c2b880f31e8f36992d885471b5dd7aabbbee4a90344a1bfef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a19b140db454fb033a08fa04f61bdf467b55bd7763dbd11acead77791108ba5b\"" Apr 30 12:51:27.179919 containerd[1715]: time="2025-04-30T12:51:27.179250279Z" level=info msg="StartContainer for \"a19b140db454fb033a08fa04f61bdf467b55bd7763dbd11acead77791108ba5b\"" Apr 30 12:51:27.210257 containerd[1715]: time="2025-04-30T12:51:27.210131667Z" level=info msg="StartContainer for \"b3d70c09736ec2b8d618bc80ccae7f5b53334f6cd2516fb678561c57a8159924\" returns successfully" Apr 30 12:51:27.235711 systemd[1]: Started cri-containerd-d47cb4edaed32d7e637fed0ec94f154b19f5f81fefcce67270acb398285b4b8e.scope - libcontainer container d47cb4edaed32d7e637fed0ec94f154b19f5f81fefcce67270acb398285b4b8e. Apr 30 12:51:27.245342 systemd[1]: Started cri-containerd-a19b140db454fb033a08fa04f61bdf467b55bd7763dbd11acead77791108ba5b.scope - libcontainer container a19b140db454fb033a08fa04f61bdf467b55bd7763dbd11acead77791108ba5b. 
Apr 30 12:51:27.268917 kubelet[2967]: E0430 12:51:27.268668 2967 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-a-af46bb47a4.183b19a697a8aa3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-a-af46bb47a4,UID:ci-4230.1.1-a-af46bb47a4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-a-af46bb47a4,},FirstTimestamp:2025-04-30 12:51:23.835877948 +0000 UTC m=+0.658274207,LastTimestamp:2025-04-30 12:51:23.835877948 +0000 UTC m=+0.658274207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-a-af46bb47a4,}" Apr 30 12:51:27.325306 containerd[1715]: time="2025-04-30T12:51:27.325177684Z" level=info msg="StartContainer for \"d47cb4edaed32d7e637fed0ec94f154b19f5f81fefcce67270acb398285b4b8e\" returns successfully" Apr 30 12:51:27.368022 containerd[1715]: time="2025-04-30T12:51:27.367975659Z" level=info msg="StartContainer for \"a19b140db454fb033a08fa04f61bdf467b55bd7763dbd11acead77791108ba5b\" returns successfully" Apr 30 12:51:29.714732 kubelet[2967]: E0430 12:51:29.714683 2967 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.1.1-a-af46bb47a4" not found Apr 30 12:51:30.060775 kubelet[2967]: E0430 12:51:30.060426 2967 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-a-af46bb47a4\" not found" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:30.062818 kubelet[2967]: E0430 12:51:30.062786 2967 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.1.1-a-af46bb47a4" not found Apr 30 12:51:30.177661 kubelet[2967]: I0430 12:51:30.177626 2967 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:30.188037 kubelet[2967]: I0430 12:51:30.188000 2967 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:30.195307 kubelet[2967]: E0430 12:51:30.195275 2967 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-af46bb47a4\" not found" Apr 30 12:51:30.296466 kubelet[2967]: E0430 12:51:30.296382 2967 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-af46bb47a4\" not found" Apr 30 12:51:30.397380 kubelet[2967]: E0430 12:51:30.397324 2967 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-af46bb47a4\" not found" Apr 30 12:51:30.498282 kubelet[2967]: E0430 12:51:30.498222 2967 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-af46bb47a4\" not found" Apr 30 12:51:30.833616 kubelet[2967]: I0430 12:51:30.833479 2967 apiserver.go:52] "Watching apiserver" Apr 30 12:51:30.851026 kubelet[2967]: I0430 12:51:30.850987 2967 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:51:32.397644 kubelet[2967]: W0430 12:51:32.396373 2967 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 12:51:32.445641 systemd[1]: Reload requested from client PID 3263 ('systemctl') (unit session-7.scope)... Apr 30 12:51:32.445660 systemd[1]: Reloading... Apr 30 12:51:32.565024 zram_generator::config[3310]: No configuration found. Apr 30 12:51:32.710958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:51:32.865736 systemd[1]: Reloading finished in 419 ms. Apr 30 12:51:32.894610 kubelet[2967]: E0430 12:51:32.894392 2967 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4230.1.1-a-af46bb47a4.183b19a697a8aa3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-a-af46bb47a4,UID:ci-4230.1.1-a-af46bb47a4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-a-af46bb47a4,},FirstTimestamp:2025-04-30 12:51:23.835877948 +0000 UTC m=+0.658274207,LastTimestamp:2025-04-30 12:51:23.835877948 +0000 UTC m=+0.658274207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-a-af46bb47a4,}" Apr 30 12:51:32.894762 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:51:32.896047 kubelet[2967]: I0430 12:51:32.895789 2967 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:51:32.911271 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:51:32.911558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:51:32.911633 systemd[1]: kubelet.service: Consumed 903ms CPU time, 114.6M memory peak. Apr 30 12:51:32.917225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:51:35.233641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:51:35.245334 (kubelet)[3377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:51:35.309077 kubelet[3377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:51:35.309077 kubelet[3377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:51:35.309077 kubelet[3377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 12:51:35.309593 kubelet[3377]: I0430 12:51:35.309143 3377 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:51:35.314800 kubelet[3377]: I0430 12:51:35.314768 3377 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 12:51:35.314800 kubelet[3377]: I0430 12:51:35.314791 3377 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:51:35.315167 kubelet[3377]: I0430 12:51:35.315083 3377 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 12:51:35.317748 kubelet[3377]: I0430 12:51:35.317722 3377 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 12:51:35.320047 kubelet[3377]: I0430 12:51:35.319756 3377 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:51:35.334032 kubelet[3377]: I0430 12:51:35.333624 3377 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 12:51:35.334155 kubelet[3377]: I0430 12:51:35.334023 3377 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:51:35.334954 kubelet[3377]: I0430 12:51:35.334067 3377 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-af46bb47a4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 12:51:35.334954 kubelet[3377]: I0430 12:51:35.334448 3377 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:51:35.334954 kubelet[3377]: I0430 12:51:35.334464 3377 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 12:51:35.334954 kubelet[3377]: I0430 12:51:35.334518 3377 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:51:35.334954 kubelet[3377]: I0430 12:51:35.334673 3377 kubelet.go:400] "Attempting to sync node with API server" Apr 30 12:51:35.335313 kubelet[3377]: I0430 12:51:35.334690 3377 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Apr 30 12:51:35.335313 kubelet[3377]: I0430 12:51:35.334933 3377 kubelet.go:312] "Adding apiserver pod source" Apr 30 12:51:35.335313 kubelet[3377]: I0430 12:51:35.335063 3377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:51:35.341281 kubelet[3377]: I0430 12:51:35.338393 3377 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:51:35.341281 kubelet[3377]: I0430 12:51:35.338634 3377 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:51:35.341281 kubelet[3377]: I0430 12:51:35.339750 3377 server.go:1264] "Started kubelet" Apr 30 12:51:35.345926 kubelet[3377]: I0430 12:51:35.344866 3377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:51:35.354879 kubelet[3377]: I0430 12:51:35.354621 3377 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:51:35.355262 kubelet[3377]: I0430 12:51:35.355197 3377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:51:35.356322 kubelet[3377]: I0430 12:51:35.356306 3377 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:51:35.357278 kubelet[3377]: I0430 12:51:35.357254 3377 server.go:455] "Adding debug handlers to kubelet server" Apr 30 12:51:35.358771 kubelet[3377]: E0430 12:51:35.358752 3377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-af46bb47a4\" not found" Apr 30 12:51:35.358935 kubelet[3377]: I0430 12:51:35.358924 3377 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 12:51:35.359755 kubelet[3377]: I0430 12:51:35.359087 3377 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:51:35.359755 kubelet[3377]: I0430 12:51:35.359220 3377 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:51:35.367163 kubelet[3377]: I0430 12:51:35.367117 3377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:51:35.368429 kubelet[3377]: I0430 12:51:35.368403 3377 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:51:35.368511 kubelet[3377]: I0430 12:51:35.368441 3377 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:51:35.368511 kubelet[3377]: I0430 12:51:35.368460 3377 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 12:51:35.368622 kubelet[3377]: E0430 12:51:35.368505 3377 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:51:35.378517 kubelet[3377]: I0430 12:51:35.378215 3377 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:51:35.378517 kubelet[3377]: I0430 12:51:35.378239 3377 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:51:35.378517 kubelet[3377]: I0430 12:51:35.378313 3377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:51:35.425740 kubelet[3377]: I0430 12:51:35.425691 3377 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:51:35.425740 kubelet[3377]: I0430 12:51:35.425710 3377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:51:35.425740 kubelet[3377]: I0430 12:51:35.425733 3377 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:51:35.426090 kubelet[3377]: I0430 12:51:35.425987 3377 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 12:51:35.426090 kubelet[3377]: I0430 12:51:35.426004 3377 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 12:51:35.426090 kubelet[3377]: I0430 12:51:35.426033 3377 policy_none.go:49] "None policy: Start" Apr 30 12:51:35.426663 kubelet[3377]: I0430 12:51:35.426631 3377 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:51:35.426663 kubelet[3377]: I0430 12:51:35.426662 3377 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:51:35.426843 kubelet[3377]: I0430 12:51:35.426830 3377 state_mem.go:75] "Updated machine memory state" Apr 30 12:51:35.432132 kubelet[3377]: I0430 12:51:35.432097 3377 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:51:35.432975 kubelet[3377]: I0430 12:51:35.432305 3377 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:51:35.432975 kubelet[3377]: I0430 12:51:35.432446 3377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:51:35.462856 kubelet[3377]: I0430 12:51:35.462820 3377 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.468710 kubelet[3377]: I0430 12:51:35.468652 3377 topology_manager.go:215] "Topology Admit Handler" podUID="bab729e493d131a326fc9d935ca468a4" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.468894 kubelet[3377]: I0430 12:51:35.468791 3377 topology_manager.go:215] "Topology Admit Handler" podUID="cbfeec3eeff1b14f5ba2a6c165899a94" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.468894 kubelet[3377]: I0430 12:51:35.468877 3377 topology_manager.go:215] "Topology Admit Handler" podUID="7b9dd61b94004695e516ac68482ea8a5" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.478629 kubelet[3377]: I0430 12:51:35.478591 3377 kubelet_node_status.go:112] "Node 
was previously registered" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.478845 kubelet[3377]: I0430 12:51:35.478682 3377 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.486718 kubelet[3377]: W0430 12:51:35.486292 3377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 12:51:35.486718 kubelet[3377]: W0430 12:51:35.486361 3377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 12:51:35.486718 kubelet[3377]: E0430 12:51:35.486423 3377 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.486718 kubelet[3377]: W0430 12:51:35.486669 3377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 12:51:35.560639 kubelet[3377]: I0430 12:51:35.560573 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbfeec3eeff1b14f5ba2a6c165899a94-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-af46bb47a4\" (UID: \"cbfeec3eeff1b14f5ba2a6c165899a94\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.560639 kubelet[3377]: I0430 12:51:35.560628 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.560896 kubelet[3377]: I0430 12:51:35.560659 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.560896 kubelet[3377]: I0430 12:51:35.560685 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.560896 kubelet[3377]: I0430 12:51:35.560717 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.560896 kubelet[3377]: I0430 12:51:35.560742 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/bab729e493d131a326fc9d935ca468a4-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-af46bb47a4\" (UID: \"bab729e493d131a326fc9d935ca468a4\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.560896 kubelet[3377]: I0430 12:51:35.560765 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbfeec3eeff1b14f5ba2a6c165899a94-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-af46bb47a4\" (UID: \"cbfeec3eeff1b14f5ba2a6c165899a94\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.561163 kubelet[3377]: I0430 12:51:35.560805 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbfeec3eeff1b14f5ba2a6c165899a94-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-af46bb47a4\" (UID: \"cbfeec3eeff1b14f5ba2a6c165899a94\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:35.561163 kubelet[3377]: I0430 12:51:35.560837 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b9dd61b94004695e516ac68482ea8a5-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-af46bb47a4\" (UID: \"7b9dd61b94004695e516ac68482ea8a5\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:36.338354 kubelet[3377]: I0430 12:51:36.338015 3377 apiserver.go:52] "Watching apiserver" Apr 30 12:51:36.359691 kubelet[3377]: I0430 12:51:36.359647 3377 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:51:36.426734 kubelet[3377]: W0430 12:51:36.426694 3377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 12:51:36.426950 kubelet[3377]: E0430 12:51:36.426794 3377 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-a-af46bb47a4\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-a-af46bb47a4" Apr 30 12:51:36.443894 kubelet[3377]: I0430 12:51:36.443830 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-af46bb47a4" podStartSLOduration=4.443809724 podStartE2EDuration="4.443809724s" podCreationTimestamp="2025-04-30 12:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:51:36.442116201 +0000 UTC m=+1.190154152" watchObservedRunningTime="2025-04-30 12:51:36.443809724 +0000 UTC m=+1.191847675" Apr 30 12:51:36.465932 kubelet[3377]: I0430 12:51:36.464634 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-a-af46bb47a4" podStartSLOduration=1.4646102060000001 podStartE2EDuration="1.464610206s" podCreationTimestamp="2025-04-30 12:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:51:36.454484369 +0000 UTC m=+1.202522320" watchObservedRunningTime="2025-04-30 12:51:36.464610206 +0000 UTC m=+1.212648157" Apr 30 12:51:36.480519 sudo[2211]: pam_unix(sudo:session): session closed for user root Apr 30 12:51:36.488076 kubelet[3377]: I0430 12:51:36.488007 
3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-a-af46bb47a4" podStartSLOduration=1.487985623 podStartE2EDuration="1.487985623s" podCreationTimestamp="2025-04-30 12:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:51:36.46487181 +0000 UTC m=+1.212909761" watchObservedRunningTime="2025-04-30 12:51:36.487985623 +0000 UTC m=+1.236023574" Apr 30 12:51:36.580609 sshd[2210]: Connection closed by 10.200.16.10 port 58754 Apr 30 12:51:36.581465 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Apr 30 12:51:36.584862 systemd[1]: sshd@4-10.200.4.14:22-10.200.16.10:58754.service: Deactivated successfully. Apr 30 12:51:36.587338 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 12:51:36.587567 systemd[1]: session-7.scope: Consumed 3.807s CPU time, 250.2M memory peak. Apr 30 12:51:36.589813 systemd-logind[1694]: Session 7 logged out. Waiting for processes to exit. Apr 30 12:51:36.591282 systemd-logind[1694]: Removed session 7. Apr 30 12:51:46.539636 kubelet[3377]: I0430 12:51:46.539578 3377 topology_manager.go:215] "Topology Admit Handler" podUID="69165aa7-0a82-4d62-85f2-b55b0557ceb2" podNamespace="kube-system" podName="kube-proxy-g95lg" Apr 30 12:51:46.552417 kubelet[3377]: I0430 12:51:46.552363 3377 topology_manager.go:215] "Topology Admit Handler" podUID="7d7bbda2-dfad-4bef-9f00-ccb5910d3071" podNamespace="kube-flannel" podName="kube-flannel-ds-zn2wk" Apr 30 12:51:46.554296 systemd[1]: Created slice kubepods-besteffort-pod69165aa7_0a82_4d62_85f2_b55b0557ceb2.slice - libcontainer container kubepods-besteffort-pod69165aa7_0a82_4d62_85f2_b55b0557ceb2.slice. Apr 30 12:51:46.555391 kubelet[3377]: I0430 12:51:46.555360 3377 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 12:51:46.557126 containerd[1715]: time="2025-04-30T12:51:46.557079939Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 12:51:46.558060 kubelet[3377]: I0430 12:51:46.558034 3377 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 12:51:46.575089 systemd[1]: Created slice kubepods-burstable-pod7d7bbda2_dfad_4bef_9f00_ccb5910d3071.slice - libcontainer container kubepods-burstable-pod7d7bbda2_dfad_4bef_9f00_ccb5910d3071.slice. 
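Editor's note: the `podStartSLOduration` values reported above line up with the gap between `podCreationTimestamp` and `watchObservedRunningTime` whenever no image pull happened (both pulling timestamps are the zero time): for the kube-apiserver entry, 12:51:36.487985623 minus 12:51:35 is 1.487985623 s. A small sketch that reproduces this arithmetic from the logged strings; treating the difference as the SLO duration is an inference from these entries, not a statement about kubelet internals.

```python
from datetime import datetime

def parse_k8s_time(ts: str):
    """Parse '2025-04-30 12:51:36.487985623 +0000 UTC m=+1.236023574' into a
    whole-second datetime plus fractional seconds (kept apart so the 9-digit
    nanosecond part is not lost to strptime's 6-digit %f)."""
    stamp = ts.split(" +0000")[0]              # '2025-04-30 12:51:36.487985623'
    whole, _, frac = stamp.partition(".")
    return datetime.strptime(whole, "%Y-%m-%d %H:%M:%S"), float("0." + (frac or "0"))

def slo_duration(created: str, observed: str) -> float:
    """Seconds between pod creation and the observed running time, as logged."""
    c_dt, c_frac = parse_k8s_time(created)
    o_dt, o_frac = parse_k8s_time(observed)
    return (o_dt - c_dt).total_seconds() + (o_frac - c_frac)

# Values copied from the kube-apiserver entry above.
print(slo_duration("2025-04-30 12:51:35 +0000 UTC",
                   "2025-04-30 12:51:36.487985623 +0000 UTC m=+1.236023574"))
# ~= 1.487985623, the logged podStartSLOduration
```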
Apr 30 12:51:46.627082 kubelet[3377]: I0430 12:51:46.627026 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5b4z\" (UniqueName: \"kubernetes.io/projected/69165aa7-0a82-4d62-85f2-b55b0557ceb2-kube-api-access-q5b4z\") pod \"kube-proxy-g95lg\" (UID: \"69165aa7-0a82-4d62-85f2-b55b0557ceb2\") " pod="kube-system/kube-proxy-g95lg" Apr 30 12:51:46.627082 kubelet[3377]: I0430 12:51:46.627082 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-flannel-cfg\") pod \"kube-flannel-ds-zn2wk\" (UID: \"7d7bbda2-dfad-4bef-9f00-ccb5910d3071\") " pod="kube-flannel/kube-flannel-ds-zn2wk" Apr 30 12:51:46.627345 kubelet[3377]: I0430 12:51:46.627110 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/69165aa7-0a82-4d62-85f2-b55b0557ceb2-kube-proxy\") pod \"kube-proxy-g95lg\" (UID: \"69165aa7-0a82-4d62-85f2-b55b0557ceb2\") " pod="kube-system/kube-proxy-g95lg" Apr 30 12:51:46.627345 kubelet[3377]: I0430 12:51:46.627133 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69165aa7-0a82-4d62-85f2-b55b0557ceb2-xtables-lock\") pod \"kube-proxy-g95lg\" (UID: \"69165aa7-0a82-4d62-85f2-b55b0557ceb2\") " pod="kube-system/kube-proxy-g95lg" Apr 30 12:51:46.627345 kubelet[3377]: I0430 12:51:46.627153 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9brk8\" (UniqueName: \"kubernetes.io/projected/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-kube-api-access-9brk8\") pod \"kube-flannel-ds-zn2wk\" (UID: \"7d7bbda2-dfad-4bef-9f00-ccb5910d3071\") " pod="kube-flannel/kube-flannel-ds-zn2wk" Apr 30 12:51:46.627345 kubelet[3377]: I0430 12:51:46.627175 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-cni-plugin\") pod \"kube-flannel-ds-zn2wk\" (UID: \"7d7bbda2-dfad-4bef-9f00-ccb5910d3071\") " pod="kube-flannel/kube-flannel-ds-zn2wk" Apr 30 12:51:46.627345 kubelet[3377]: I0430 12:51:46.627198 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-xtables-lock\") pod \"kube-flannel-ds-zn2wk\" (UID: \"7d7bbda2-dfad-4bef-9f00-ccb5910d3071\") " pod="kube-flannel/kube-flannel-ds-zn2wk" Apr 30 12:51:46.627535 kubelet[3377]: I0430 12:51:46.627220 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-cni\") pod \"kube-flannel-ds-zn2wk\" (UID: \"7d7bbda2-dfad-4bef-9f00-ccb5910d3071\") " pod="kube-flannel/kube-flannel-ds-zn2wk" Apr 30 12:51:46.627535 kubelet[3377]: I0430 12:51:46.627241 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69165aa7-0a82-4d62-85f2-b55b0557ceb2-lib-modules\") pod \"kube-proxy-g95lg\" (UID: \"69165aa7-0a82-4d62-85f2-b55b0557ceb2\") " pod="kube-system/kube-proxy-g95lg" Apr 30 12:51:46.627535 kubelet[3377]: I0430 12:51:46.627260 3377 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-run\") pod \"kube-flannel-ds-zn2wk\" (UID: \"7d7bbda2-dfad-4bef-9f00-ccb5910d3071\") " pod="kube-flannel/kube-flannel-ds-zn2wk" Apr 30 12:51:46.734485 kubelet[3377]: E0430 12:51:46.734081 3377 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 12:51:46.734485 kubelet[3377]: E0430 12:51:46.734120 3377 projected.go:200] Error preparing data for projected volume kube-api-access-q5b4z for pod kube-system/kube-proxy-g95lg: configmap "kube-root-ca.crt" not found Apr 30 12:51:46.734485 kubelet[3377]: E0430 12:51:46.734190 3377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69165aa7-0a82-4d62-85f2-b55b0557ceb2-kube-api-access-q5b4z podName:69165aa7-0a82-4d62-85f2-b55b0557ceb2 nodeName:}" failed. No retries permitted until 2025-04-30 12:51:47.234168057 +0000 UTC m=+11.982206108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q5b4z" (UniqueName: "kubernetes.io/projected/69165aa7-0a82-4d62-85f2-b55b0557ceb2-kube-api-access-q5b4z") pod "kube-proxy-g95lg" (UID: "69165aa7-0a82-4d62-85f2-b55b0557ceb2") : configmap "kube-root-ca.crt" not found Apr 30 12:51:46.735034 kubelet[3377]: E0430 12:51:46.734681 3377 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 12:51:46.735034 kubelet[3377]: E0430 12:51:46.734702 3377 projected.go:200] Error preparing data for projected volume kube-api-access-9brk8 for pod kube-flannel/kube-flannel-ds-zn2wk: configmap "kube-root-ca.crt" not found Apr 30 12:51:46.735264 kubelet[3377]: E0430 12:51:46.735217 3377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-kube-api-access-9brk8 podName:7d7bbda2-dfad-4bef-9f00-ccb5910d3071 nodeName:}" failed. No retries permitted until 2025-04-30 12:51:47.235193971 +0000 UTC m=+11.983231922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9brk8" (UniqueName: "kubernetes.io/projected/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-kube-api-access-9brk8") pod "kube-flannel-ds-zn2wk" (UID: "7d7bbda2-dfad-4bef-9f00-ccb5910d3071") : configmap "kube-root-ca.crt" not found Apr 30 12:51:47.332094 kubelet[3377]: E0430 12:51:47.332052 3377 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 12:51:47.332094 kubelet[3377]: E0430 12:51:47.332097 3377 projected.go:200] Error preparing data for projected volume kube-api-access-q5b4z for pod kube-system/kube-proxy-g95lg: configmap "kube-root-ca.crt" not found Apr 30 12:51:47.332352 kubelet[3377]: E0430 12:51:47.332151 3377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69165aa7-0a82-4d62-85f2-b55b0557ceb2-kube-api-access-q5b4z podName:69165aa7-0a82-4d62-85f2-b55b0557ceb2 nodeName:}" failed. No retries permitted until 2025-04-30 12:51:48.332134023 +0000 UTC m=+13.080172074 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-q5b4z" (UniqueName: "kubernetes.io/projected/69165aa7-0a82-4d62-85f2-b55b0557ceb2-kube-api-access-q5b4z") pod "kube-proxy-g95lg" (UID: "69165aa7-0a82-4d62-85f2-b55b0557ceb2") : configmap "kube-root-ca.crt" not found Apr 30 12:51:47.332601 kubelet[3377]: E0430 12:51:47.332052 3377 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 12:51:47.332601 kubelet[3377]: E0430 12:51:47.332506 3377 projected.go:200] Error preparing data for projected volume kube-api-access-9brk8 for pod kube-flannel/kube-flannel-ds-zn2wk: configmap "kube-root-ca.crt" not found Apr 30 12:51:47.332601 kubelet[3377]: E0430 12:51:47.332568 3377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-kube-api-access-9brk8 podName:7d7bbda2-dfad-4bef-9f00-ccb5910d3071 nodeName:}" failed. No retries permitted until 2025-04-30 12:51:48.332551029 +0000 UTC m=+13.080588980 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9brk8" (UniqueName: "kubernetes.io/projected/7d7bbda2-dfad-4bef-9f00-ccb5910d3071-kube-api-access-9brk8") pod "kube-flannel-ds-zn2wk" (UID: "7d7bbda2-dfad-4bef-9f00-ccb5910d3071") : configmap "kube-root-ca.crt" not found Apr 30 12:51:48.365202 containerd[1715]: time="2025-04-30T12:51:48.365148930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g95lg,Uid:69165aa7-0a82-4d62-85f2-b55b0557ceb2,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:48.378969 containerd[1715]: time="2025-04-30T12:51:48.378892518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zn2wk,Uid:7d7bbda2-dfad-4bef-9f00-ccb5910d3071,Namespace:kube-flannel,Attempt:0,}" Apr 30 12:51:48.416049 containerd[1715]: time="2025-04-30T12:51:48.415949824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:48.416049 containerd[1715]: time="2025-04-30T12:51:48.415993025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:48.416049 containerd[1715]: time="2025-04-30T12:51:48.416007225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:48.416467 containerd[1715]: time="2025-04-30T12:51:48.416100526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:48.455124 systemd[1]: Started cri-containerd-e11259233f35922e96b7fed397db0657a48c3e84e54293fc04eec44f8981a20b.scope - libcontainer container e11259233f35922e96b7fed397db0657a48c3e84e54293fc04eec44f8981a20b. Apr 30 12:51:48.458147 containerd[1715]: time="2025-04-30T12:51:48.457683394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:48.460377 containerd[1715]: time="2025-04-30T12:51:48.460036626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:48.460377 containerd[1715]: time="2025-04-30T12:51:48.460065427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:48.460377 containerd[1715]: time="2025-04-30T12:51:48.460202528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:48.485080 systemd[1]: Started cri-containerd-96722274dfae00188cf60b15b27c15ad8b9deaf495208cb88b1662cd02a80192.scope - libcontainer container 96722274dfae00188cf60b15b27c15ad8b9deaf495208cb88b1662cd02a80192. Apr 30 12:51:48.504158 containerd[1715]: time="2025-04-30T12:51:48.503544620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g95lg,Uid:69165aa7-0a82-4d62-85f2-b55b0557ceb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e11259233f35922e96b7fed397db0657a48c3e84e54293fc04eec44f8981a20b\"" Apr 30 12:51:48.510843 containerd[1715]: time="2025-04-30T12:51:48.510733119Z" level=info msg="CreateContainer within sandbox \"e11259233f35922e96b7fed397db0657a48c3e84e54293fc04eec44f8981a20b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 12:51:48.541170 containerd[1715]: time="2025-04-30T12:51:48.541113533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zn2wk,Uid:7d7bbda2-dfad-4bef-9f00-ccb5910d3071,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"96722274dfae00188cf60b15b27c15ad8b9deaf495208cb88b1662cd02a80192\"" Apr 30 12:51:48.543132 containerd[1715]: time="2025-04-30T12:51:48.542986759Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Apr 30 12:51:48.550432 containerd[1715]: time="2025-04-30T12:51:48.550398260Z" level=info msg="CreateContainer within sandbox \"e11259233f35922e96b7fed397db0657a48c3e84e54293fc04eec44f8981a20b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e37c340bee6ec3a339f3cfe0254768b7cdb1bd7601a9f6cb0ad2ae1df3cafb00\"" Apr 30 12:51:48.551462 containerd[1715]: time="2025-04-30T12:51:48.551332873Z" level=info msg="StartContainer for \"e37c340bee6ec3a339f3cfe0254768b7cdb1bd7601a9f6cb0ad2ae1df3cafb00\"" Apr 30 12:51:48.579087 systemd[1]: Started cri-containerd-e37c340bee6ec3a339f3cfe0254768b7cdb1bd7601a9f6cb0ad2ae1df3cafb00.scope - libcontainer container e37c340bee6ec3a339f3cfe0254768b7cdb1bd7601a9f6cb0ad2ae1df3cafb00. Apr 30 12:51:48.612036 containerd[1715]: time="2025-04-30T12:51:48.611987301Z" level=info msg="StartContainer for \"e37c340bee6ec3a339f3cfe0254768b7cdb1bd7601a9f6cb0ad2ae1df3cafb00\" returns successfully" Apr 30 12:51:49.449121 kubelet[3377]: I0430 12:51:49.448686 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g95lg" podStartSLOduration=3.448664613 podStartE2EDuration="3.448664613s" podCreationTimestamp="2025-04-30 12:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:51:49.448554512 +0000 UTC m=+14.196592463" watchObservedRunningTime="2025-04-30 12:51:49.448664613 +0000 UTC m=+14.196702664" Apr 30 12:51:50.625793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117234306.mount: Deactivated successfully. 
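Editor's note: the projected-volume mounts above fail because the `kube-root-ca.crt` ConfigMap does not exist yet, and each failed operation is re-queued with a growing delay, first `durationBeforeRetry 500ms`, then `1s`. That progression is consistent with a simple doubling backoff; the sketch below only reproduces the observed pattern, and the starting delay, factor and cap are assumptions rather than kubelet constants.

```python
def backoff_delays(initial: float = 0.5, factor: float = 2.0,
                   cap: float = 120.0, attempts: int = 8):
    """Yield retry delays that double from `initial` up to `cap` (illustrative only)."""
    delay = initial
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

print(list(backoff_delays(attempts=5)))
# [0.5, 1.0, 2.0, 4.0, 8.0] -- the log above shows the first two steps (500ms, then 1s)
```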
Apr 30 12:51:50.704881 containerd[1715]: time="2025-04-30T12:51:50.704826504Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:50.706645 containerd[1715]: time="2025-04-30T12:51:50.706585727Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Apr 30 12:51:50.708979 containerd[1715]: time="2025-04-30T12:51:50.708946659Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:50.713881 containerd[1715]: time="2025-04-30T12:51:50.713789623Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:50.715122 containerd[1715]: time="2025-04-30T12:51:50.714514833Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.171489574s" Apr 30 12:51:50.715122 containerd[1715]: time="2025-04-30T12:51:50.714552334Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Apr 30 12:51:50.717046 containerd[1715]: time="2025-04-30T12:51:50.717012866Z" level=info msg="CreateContainer within sandbox \"96722274dfae00188cf60b15b27c15ad8b9deaf495208cb88b1662cd02a80192\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Apr 30 12:51:50.752061 containerd[1715]: time="2025-04-30T12:51:50.752011334Z" level=info msg="CreateContainer within sandbox \"96722274dfae00188cf60b15b27c15ad8b9deaf495208cb88b1662cd02a80192\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"6195f06fd6af57ef1178bee6880d191ef3042567f6df84230263c867d9e8042c\"" Apr 30 12:51:50.752640 containerd[1715]: time="2025-04-30T12:51:50.752600542Z" level=info msg="StartContainer for \"6195f06fd6af57ef1178bee6880d191ef3042567f6df84230263c867d9e8042c\"" Apr 30 12:51:50.782105 systemd[1]: Started cri-containerd-6195f06fd6af57ef1178bee6880d191ef3042567f6df84230263c867d9e8042c.scope - libcontainer container 6195f06fd6af57ef1178bee6880d191ef3042567f6df84230263c867d9e8042c. Apr 30 12:51:50.806712 systemd[1]: cri-containerd-6195f06fd6af57ef1178bee6880d191ef3042567f6df84230263c867d9e8042c.scope: Deactivated successfully. 
Apr 30 12:51:50.810218 containerd[1715]: time="2025-04-30T12:51:50.809602804Z" level=info msg="StartContainer for \"6195f06fd6af57ef1178bee6880d191ef3042567f6df84230263c867d9e8042c\" returns successfully" Apr 30 12:51:50.936975 containerd[1715]: time="2025-04-30T12:51:50.936333498Z" level=info msg="shim disconnected" id=6195f06fd6af57ef1178bee6880d191ef3042567f6df84230263c867d9e8042c namespace=k8s.io Apr 30 12:51:50.936975 containerd[1715]: time="2025-04-30T12:51:50.936404299Z" level=warning msg="cleaning up after shim disconnected" id=6195f06fd6af57ef1178bee6880d191ef3042567f6df84230263c867d9e8042c namespace=k8s.io Apr 30 12:51:50.936975 containerd[1715]: time="2025-04-30T12:51:50.936415799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:51:51.443248 containerd[1715]: time="2025-04-30T12:51:51.443175972Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Apr 30 12:51:51.542616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6195f06fd6af57ef1178bee6880d191ef3042567f6df84230263c867d9e8042c-rootfs.mount: Deactivated successfully. Apr 30 12:51:53.543292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454596344.mount: Deactivated successfully. Apr 30 12:51:54.441581 containerd[1715]: time="2025-04-30T12:51:54.441515149Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:54.445219 containerd[1715]: time="2025-04-30T12:51:54.445141197Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Apr 30 12:51:54.448585 containerd[1715]: time="2025-04-30T12:51:54.448511542Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:54.454793 containerd[1715]: time="2025-04-30T12:51:54.454715425Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:54.455993 containerd[1715]: time="2025-04-30T12:51:54.455839640Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.012610867s" Apr 30 12:51:54.455993 containerd[1715]: time="2025-04-30T12:51:54.455880441Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Apr 30 12:51:54.458505 containerd[1715]: time="2025-04-30T12:51:54.458475676Z" level=info msg="CreateContainer within sandbox \"96722274dfae00188cf60b15b27c15ad8b9deaf495208cb88b1662cd02a80192\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 12:51:54.496886 containerd[1715]: time="2025-04-30T12:51:54.496833688Z" level=info msg="CreateContainer within sandbox \"96722274dfae00188cf60b15b27c15ad8b9deaf495208cb88b1662cd02a80192\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05\"" Apr 30 12:51:54.497849 containerd[1715]: time="2025-04-30T12:51:54.497540398Z" level=info msg="StartContainer for 
\"0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05\"" Apr 30 12:51:54.530857 systemd[1]: run-containerd-runc-k8s.io-0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05-runc.pfTTJV.mount: Deactivated successfully. Apr 30 12:51:54.538099 systemd[1]: Started cri-containerd-0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05.scope - libcontainer container 0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05. Apr 30 12:51:54.563392 systemd[1]: cri-containerd-0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05.scope: Deactivated successfully. Apr 30 12:51:54.570219 containerd[1715]: time="2025-04-30T12:51:54.570172569Z" level=info msg="StartContainer for \"0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05\" returns successfully" Apr 30 12:51:54.632700 kubelet[3377]: I0430 12:51:54.632378 3377 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 12:51:54.669924 kubelet[3377]: I0430 12:51:54.658402 3377 topology_manager.go:215] "Topology Admit Handler" podUID="639ccc2f-a18f-40d8-a52f-6557d4259016" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6x22q" Apr 30 12:51:54.669924 kubelet[3377]: I0430 12:51:54.664560 3377 topology_manager.go:215] "Topology Admit Handler" podUID="dc400bf5-7a18-4d4e-84f9-40c91095e62a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pxllk" Apr 30 12:51:54.678951 systemd[1]: Created slice kubepods-burstable-pod639ccc2f_a18f_40d8_a52f_6557d4259016.slice - libcontainer container kubepods-burstable-pod639ccc2f_a18f_40d8_a52f_6557d4259016.slice. Apr 30 12:51:54.684850 systemd[1]: Created slice kubepods-burstable-poddc400bf5_7a18_4d4e_84f9_40c91095e62a.slice - libcontainer container kubepods-burstable-poddc400bf5_7a18_4d4e_84f9_40c91095e62a.slice. 
Apr 30 12:51:54.783999 kubelet[3377]: I0430 12:51:54.783657 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc400bf5-7a18-4d4e-84f9-40c91095e62a-config-volume\") pod \"coredns-7db6d8ff4d-pxllk\" (UID: \"dc400bf5-7a18-4d4e-84f9-40c91095e62a\") " pod="kube-system/coredns-7db6d8ff4d-pxllk" Apr 30 12:51:54.783999 kubelet[3377]: I0430 12:51:54.783723 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6zlp\" (UniqueName: \"kubernetes.io/projected/639ccc2f-a18f-40d8-a52f-6557d4259016-kube-api-access-t6zlp\") pod \"coredns-7db6d8ff4d-6x22q\" (UID: \"639ccc2f-a18f-40d8-a52f-6557d4259016\") " pod="kube-system/coredns-7db6d8ff4d-6x22q" Apr 30 12:51:54.783999 kubelet[3377]: I0430 12:51:54.783751 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzc9z\" (UniqueName: \"kubernetes.io/projected/dc400bf5-7a18-4d4e-84f9-40c91095e62a-kube-api-access-hzc9z\") pod \"coredns-7db6d8ff4d-pxllk\" (UID: \"dc400bf5-7a18-4d4e-84f9-40c91095e62a\") " pod="kube-system/coredns-7db6d8ff4d-pxllk" Apr 30 12:51:54.783999 kubelet[3377]: I0430 12:51:54.783776 3377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/639ccc2f-a18f-40d8-a52f-6557d4259016-config-volume\") pod \"coredns-7db6d8ff4d-6x22q\" (UID: \"639ccc2f-a18f-40d8-a52f-6557d4259016\") " pod="kube-system/coredns-7db6d8ff4d-6x22q" Apr 30 12:51:54.983765 containerd[1715]: time="2025-04-30T12:51:54.983710196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6x22q,Uid:639ccc2f-a18f-40d8-a52f-6557d4259016,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:54.990515 containerd[1715]: time="2025-04-30T12:51:54.990475286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pxllk,Uid:dc400bf5-7a18-4d4e-84f9-40c91095e62a,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:55.151179 containerd[1715]: time="2025-04-30T12:51:55.151084833Z" level=info msg="shim disconnected" id=0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05 namespace=k8s.io Apr 30 12:51:55.151179 containerd[1715]: time="2025-04-30T12:51:55.151169934Z" level=warning msg="cleaning up after shim disconnected" id=0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05 namespace=k8s.io Apr 30 12:51:55.151179 containerd[1715]: time="2025-04-30T12:51:55.151199735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:51:55.227080 containerd[1715]: time="2025-04-30T12:51:55.227011048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6x22q,Uid:639ccc2f-a18f-40d8-a52f-6557d4259016,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b6ccbec9252efd24ea919977613cf5d28f32d51f843fafdb2bbffc7df277b5e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 30 12:51:55.227341 kubelet[3377]: E0430 12:51:55.227293 3377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6ccbec9252efd24ea919977613cf5d28f32d51f843fafdb2bbffc7df277b5e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 30 
12:51:55.227449 kubelet[3377]: E0430 12:51:55.227376 3377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6ccbec9252efd24ea919977613cf5d28f32d51f843fafdb2bbffc7df277b5e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6x22q" Apr 30 12:51:55.227449 kubelet[3377]: E0430 12:51:55.227403 3377 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b6ccbec9252efd24ea919977613cf5d28f32d51f843fafdb2bbffc7df277b5e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6x22q" Apr 30 12:51:55.227536 kubelet[3377]: E0430 12:51:55.227467 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6x22q_kube-system(639ccc2f-a18f-40d8-a52f-6557d4259016)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6x22q_kube-system(639ccc2f-a18f-40d8-a52f-6557d4259016)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b6ccbec9252efd24ea919977613cf5d28f32d51f843fafdb2bbffc7df277b5e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-6x22q" podUID="639ccc2f-a18f-40d8-a52f-6557d4259016" Apr 30 12:51:55.231062 containerd[1715]: time="2025-04-30T12:51:55.231003801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pxllk,Uid:dc400bf5-7a18-4d4e-84f9-40c91095e62a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52e41ce5ee92b235cd5e324a9e589d737dd534044557568234eeac827eb006e7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 30 12:51:55.231282 kubelet[3377]: E0430 12:51:55.231220 3377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e41ce5ee92b235cd5e324a9e589d737dd534044557568234eeac827eb006e7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Apr 30 12:51:55.231376 kubelet[3377]: E0430 12:51:55.231311 3377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e41ce5ee92b235cd5e324a9e589d737dd534044557568234eeac827eb006e7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pxllk" Apr 30 12:51:55.231376 kubelet[3377]: E0430 12:51:55.231338 3377 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52e41ce5ee92b235cd5e324a9e589d737dd534044557568234eeac827eb006e7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pxllk" Apr 30 12:51:55.231464 kubelet[3377]: E0430 12:51:55.231393 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-pxllk_kube-system(dc400bf5-7a18-4d4e-84f9-40c91095e62a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pxllk_kube-system(dc400bf5-7a18-4d4e-84f9-40c91095e62a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52e41ce5ee92b235cd5e324a9e589d737dd534044557568234eeac827eb006e7\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-pxllk" podUID="dc400bf5-7a18-4d4e-84f9-40c91095e62a" Apr 30 12:51:55.457003 containerd[1715]: time="2025-04-30T12:51:55.456846720Z" level=info msg="CreateContainer within sandbox \"96722274dfae00188cf60b15b27c15ad8b9deaf495208cb88b1662cd02a80192\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Apr 30 12:51:55.489284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0796e4a1d5fe3d3c6cfd717ac07dd988fd2de32b843d09f318c84c8a86147e05-rootfs.mount: Deactivated successfully. Apr 30 12:51:55.497464 containerd[1715]: time="2025-04-30T12:51:55.497416462Z" level=info msg="CreateContainer within sandbox \"96722274dfae00188cf60b15b27c15ad8b9deaf495208cb88b1662cd02a80192\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"414a7a09c4eb9b74666eb13850f441a061119ba5b4c572888d4695bf511d8a90\"" Apr 30 12:51:55.498580 containerd[1715]: time="2025-04-30T12:51:55.498035971Z" level=info msg="StartContainer for \"414a7a09c4eb9b74666eb13850f441a061119ba5b4c572888d4695bf511d8a90\"" Apr 30 12:51:55.539110 systemd[1]: Started cri-containerd-414a7a09c4eb9b74666eb13850f441a061119ba5b4c572888d4695bf511d8a90.scope - libcontainer container 414a7a09c4eb9b74666eb13850f441a061119ba5b4c572888d4695bf511d8a90. 
Apr 30 12:51:55.567890 containerd[1715]: time="2025-04-30T12:51:55.567842804Z" level=info msg="StartContainer for \"414a7a09c4eb9b74666eb13850f441a061119ba5b4c572888d4695bf511d8a90\" returns successfully" Apr 30 12:51:56.471780 kubelet[3377]: I0430 12:51:56.471692 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-zn2wk" podStartSLOduration=4.557071779 podStartE2EDuration="10.471663684s" podCreationTimestamp="2025-04-30 12:51:46 +0000 UTC" firstStartedPulling="2025-04-30 12:51:48.542425651 +0000 UTC m=+13.290463602" lastFinishedPulling="2025-04-30 12:51:54.457017556 +0000 UTC m=+19.205055507" observedRunningTime="2025-04-30 12:51:56.471307779 +0000 UTC m=+21.219345830" watchObservedRunningTime="2025-04-30 12:51:56.471663684 +0000 UTC m=+21.219701735" Apr 30 12:51:56.694868 systemd-networkd[1436]: flannel.1: Link UP Apr 30 12:51:56.694879 systemd-networkd[1436]: flannel.1: Gained carrier Apr 30 12:51:58.487136 systemd-networkd[1436]: flannel.1: Gained IPv6LL Apr 30 12:52:06.370569 containerd[1715]: time="2025-04-30T12:52:06.370454255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6x22q,Uid:639ccc2f-a18f-40d8-a52f-6557d4259016,Namespace:kube-system,Attempt:0,}" Apr 30 12:52:06.420298 systemd-networkd[1436]: cni0: Link UP Apr 30 12:52:06.420308 systemd-networkd[1436]: cni0: Gained carrier Apr 30 12:52:06.424812 systemd-networkd[1436]: cni0: Lost carrier Apr 30 12:52:06.484986 systemd-networkd[1436]: veth8b642101: Link UP Apr 30 12:52:06.491047 kernel: cni0: port 1(veth8b642101) entered blocking state Apr 30 12:52:06.491166 kernel: cni0: port 1(veth8b642101) entered disabled state Apr 30 12:52:06.491198 kernel: veth8b642101: entered allmulticast mode Apr 30 12:52:06.492675 kernel: veth8b642101: entered promiscuous mode Apr 30 12:52:06.498465 kernel: cni0: port 1(veth8b642101) entered blocking state Apr 30 12:52:06.498555 kernel: cni0: port 1(veth8b642101) entered forwarding state Apr 30 12:52:06.498586 kernel: cni0: port 1(veth8b642101) entered disabled state Apr 30 12:52:06.506416 kernel: cni0: port 1(veth8b642101) entered blocking state Apr 30 12:52:06.506516 kernel: cni0: port 1(veth8b642101) entered forwarding state Apr 30 12:52:06.506727 systemd-networkd[1436]: veth8b642101: Gained carrier Apr 30 12:52:06.507118 systemd-networkd[1436]: cni0: Gained carrier Apr 30 12:52:06.509327 containerd[1715]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Apr 30 12:52:06.509327 containerd[1715]: delegateAdd: netconf sent to delegate plugin: Apr 30 12:52:06.530791 containerd[1715]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-04-30T12:52:06.530685663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:52:06.530791 containerd[1715]: time="2025-04-30T12:52:06.530758864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:52:06.531121 containerd[1715]: time="2025-04-30T12:52:06.530776064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:52:06.531121 containerd[1715]: time="2025-04-30T12:52:06.530975466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:52:06.563095 systemd[1]: Started cri-containerd-7071465b5358cc7bb761ee24e0ef151e6d8e68bb7f6b785d555700bb20aa2fee.scope - libcontainer container 7071465b5358cc7bb761ee24e0ef151e6d8e68bb7f6b785d555700bb20aa2fee. Apr 30 12:52:06.601896 containerd[1715]: time="2025-04-30T12:52:06.601843554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6x22q,Uid:639ccc2f-a18f-40d8-a52f-6557d4259016,Namespace:kube-system,Attempt:0,} returns sandbox id \"7071465b5358cc7bb761ee24e0ef151e6d8e68bb7f6b785d555700bb20aa2fee\"" Apr 30 12:52:06.605298 containerd[1715]: time="2025-04-30T12:52:06.605258797Z" level=info msg="CreateContainer within sandbox \"7071465b5358cc7bb761ee24e0ef151e6d8e68bb7f6b785d555700bb20aa2fee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:52:06.638260 containerd[1715]: time="2025-04-30T12:52:06.638068908Z" level=info msg="CreateContainer within sandbox \"7071465b5358cc7bb761ee24e0ef151e6d8e68bb7f6b785d555700bb20aa2fee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7668e0cf4b9f725f43e9b083569e85dc665f30b4befc0e70d0263b91e6dc063c\"" Apr 30 12:52:06.639438 containerd[1715]: time="2025-04-30T12:52:06.639399725Z" level=info msg="StartContainer for \"7668e0cf4b9f725f43e9b083569e85dc665f30b4befc0e70d0263b91e6dc063c\"" Apr 30 12:52:06.666105 systemd[1]: Started cri-containerd-7668e0cf4b9f725f43e9b083569e85dc665f30b4befc0e70d0263b91e6dc063c.scope - libcontainer container 7668e0cf4b9f725f43e9b083569e85dc665f30b4befc0e70d0263b91e6dc063c. 
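Editor's note: in the netconf dump above, the route the flannel meta-plugin hands to its bridge delegate is printed in Go form as `Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}`, which is the `"dst":"192.168.0.0/17"` that then appears in the delegate JSON, while the node's own IPAM range stays `192.168.0.0/24`. A short sketch of that mask-to-prefix conversion; it only reproduces numbers already present in the log.

```python
def mask_to_prefix(mask_bytes: bytes) -> int:
    """Count the 1-bits of a contiguous netmask, e.g. ff ff 80 00 -> 17."""
    return sum(bin(b).count("1") for b in mask_bytes)

print(mask_to_prefix(bytes([0xFF, 0xFF, 0x80, 0x00])))  # 17 -> 192.168.0.0/17 (flannel network route)
print(mask_to_prefix(bytes([0xFF, 0xFF, 0xFF, 0x00])))  # 24 -> 192.168.0.0/24 (this node's pod subnet)
```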
Apr 30 12:52:06.694646 containerd[1715]: time="2025-04-30T12:52:06.694590617Z" level=info msg="StartContainer for \"7668e0cf4b9f725f43e9b083569e85dc665f30b4befc0e70d0263b91e6dc063c\" returns successfully" Apr 30 12:52:07.370740 containerd[1715]: time="2025-04-30T12:52:07.370199583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pxllk,Uid:dc400bf5-7a18-4d4e-84f9-40c91095e62a,Namespace:kube-system,Attempt:0,}" Apr 30 12:52:07.431514 systemd-networkd[1436]: veth797db877: Link UP Apr 30 12:52:07.436917 kernel: cni0: port 2(veth797db877) entered blocking state Apr 30 12:52:07.436990 kernel: cni0: port 2(veth797db877) entered disabled state Apr 30 12:52:07.437009 kernel: veth797db877: entered allmulticast mode Apr 30 12:52:07.439182 kernel: veth797db877: entered promiscuous mode Apr 30 12:52:07.448396 kernel: cni0: port 2(veth797db877) entered blocking state Apr 30 12:52:07.448699 kernel: cni0: port 2(veth797db877) entered forwarding state Apr 30 12:52:07.448833 systemd-networkd[1436]: veth797db877: Gained carrier Apr 30 12:52:07.450360 containerd[1715]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} Apr 30 12:52:07.450360 containerd[1715]: delegateAdd: netconf sent to delegate plugin: Apr 30 12:52:07.472210 containerd[1715]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-04-30T12:52:07.472109060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:52:07.472210 containerd[1715]: time="2025-04-30T12:52:07.472154561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:52:07.472210 containerd[1715]: time="2025-04-30T12:52:07.472168561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:52:07.472736 containerd[1715]: time="2025-04-30T12:52:07.472295563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:52:07.514074 systemd[1]: Started cri-containerd-68fd6e4d023369f256935759a44d79bdac38e8c146c3330f47ca9db3e8576044.scope - libcontainer container 68fd6e4d023369f256935759a44d79bdac38e8c146c3330f47ca9db3e8576044. 
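Editor's note: the delegate config for both coredns sandboxes sets `"mtu":1450` on the `cbr0` bridge, which fits pods whose traffic leaves through the `flannel.1` VXLAN interface brought up earlier: VXLAN encapsulation adds roughly 50 bytes of outer headers, so 1500 minus 50 gives 1450. The 1500-byte underlay MTU and the 50-byte overhead breakdown are assumptions used for this back-of-the-envelope check, not values printed in the log.

```python
# Back-of-the-envelope check for the mtu:1450 seen in the delegate config above.
# Assumed: a 1500-byte underlay MTU and IPv4 VXLAN overhead of 14+20+8+8 = 50 bytes.
UNDERLAY_MTU = 1500
VXLAN_OVERHEAD = 14 + 20 + 8 + 8   # outer Ethernet + IPv4 + UDP + VXLAN header
print(UNDERLAY_MTU - VXLAN_OVERHEAD)  # 1450
```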
Apr 30 12:52:07.526507 kubelet[3377]: I0430 12:52:07.526439 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6x22q" podStartSLOduration=20.526414641 podStartE2EDuration="20.526414641s" podCreationTimestamp="2025-04-30 12:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:52:07.505316477 +0000 UTC m=+32.253354428" watchObservedRunningTime="2025-04-30 12:52:07.526414641 +0000 UTC m=+32.274452692" Apr 30 12:52:07.575332 systemd-networkd[1436]: veth8b642101: Gained IPv6LL Apr 30 12:52:07.577440 containerd[1715]: time="2025-04-30T12:52:07.577403280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pxllk,Uid:dc400bf5-7a18-4d4e-84f9-40c91095e62a,Namespace:kube-system,Attempt:0,} returns sandbox id \"68fd6e4d023369f256935759a44d79bdac38e8c146c3330f47ca9db3e8576044\"" Apr 30 12:52:07.582015 containerd[1715]: time="2025-04-30T12:52:07.581983237Z" level=info msg="CreateContainer within sandbox \"68fd6e4d023369f256935759a44d79bdac38e8c146c3330f47ca9db3e8576044\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:52:07.622158 containerd[1715]: time="2025-04-30T12:52:07.622028439Z" level=info msg="CreateContainer within sandbox \"68fd6e4d023369f256935759a44d79bdac38e8c146c3330f47ca9db3e8576044\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"494451a28bd440f08ee6019634a6bfc09e0861865c1d2ce85c625886e34a0ea6\"" Apr 30 12:52:07.623197 containerd[1715]: time="2025-04-30T12:52:07.623143353Z" level=info msg="StartContainer for \"494451a28bd440f08ee6019634a6bfc09e0861865c1d2ce85c625886e34a0ea6\"" Apr 30 12:52:07.648081 systemd[1]: Started cri-containerd-494451a28bd440f08ee6019634a6bfc09e0861865c1d2ce85c625886e34a0ea6.scope - libcontainer container 494451a28bd440f08ee6019634a6bfc09e0861865c1d2ce85c625886e34a0ea6. Apr 30 12:52:07.676834 containerd[1715]: time="2025-04-30T12:52:07.676559423Z" level=info msg="StartContainer for \"494451a28bd440f08ee6019634a6bfc09e0861865c1d2ce85c625886e34a0ea6\" returns successfully" Apr 30 12:52:08.151089 systemd-networkd[1436]: cni0: Gained IPv6LL Apr 30 12:52:08.501597 kubelet[3377]: I0430 12:52:08.501263 3377 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pxllk" podStartSLOduration=21.501232357 podStartE2EDuration="21.501232357s" podCreationTimestamp="2025-04-30 12:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:52:08.499754739 +0000 UTC m=+33.247792790" watchObservedRunningTime="2025-04-30 12:52:08.501232357 +0000 UTC m=+33.249270308" Apr 30 12:52:08.919152 systemd-networkd[1436]: veth797db877: Gained IPv6LL Apr 30 12:53:10.446245 systemd[1]: Started sshd@5-10.200.4.14:22-10.200.16.10:46960.service - OpenSSH per-connection server daemon (10.200.16.10:46960). Apr 30 12:53:11.051479 sshd[4516]: Accepted publickey for core from 10.200.16.10 port 46960 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:11.053256 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:11.058317 systemd-logind[1694]: New session 8 of user core. Apr 30 12:53:11.063069 systemd[1]: Started session-8.scope - Session 8 of User core. 
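The kubelet's pod_startup_latency_tracker lines report podStartSLOduration of roughly 20.5s and 21.5s for the two coredns pods; with firstStartedPulling/lastFinishedPulling at their zero value, that figure is simply the observed running time minus podCreationTimestamp. A small sketch reproducing the arithmetic for coredns-7db6d8ff4d-6x22q from the timestamps in the entry above:

```go
// Quick sketch of the arithmetic behind the "Observed pod startup
// duration" line: with no image pulls recorded, podStartSLOduration is
// observed running time minus podCreationTimestamp. Timestamps are
// copied from the log entry (monotonic "m=+..." suffix dropped).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-04-30 12:51:47 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-04-30 12:52:07.526414641 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 20.526414641s, matching podStartSLOduration for coredns-7db6d8ff4d-6x22q.
	fmt.Println(observed.Sub(created))
}
```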
Apr 30 12:53:11.606994 sshd[4518]: Connection closed by 10.200.16.10 port 46960 Apr 30 12:53:11.607986 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:11.612803 systemd[1]: sshd@5-10.200.4.14:22-10.200.16.10:46960.service: Deactivated successfully. Apr 30 12:53:11.617787 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 12:53:11.619738 systemd-logind[1694]: Session 8 logged out. Waiting for processes to exit. Apr 30 12:53:11.623521 systemd-logind[1694]: Removed session 8. Apr 30 12:53:16.721218 systemd[1]: Started sshd@6-10.200.4.14:22-10.200.16.10:46974.service - OpenSSH per-connection server daemon (10.200.16.10:46974). Apr 30 12:53:17.318731 sshd[4552]: Accepted publickey for core from 10.200.16.10 port 46974 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:17.320389 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:17.324853 systemd-logind[1694]: New session 9 of user core. Apr 30 12:53:17.330055 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 12:53:17.817227 sshd[4575]: Connection closed by 10.200.16.10 port 46974 Apr 30 12:53:17.818153 sshd-session[4552]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:17.823099 systemd[1]: sshd@6-10.200.4.14:22-10.200.16.10:46974.service: Deactivated successfully. Apr 30 12:53:17.825518 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 12:53:17.826571 systemd-logind[1694]: Session 9 logged out. Waiting for processes to exit. Apr 30 12:53:17.827645 systemd-logind[1694]: Removed session 9. Apr 30 12:53:22.921258 systemd[1]: Started sshd@7-10.200.4.14:22-10.200.16.10:55742.service - OpenSSH per-connection server daemon (10.200.16.10:55742). Apr 30 12:53:23.520669 sshd[4611]: Accepted publickey for core from 10.200.16.10 port 55742 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:23.522298 sshd-session[4611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:23.526875 systemd-logind[1694]: New session 10 of user core. Apr 30 12:53:23.532056 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 12:53:24.014700 sshd[4613]: Connection closed by 10.200.16.10 port 55742 Apr 30 12:53:24.015622 sshd-session[4611]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:24.019994 systemd[1]: sshd@7-10.200.4.14:22-10.200.16.10:55742.service: Deactivated successfully. Apr 30 12:53:24.022105 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 12:53:24.023293 systemd-logind[1694]: Session 10 logged out. Waiting for processes to exit. Apr 30 12:53:24.024404 systemd-logind[1694]: Removed session 10. Apr 30 12:53:24.134239 systemd[1]: Started sshd@8-10.200.4.14:22-10.200.16.10:55752.service - OpenSSH per-connection server daemon (10.200.16.10:55752). Apr 30 12:53:24.737400 sshd[4626]: Accepted publickey for core from 10.200.16.10 port 55752 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:24.738848 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:24.744165 systemd-logind[1694]: New session 11 of user core. Apr 30 12:53:24.747106 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 30 12:53:25.283271 sshd[4628]: Connection closed by 10.200.16.10 port 55752 Apr 30 12:53:25.284280 sshd-session[4626]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:25.288139 systemd[1]: sshd@8-10.200.4.14:22-10.200.16.10:55752.service: Deactivated successfully. Apr 30 12:53:25.290593 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 12:53:25.292318 systemd-logind[1694]: Session 11 logged out. Waiting for processes to exit. Apr 30 12:53:25.293528 systemd-logind[1694]: Removed session 11. Apr 30 12:53:25.394252 systemd[1]: Started sshd@9-10.200.4.14:22-10.200.16.10:55760.service - OpenSSH per-connection server daemon (10.200.16.10:55760). Apr 30 12:53:25.994514 sshd[4638]: Accepted publickey for core from 10.200.16.10 port 55760 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:25.996193 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:26.000792 systemd-logind[1694]: New session 12 of user core. Apr 30 12:53:26.011098 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 12:53:26.484767 sshd[4640]: Connection closed by 10.200.16.10 port 55760 Apr 30 12:53:26.485524 sshd-session[4638]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:26.490140 systemd[1]: sshd@9-10.200.4.14:22-10.200.16.10:55760.service: Deactivated successfully. Apr 30 12:53:26.492512 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 12:53:26.494268 systemd-logind[1694]: Session 12 logged out. Waiting for processes to exit. Apr 30 12:53:26.495463 systemd-logind[1694]: Removed session 12. Apr 30 12:53:31.597466 systemd[1]: Started sshd@10-10.200.4.14:22-10.200.16.10:48666.service - OpenSSH per-connection server daemon (10.200.16.10:48666). Apr 30 12:53:32.207915 sshd[4673]: Accepted publickey for core from 10.200.16.10 port 48666 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:32.210645 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:32.215284 systemd-logind[1694]: New session 13 of user core. Apr 30 12:53:32.221090 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 12:53:32.719494 sshd[4696]: Connection closed by 10.200.16.10 port 48666 Apr 30 12:53:32.720661 sshd-session[4673]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:32.724029 systemd[1]: sshd@10-10.200.4.14:22-10.200.16.10:48666.service: Deactivated successfully. Apr 30 12:53:32.727538 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 12:53:32.730060 systemd-logind[1694]: Session 13 logged out. Waiting for processes to exit. Apr 30 12:53:32.731036 systemd-logind[1694]: Removed session 13. Apr 30 12:53:32.832235 systemd[1]: Started sshd@11-10.200.4.14:22-10.200.16.10:48682.service - OpenSSH per-connection server daemon (10.200.16.10:48682). Apr 30 12:53:33.441149 sshd[4708]: Accepted publickey for core from 10.200.16.10 port 48682 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:33.442997 sshd-session[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:33.448348 systemd-logind[1694]: New session 14 of user core. Apr 30 12:53:33.455085 systemd[1]: Started session-14.scope - Session 14 of User core. 
Apr 30 12:53:33.998216 sshd[4710]: Connection closed by 10.200.16.10 port 48682 Apr 30 12:53:33.999135 sshd-session[4708]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:34.002882 systemd[1]: sshd@11-10.200.4.14:22-10.200.16.10:48682.service: Deactivated successfully. Apr 30 12:53:34.005390 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 12:53:34.007100 systemd-logind[1694]: Session 14 logged out. Waiting for processes to exit. Apr 30 12:53:34.008294 systemd-logind[1694]: Removed session 14. Apr 30 12:53:34.111235 systemd[1]: Started sshd@12-10.200.4.14:22-10.200.16.10:48698.service - OpenSSH per-connection server daemon (10.200.16.10:48698). Apr 30 12:53:34.711391 sshd[4719]: Accepted publickey for core from 10.200.16.10 port 48698 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:34.712973 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:34.717572 systemd-logind[1694]: New session 15 of user core. Apr 30 12:53:34.732082 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 12:53:36.661016 sshd[4721]: Connection closed by 10.200.16.10 port 48698 Apr 30 12:53:36.661883 sshd-session[4719]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:36.666190 systemd[1]: sshd@12-10.200.4.14:22-10.200.16.10:48698.service: Deactivated successfully. Apr 30 12:53:36.669141 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 12:53:36.670857 systemd-logind[1694]: Session 15 logged out. Waiting for processes to exit. Apr 30 12:53:36.672047 systemd-logind[1694]: Removed session 15. Apr 30 12:53:36.780243 systemd[1]: Started sshd@13-10.200.4.14:22-10.200.16.10:48700.service - OpenSSH per-connection server daemon (10.200.16.10:48700). Apr 30 12:53:37.380095 sshd[4741]: Accepted publickey for core from 10.200.16.10 port 48700 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:37.381602 sshd-session[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:37.386132 systemd-logind[1694]: New session 16 of user core. Apr 30 12:53:37.391068 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 12:53:38.113607 sshd[4764]: Connection closed by 10.200.16.10 port 48700 Apr 30 12:53:38.115487 sshd-session[4741]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:38.121739 systemd[1]: sshd@13-10.200.4.14:22-10.200.16.10:48700.service: Deactivated successfully. Apr 30 12:53:38.124068 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 12:53:38.125062 systemd-logind[1694]: Session 16 logged out. Waiting for processes to exit. Apr 30 12:53:38.126200 systemd-logind[1694]: Removed session 16. Apr 30 12:53:38.229227 systemd[1]: Started sshd@14-10.200.4.14:22-10.200.16.10:48706.service - OpenSSH per-connection server daemon (10.200.16.10:48706). Apr 30 12:53:38.828116 sshd[4774]: Accepted publickey for core from 10.200.16.10 port 48706 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:38.829667 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:38.834320 systemd-logind[1694]: New session 17 of user core. Apr 30 12:53:38.838065 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 30 12:53:39.314411 sshd[4776]: Connection closed by 10.200.16.10 port 48706 Apr 30 12:53:39.315407 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:39.319032 systemd[1]: sshd@14-10.200.4.14:22-10.200.16.10:48706.service: Deactivated successfully. Apr 30 12:53:39.321544 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 12:53:39.323206 systemd-logind[1694]: Session 17 logged out. Waiting for processes to exit. Apr 30 12:53:39.324410 systemd-logind[1694]: Removed session 17. Apr 30 12:53:44.428220 systemd[1]: Started sshd@15-10.200.4.14:22-10.200.16.10:35674.service - OpenSSH per-connection server daemon (10.200.16.10:35674). Apr 30 12:53:45.031169 sshd[4812]: Accepted publickey for core from 10.200.16.10 port 35674 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:45.032716 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:45.037449 systemd-logind[1694]: New session 18 of user core. Apr 30 12:53:45.045060 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 12:53:45.543018 sshd[4814]: Connection closed by 10.200.16.10 port 35674 Apr 30 12:53:45.543798 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:45.546972 systemd[1]: sshd@15-10.200.4.14:22-10.200.16.10:35674.service: Deactivated successfully. Apr 30 12:53:45.549380 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 12:53:45.551341 systemd-logind[1694]: Session 18 logged out. Waiting for processes to exit. Apr 30 12:53:45.552403 systemd-logind[1694]: Removed session 18. Apr 30 12:53:50.658239 systemd[1]: Started sshd@16-10.200.4.14:22-10.200.16.10:56794.service - OpenSSH per-connection server daemon (10.200.16.10:56794). Apr 30 12:53:51.267065 sshd[4848]: Accepted publickey for core from 10.200.16.10 port 56794 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:51.268893 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:51.273969 systemd-logind[1694]: New session 19 of user core. Apr 30 12:53:51.277089 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 12:53:51.777686 sshd[4850]: Connection closed by 10.200.16.10 port 56794 Apr 30 12:53:51.778537 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:51.782709 systemd[1]: sshd@16-10.200.4.14:22-10.200.16.10:56794.service: Deactivated successfully. Apr 30 12:53:51.785190 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 12:53:51.786202 systemd-logind[1694]: Session 19 logged out. Waiting for processes to exit. Apr 30 12:53:51.787328 systemd-logind[1694]: Removed session 19. Apr 30 12:53:56.893224 systemd[1]: Started sshd@17-10.200.4.14:22-10.200.16.10:56808.service - OpenSSH per-connection server daemon (10.200.16.10:56808). Apr 30 12:53:57.492156 sshd[4883]: Accepted publickey for core from 10.200.16.10 port 56808 ssh2: RSA SHA256:IYow7hr8uYdfeTVHwFZpDLmtGZC4tZvjajKHomejV4A Apr 30 12:53:57.493989 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:53:57.498682 systemd-logind[1694]: New session 20 of user core. Apr 30 12:53:57.505085 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 30 12:53:57.982837 sshd[4906]: Connection closed by 10.200.16.10 port 56808 Apr 30 12:53:57.983745 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Apr 30 12:53:57.987883 systemd[1]: sshd@17-10.200.4.14:22-10.200.16.10:56808.service: Deactivated successfully. Apr 30 12:53:57.990068 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 12:53:57.990885 systemd-logind[1694]: Session 20 logged out. Waiting for processes to exit. Apr 30 12:53:57.991850 systemd-logind[1694]: Removed session 20.
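From 12:53:10 onward the log is the same SSH lifecycle repeating for sessions 8 through 20: systemd spawns a per-connection sshd@… unit, sshd accepts the core user's publickey, pam_unix and systemd-logind open session N and start session-N.scope, and on disconnect the scope is deactivated and the session removed. As a rough, unofficial sketch, the Go program below pairs the "New session" / "Removed session" lines from a journal dump shaped like this one to report per-session durations; the timestamp prefix, message wording, and regexes are assumptions based on the entries above.

```go
// Illustrative sketch (not a supported tool): pair systemd-logind's
// "New session N" / "Removed session N" lines from a journal dump on
// stdin and report how long each SSH session lasted. Line format is
// assumed from the entries above (no year in the prefix, so 2025 is
// hard-coded here).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	newRe     = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.`)
	removedRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

// parseStamp parses the short journal prefix ("Apr 30 12:53:11.058317"),
// assuming year 2025 since the log omits it.
func parseStamp(s string) (time.Time, error) {
	return time.Parse("2006 Jan 2 15:04:05.999999", "2025 "+s)
}

func main() {
	opened := map[string]time.Time{} // session id -> start time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := parseStamp(m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			t, err := parseStamp(m[1])
			start, ok := opened[m[2]]
			if err == nil && ok {
				fmt.Printf("session %s: %s\n", m[2], t.Sub(start).Round(time.Millisecond))
				delete(opened, m[2])
			}
		}
	}
}
```

Run over the entries above, it would show each of sessions 8 through 20 lasting on the order of a second or two between "New session" and "Removed session".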