Jan 29 16:24:33.090702 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:24:33.090753 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:24:33.090768 kernel: BIOS-provided physical RAM map:
Jan 29 16:24:33.090778 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 16:24:33.090789 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 29 16:24:33.090800 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 29 16:24:33.090813 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 29 16:24:33.090824 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 29 16:24:33.090838 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 29 16:24:33.090849 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 29 16:24:33.090860 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 29 16:24:33.090871 kernel: printk: bootconsole [earlyser0] enabled
Jan 29 16:24:33.090882 kernel: NX (Execute Disable) protection: active
Jan 29 16:24:33.090893 kernel: APIC: Static calls initialized
Jan 29 16:24:33.090910 kernel: efi: EFI v2.7 by Microsoft
Jan 29 16:24:33.090923 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018
Jan 29 16:24:33.090936 kernel: random: crng init done
Jan 29 16:24:33.090948 kernel: secureboot: Secure boot disabled
Jan 29 16:24:33.090960 kernel: SMBIOS 3.1.0 present.
Jan 29 16:24:33.090973 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 29 16:24:33.090985 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 29 16:24:33.090997 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 29 16:24:33.091010 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 29 16:24:33.091022 kernel: Hyper-V: Nested features: 0x1e0101
Jan 29 16:24:33.091036 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 29 16:24:33.091049 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 29 16:24:33.091061 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 29 16:24:33.091074 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 29 16:24:33.091087 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 29 16:24:33.091100 kernel: tsc: Detected 2593.908 MHz processor
Jan 29 16:24:33.091113 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:24:33.091126 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:24:33.091138 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 29 16:24:33.091154 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 16:24:33.091166 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:24:33.091179 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 29 16:24:33.091191 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 29 16:24:33.091216 kernel: Using GB pages for direct mapping
Jan 29 16:24:33.091229 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:24:33.091242 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 29 16:24:33.091261 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:24:33.091278 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:24:33.091291 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 29 16:24:33.091304 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 29 16:24:33.091318 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:24:33.091332 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:24:33.091345 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:24:33.091361 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:24:33.091375 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:24:33.091388 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:24:33.091402 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:24:33.091415 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 29 16:24:33.091428 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 29 16:24:33.091441 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 29 16:24:33.091455 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 29 16:24:33.091468 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 29 16:24:33.091484 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 29 16:24:33.091497 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 29 16:24:33.091510 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 29 16:24:33.091524 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 29 16:24:33.091537 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 29 16:24:33.091551 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 16:24:33.091564 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 16:24:33.091577 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 29 16:24:33.091591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 29 16:24:33.091607 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 29 16:24:33.091620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 29 16:24:33.091634 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 29 16:24:33.091647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 29 16:24:33.091660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 29 16:24:33.091673 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 29 16:24:33.091687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 29 16:24:33.091700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 29 16:24:33.091716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 29 16:24:33.091729 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 29 16:24:33.091743 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 29 16:24:33.091756 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 29 16:24:33.091769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 29 16:24:33.091783 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 29 16:24:33.091797 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 29 16:24:33.091810 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 29 16:24:33.091824 kernel: Zone ranges:
Jan 29 16:24:33.091840 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:24:33.091853 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 16:24:33.091867 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 29 16:24:33.091880 kernel: Movable zone start for each node
Jan 29 16:24:33.091893 kernel: Early memory node ranges
Jan 29 16:24:33.091907 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 16:24:33.091920 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 29 16:24:33.091933 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 29 16:24:33.091947 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 29 16:24:33.091963 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 29 16:24:33.091976 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:24:33.091990 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 16:24:33.092003 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 29 16:24:33.092016 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 29 16:24:33.092030 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 29 16:24:33.092043 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:24:33.092057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:24:33.092070 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:24:33.092087 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 29 16:24:33.092100 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 16:24:33.092113 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 29 16:24:33.092127 kernel: Booting paravirtualized kernel on Hyper-V
Jan 29 16:24:33.092141 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:24:33.092154 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 16:24:33.092168 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 16:24:33.092181 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 16:24:33.092194 kernel: pcpu-alloc: [0] 0 1
Jan 29 16:24:33.092217 kernel: Hyper-V: PV spinlocks enabled
Jan 29 16:24:33.092230 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 16:24:33.092245 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:24:33.092260 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:24:33.092273 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 29 16:24:33.092286 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:24:33.092299 kernel: Fallback order for Node 0: 0
Jan 29 16:24:33.092312 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 29 16:24:33.092329 kernel: Policy zone: Normal
Jan 29 16:24:33.092352 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:24:33.092366 kernel: software IO TLB: area num 2.
Jan 29 16:24:33.092384 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 312164K reserved, 0K cma-reserved)
Jan 29 16:24:33.092399 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:24:33.092413 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:24:33.092427 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:24:33.092441 kernel: Dynamic Preempt: voluntary
Jan 29 16:24:33.092455 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:24:33.092469 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:24:33.092482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:24:33.092514 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:24:33.092543 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:24:33.092560 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:24:33.092573 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:24:33.092587 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:24:33.092601 kernel: Using NULL legacy PIC
Jan 29 16:24:33.092618 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 29 16:24:33.092631 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:24:33.092642 kernel: Console: colour dummy device 80x25
Jan 29 16:24:33.092655 kernel: printk: console [tty1] enabled
Jan 29 16:24:33.092669 kernel: printk: console [ttyS0] enabled
Jan 29 16:24:33.092683 kernel: printk: bootconsole [earlyser0] disabled
Jan 29 16:24:33.092695 kernel: ACPI: Core revision 20230628
Jan 29 16:24:33.092707 kernel: Failed to register legacy timer interrupt
Jan 29 16:24:33.092720 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:24:33.092737 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 29 16:24:33.092748 kernel: Hyper-V: Using IPI hypercalls
Jan 29 16:24:33.092770 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 29 16:24:33.092782 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 29 16:24:33.092795 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 29 16:24:33.092807 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 29 16:24:33.092820 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 29 16:24:33.092833 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 29 16:24:33.092846 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908)
Jan 29 16:24:33.092863 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 29 16:24:33.092876 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 29 16:24:33.092890 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:24:33.092904 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:24:33.092916 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:24:33.092929 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:24:33.092944 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 29 16:24:33.092959 kernel: RETBleed: Vulnerable
Jan 29 16:24:33.092973 kernel: Speculative Store Bypass: Vulnerable
Jan 29 16:24:33.092987 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 16:24:33.093004 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 16:24:33.093019 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 29 16:24:33.093033 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:24:33.093047 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:24:33.093061 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:24:33.093076 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 29 16:24:33.093089 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 29 16:24:33.093104 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 29 16:24:33.093118 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:24:33.093132 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 29 16:24:33.093146 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 29 16:24:33.093164 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 29 16:24:33.093178 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 29 16:24:33.093192 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:24:33.093229 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:24:33.093244 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:24:33.093258 kernel: landlock: Up and running.
Jan 29 16:24:33.093272 kernel: SELinux: Initializing.
Jan 29 16:24:33.093287 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:24:33.093299 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:24:33.093312 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 29 16:24:33.093325 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:24:33.093344 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:24:33.093358 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:24:33.093372 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 29 16:24:33.093386 kernel: signal: max sigframe size: 3632
Jan 29 16:24:33.093399 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:24:33.093414 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:24:33.093428 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 16:24:33.093441 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:24:33.093455 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:24:33.093471 kernel: .... node #0, CPUs: #1
Jan 29 16:24:33.093485 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 29 16:24:33.093500 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 29 16:24:33.093514 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:24:33.093527 kernel: smpboot: Max logical packages: 1
Jan 29 16:24:33.093541 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Jan 29 16:24:33.093555 kernel: devtmpfs: initialized
Jan 29 16:24:33.093568 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:24:33.093582 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 29 16:24:33.093599 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:24:33.093612 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:24:33.093626 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:24:33.093640 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:24:33.093654 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:24:33.093668 kernel: audit: type=2000 audit(1738167871.029:1): state=initialized audit_enabled=0 res=1
Jan 29 16:24:33.093681 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:24:33.093694 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:24:33.093711 kernel: cpuidle: using governor menu
Jan 29 16:24:33.093724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:24:33.093738 kernel: dca service started, version 1.12.1
Jan 29 16:24:33.093752 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 29 16:24:33.093766 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:24:33.093779 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:24:33.093793 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:24:33.093806 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:24:33.093820 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:24:33.093836 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:24:33.093850 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:24:33.093864 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:24:33.093878 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:24:33.093891 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:24:33.093905 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:24:33.093919 kernel: ACPI: Interpreter enabled
Jan 29 16:24:33.093933 kernel: ACPI: PM: (supports S0 S5)
Jan 29 16:24:33.093946 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:24:33.093962 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:24:33.093976 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 29 16:24:33.093990 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 29 16:24:33.094003 kernel: iommu: Default domain type: Translated
Jan 29 16:24:33.094017 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:24:33.094031 kernel: efivars: Registered efivars operations
Jan 29 16:24:33.094044 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:24:33.094058 kernel: PCI: System does not support PCI
Jan 29 16:24:33.094071 kernel: vgaarb: loaded
Jan 29 16:24:33.094085 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 29 16:24:33.094101 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:24:33.094114 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:24:33.094127 kernel: pnp: PnP ACPI init
Jan 29 16:24:33.094141 kernel: pnp: PnP ACPI: found 3 devices
Jan 29 16:24:33.094155 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:24:33.094168 kernel: NET: Registered PF_INET protocol family
Jan 29 16:24:33.094182 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:24:33.094195 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 29 16:24:33.094228 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:24:33.094242 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:24:33.094256 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 29 16:24:33.094269 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 29 16:24:33.094282 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 16:24:33.094309 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 16:24:33.094323 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:24:33.094335 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:24:33.094347 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:24:33.094365 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 16:24:33.094380 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Jan 29 16:24:33.094398 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 16:24:33.094411 kernel: Initialise system trusted keyrings
Jan 29 16:24:33.094424 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 29 16:24:33.094439 kernel: Key type asymmetric registered
Jan 29 16:24:33.094453 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:24:33.094467 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:24:33.094481 kernel: io scheduler mq-deadline registered
Jan 29 16:24:33.094499 kernel: io scheduler kyber registered
Jan 29 16:24:33.094514 kernel: io scheduler bfq registered
Jan 29 16:24:33.094528 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:24:33.094543 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:24:33.094557 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:24:33.094571 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 29 16:24:33.094585 kernel: i8042: PNP: No PS/2 controller found.
Jan 29 16:24:33.094759 kernel: rtc_cmos 00:02: registered as rtc0
Jan 29 16:24:33.094880 kernel: rtc_cmos 00:02: setting system clock to 2025-01-29T16:24:32 UTC (1738167872)
Jan 29 16:24:33.094986 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 29 16:24:33.095003 kernel: intel_pstate: CPU model not supported
Jan 29 16:24:33.095017 kernel: efifb: probing for efifb
Jan 29 16:24:33.095031 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 29 16:24:33.095044 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 29 16:24:33.095058 kernel: efifb: scrolling: redraw
Jan 29 16:24:33.095072 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 16:24:33.095086 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 16:24:33.095102 kernel: fb0: EFI VGA frame buffer device
Jan 29 16:24:33.095116 kernel: pstore: Using crash dump compression: deflate
Jan 29 16:24:33.095130 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 16:24:33.095143 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:24:33.095157 kernel: Segment Routing with IPv6
Jan 29 16:24:33.095171 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:24:33.095185 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:24:33.095209 kernel: Key type dns_resolver registered
Jan 29 16:24:33.095236 kernel: IPI shorthand broadcast: enabled
Jan 29 16:24:33.095254 kernel: sched_clock: Marking stable (971002800, 56023800)->(1305612900, -278586300)
Jan 29 16:24:33.095267 kernel: registered taskstats version 1
Jan 29 16:24:33.095281 kernel: Loading compiled-in X.509 certificates
Jan 29 16:24:33.095295 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:24:33.095308 kernel: Key type .fscrypt registered
Jan 29 16:24:33.095321 kernel: Key type fscrypt-provisioning registered
Jan 29 16:24:33.095335 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:24:33.095349 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:24:33.095363 kernel: ima: No architecture policies found
Jan 29 16:24:33.095379 kernel: clk: Disabling unused clocks
Jan 29 16:24:33.095392 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:24:33.095406 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:24:33.095420 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:24:33.095433 kernel: Run /init as init process
Jan 29 16:24:33.095446 kernel: with arguments:
Jan 29 16:24:33.095468 kernel: /init
Jan 29 16:24:33.095481 kernel: with environment:
Jan 29 16:24:33.095495 kernel: HOME=/
Jan 29 16:24:33.095512 kernel: TERM=linux
Jan 29 16:24:33.095529 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:24:33.095545 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:24:33.095564 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:24:33.095580 systemd[1]: Detected virtualization microsoft.
Jan 29 16:24:33.095595 systemd[1]: Detected architecture x86-64.
Jan 29 16:24:33.095610 systemd[1]: Running in initrd.
Jan 29 16:24:33.095624 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:24:33.095644 systemd[1]: Hostname set to .
Jan 29 16:24:33.095659 systemd[1]: Initializing machine ID from random generator.
Jan 29 16:24:33.095674 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:24:33.095689 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:24:33.095704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:24:33.095720 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:24:33.095736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:24:33.095754 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:24:33.095771 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:24:33.095787 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:24:33.095803 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:24:33.095818 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:24:33.095834 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:24:33.095849 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:24:33.095867 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:24:33.095882 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:24:33.095898 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:24:33.095913 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:24:33.095928 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:24:33.095944 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:24:33.095959 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:24:33.095975 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:24:33.095990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:24:33.096008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:24:33.096023 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:24:33.096038 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:24:33.096054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:24:33.096069 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:24:33.096084 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:24:33.096100 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:24:33.096115 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:24:33.096131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:24:33.096173 systemd-journald[177]: Collecting audit messages is disabled.
Jan 29 16:24:33.096219 systemd-journald[177]: Journal started
Jan 29 16:24:33.096268 systemd-journald[177]: Runtime Journal (/run/log/journal/94c820358ee4429db4db07dac2fee7ec) is 8M, max 158.8M, 150.8M free.
Jan 29 16:24:33.092793 systemd-modules-load[178]: Inserted module 'overlay'
Jan 29 16:24:33.102461 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:24:33.106414 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:24:33.113251 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:24:33.129482 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:24:33.137966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:24:33.142967 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:24:33.142994 kernel: Bridge firewalling registered
Jan 29 16:24:33.142376 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 29 16:24:33.145362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:24:33.156403 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:24:33.163376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:24:33.170378 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:24:33.182684 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:24:33.186073 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:24:33.190764 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:24:33.193330 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:24:33.211916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:24:33.219867 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:24:33.224345 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:24:33.238533 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:24:33.254369 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:24:33.270597 dracut-cmdline[215]: dracut-dracut-053
Jan 29 16:24:33.276307 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:24:33.278680 systemd-resolved[208]: Positive Trust Anchors:
Jan 29 16:24:33.278687 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:24:33.278726 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:24:33.281503 systemd-resolved[208]: Defaulting to hostname 'linux'.
Jan 29 16:24:33.282515 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:24:33.292882 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:24:33.364227 kernel: SCSI subsystem initialized
Jan 29 16:24:33.374224 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:24:33.385227 kernel: iscsi: registered transport (tcp)
Jan 29 16:24:33.405931 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:24:33.406009 kernel: QLogic iSCSI HBA Driver
Jan 29 16:24:33.441929 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:24:33.451353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:24:33.478596 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:24:33.478683 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:24:33.481794 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:24:33.521220 kernel: raid6: avx512x4 gen() 18411 MB/s
Jan 29 16:24:33.540218 kernel: raid6: avx512x2 gen() 18401 MB/s
Jan 29 16:24:33.558224 kernel: raid6: avx512x1 gen() 18300 MB/s
Jan 29 16:24:33.577214 kernel: raid6: avx2x4 gen() 18213 MB/s
Jan 29 16:24:33.597243 kernel: raid6: avx2x2 gen() 18347 MB/s
Jan 29 16:24:33.617064 kernel: raid6: avx2x1 gen() 13631 MB/s
Jan 29 16:24:33.617142 kernel: raid6: using algorithm avx512x4 gen() 18411 MB/s
Jan 29 16:24:33.637729 kernel: raid6: .... xor() 7058 MB/s, rmw enabled
Jan 29 16:24:33.637764 kernel: raid6: using avx512x2 recovery algorithm
Jan 29 16:24:33.660232 kernel: xor: automatically using best checksumming function avx
Jan 29 16:24:33.801232 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:24:33.810714 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:24:33.821398 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:24:33.840243 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 29 16:24:33.845629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:24:33.858400 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:24:33.872922 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Jan 29 16:24:33.904426 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:24:33.911455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:24:33.955043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:24:33.966421 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:24:33.992431 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:24:34.000166 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:24:34.007052 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:24:34.012584 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:24:34.023413 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:24:34.039222 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:24:34.062495 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:24:34.066827 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:24:34.066858 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:24:34.094225 kernel: hv_vmbus: Vmbus version:5.2
Jan 29 16:24:34.098840 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:24:34.115593 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 29 16:24:34.115617 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 29 16:24:34.115637 kernel: PTP clock support registered
Jan 29 16:24:34.115651 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 29 16:24:34.115661 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 29 16:24:34.099082 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:24:34.103380 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:24:34.136219 kernel: hv_utils: Registering HyperV Utility Driver
Jan 29 16:24:34.136259 kernel: hv_vmbus: registering driver hv_utils
Jan 29 16:24:34.138245 kernel: hv_utils: Heartbeat IC version 3.0
Jan 29 16:24:34.138281 kernel: hv_utils: Shutdown IC version 3.2
Jan 29 16:24:34.138298 kernel: hv_utils: TimeSync IC version 4.0
Jan 29 16:24:34.230904 systemd-resolved[208]: Clock change detected. Flushing caches.
Jan 29 16:24:34.275604 kernel: hv_vmbus: registering driver hv_storvsc
Jan 29 16:24:34.275639 kernel: scsi host1: storvsc_host_t
Jan 29 16:24:34.275810 kernel: scsi host0: storvsc_host_t
Jan 29 16:24:34.275923 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 29 16:24:34.276052 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 29 16:24:34.276174 kernel: hv_vmbus: registering driver hv_netvsc
Jan 29 16:24:34.264479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:24:34.264793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:24:34.270690 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:24:34.289818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:24:34.295153 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:24:34.300968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:24:34.301740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:24:34.316466 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 16:24:34.318705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:24:34.341374 kernel: hv_vmbus: registering driver hid_hyperv
Jan 29 16:24:34.341418 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 29 16:24:34.362888 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:24:34.362908 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 29 16:24:34.362922 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 29 16:24:34.363052 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 29 16:24:34.377496 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 29 16:24:34.377697 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 29 16:24:34.377854 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 16:24:34.377978 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 29 16:24:34.378093 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 29 16:24:34.378214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:24:34.378228 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 16:24:34.358235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:24:34.376929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:24:34.400273 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:24:34.462732 kernel: hv_netvsc 0022489b-1f32-0022-489b-1f320022489b eth0: VF slot 1 added
Jan 29 16:24:34.469459 kernel: hv_vmbus: registering driver hv_pci
Jan 29 16:24:34.476664 kernel: hv_pci c42aa17e-b309-4ae5-9852-a7021a57aac5: PCI VMBus probing: Using version 0x10004
Jan 29 16:24:34.515896 kernel: hv_pci c42aa17e-b309-4ae5-9852-a7021a57aac5: PCI host bridge to bus b309:00
Jan 29 16:24:34.516096 kernel: pci_bus b309:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 29 16:24:34.516278 kernel: pci_bus b309:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 29 16:24:34.516464 kernel: pci b309:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 29 16:24:34.516657 kernel: pci b309:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 29 16:24:34.516826 kernel: pci b309:00:02.0: enabling Extended Tags
Jan 29 16:24:34.517001 kernel: pci b309:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b309:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 29 16:24:34.517180 kernel: pci_bus b309:00: busn_res: [bus 00-ff] end is updated to 00
Jan 29 16:24:34.517345 kernel: pci b309:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 29 16:24:34.678349 kernel: mlx5_core b309:00:02.0: enabling device (0000 -> 0002)
Jan 29 16:24:34.903242 kernel: mlx5_core b309:00:02.0: firmware version: 14.30.5000
Jan 29 16:24:34.903747 kernel: hv_netvsc 0022489b-1f32-0022-489b-1f320022489b eth0: VF registering: eth1
Jan 29 16:24:34.903977 kernel: mlx5_core b309:00:02.0 eth1: joined to eth0
Jan 29 16:24:34.904226 kernel: mlx5_core b309:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 29 16:24:34.911477 kernel: mlx5_core b309:00:02.0 enP45833s1: renamed from eth1
Jan 29 16:24:34.931635 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 29 16:24:34.959495 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (451)
Jan 29 16:24:34.997404 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 16:24:35.018462 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (457)
Jan 29 16:24:35.036089 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 29 16:24:35.037293 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 29 16:24:35.050644 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:24:35.068331 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 29 16:24:35.074851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:24:36.086836 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:24:36.090150 disk-uuid[603]: The operation has completed successfully.
Jan 29 16:24:36.172330 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:24:36.172462 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:24:36.216590 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:24:36.224491 sh[689]: Success
Jan 29 16:24:36.255546 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 16:24:36.482753 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:24:36.496567 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:24:36.501672 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:24:36.517466 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:24:36.517505 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:24:36.522975 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:24:36.525666 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:24:36.527984 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:24:36.821395 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:24:36.828687 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:24:36.839694 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:24:36.850621 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:24:36.871457 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:36.871507 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:24:36.871526 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:24:36.891966 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:24:36.900719 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:24:36.906264 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:36.912855 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:24:36.925715 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:24:36.943184 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:24:36.953195 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:24:36.977917 systemd-networkd[874]: lo: Link UP
Jan 29 16:24:36.977926 systemd-networkd[874]: lo: Gained carrier
Jan 29 16:24:36.980185 systemd-networkd[874]: Enumeration completed
Jan 29 16:24:36.980460 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:24:36.984532 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:24:36.984539 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:24:36.989002 systemd[1]: Reached target network.target - Network.
Jan 29 16:24:37.051662 kernel: mlx5_core b309:00:02.0 enP45833s1: Link up
Jan 29 16:24:37.088561 kernel: hv_netvsc 0022489b-1f32-0022-489b-1f320022489b eth0: Data path switched to VF: enP45833s1
Jan 29 16:24:37.088924 systemd-networkd[874]: enP45833s1: Link UP
Jan 29 16:24:37.089039 systemd-networkd[874]: eth0: Link UP
Jan 29 16:24:37.089196 systemd-networkd[874]: eth0: Gained carrier
Jan 29 16:24:37.089210 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:24:37.092679 systemd-networkd[874]: enP45833s1: Gained carrier
Jan 29 16:24:37.118503 systemd-networkd[874]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 29 16:24:38.113097 ignition[843]: Ignition 2.20.0
Jan 29 16:24:38.113109 ignition[843]: Stage: fetch-offline
Jan 29 16:24:38.113159 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:38.113169 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:24:38.117336 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:24:38.113294 ignition[843]: parsed url from cmdline: ""
Jan 29 16:24:38.134751 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:24:38.113299 ignition[843]: no config URL provided
Jan 29 16:24:38.113306 ignition[843]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:24:38.113317 ignition[843]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:24:38.113326 ignition[843]: failed to fetch config: resource requires networking
Jan 29 16:24:38.115215 ignition[843]: Ignition finished successfully
Jan 29 16:24:38.152483 ignition[883]: Ignition 2.20.0
Jan 29 16:24:38.152491 ignition[883]: Stage: fetch
Jan 29 16:24:38.152719 ignition[883]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:38.152730 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:24:38.152856 ignition[883]: parsed url from cmdline: ""
Jan 29 16:24:38.152861 ignition[883]: no config URL provided
Jan 29 16:24:38.152866 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:24:38.152876 ignition[883]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:24:38.152905 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 29 16:24:38.237997 ignition[883]: GET result: OK
Jan 29 16:24:38.238131 ignition[883]: config has been read from IMDS userdata
Jan 29 16:24:38.238168 ignition[883]: parsing config with SHA512: 055381908ba1ac2ae6874c217eee4e3111f9cae767e7f38debd95dffe9c99a8b98abaa255e1070303555c17130708e6e0e578fb67c90d4aa1ebe00f9fb4c7f1a
Jan 29 16:24:38.245972 unknown[883]: fetched base config from "system"
Jan 29 16:24:38.245986 unknown[883]: fetched base config from "system"
Jan 29 16:24:38.245994 unknown[883]: fetched user config from "azure"
Jan 29 16:24:38.253240 ignition[883]: fetch: fetch complete
Jan 29 16:24:38.253248 ignition[883]: fetch: fetch passed
Jan 29 16:24:38.256218 ignition[883]: Ignition finished successfully
Jan 29 16:24:38.258748 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:24:38.269614 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:24:38.289040 ignition[890]: Ignition 2.20.0
Jan 29 16:24:38.289053 ignition[890]: Stage: kargs
Jan 29 16:24:38.289288 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:38.292608 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:24:38.289302 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:24:38.290360 ignition[890]: kargs: kargs passed
Jan 29 16:24:38.290412 ignition[890]: Ignition finished successfully
Jan 29 16:24:38.306614 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:24:38.320543 ignition[897]: Ignition 2.20.0
Jan 29 16:24:38.320553 ignition[897]: Stage: disks
Jan 29 16:24:38.322589 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:24:38.320785 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:38.326587 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:24:38.320797 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:24:38.330923 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:24:38.321607 ignition[897]: disks: disks passed
Jan 29 16:24:38.335615 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:24:38.321652 ignition[897]: Ignition finished successfully
Jan 29 16:24:38.340023 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:24:38.342479 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:24:38.361697 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:24:38.427825 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 29 16:24:38.433033 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:24:38.444537 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:24:38.503039 systemd-networkd[874]: eth0: Gained IPv6LL
Jan 29 16:24:38.534661 kernel: EXT4-fs (sda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:24:38.535403 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:24:38.538153 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:24:38.573576 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:24:38.579237 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:24:38.589637 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916)
Jan 29 16:24:38.595393 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:38.595465 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:24:38.595665 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 16:24:38.603758 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:24:38.604081 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:24:38.604143 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:24:38.609458 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:24:38.617602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:24:38.619804 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:24:38.628587 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:24:38.885755 systemd-networkd[874]: enP45833s1: Gained IPv6LL
Jan 29 16:24:39.225328 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:24:39.282695 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:24:39.307874 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:24:39.335087 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:24:39.351150 coreos-metadata[918]: Jan 29 16:24:39.351 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 16:24:39.359876 coreos-metadata[918]: Jan 29 16:24:39.359 INFO Fetch successful
Jan 29 16:24:39.367766 coreos-metadata[918]: Jan 29 16:24:39.361 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 29 16:24:39.379386 coreos-metadata[918]: Jan 29 16:24:39.379 INFO Fetch successful
Jan 29 16:24:39.382598 coreos-metadata[918]: Jan 29 16:24:39.379 INFO wrote hostname ci-4230.0.0-a-6998ca2965 to /sysroot/etc/hostname
Jan 29 16:24:39.384647 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:24:40.152129 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:24:40.164598 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:24:40.177306 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:24:40.191580 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:24:40.199178 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:40.232247 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:24:40.239955 ignition[1035]: INFO : Ignition 2.20.0
Jan 29 16:24:40.239955 ignition[1035]: INFO : Stage: mount
Jan 29 16:24:40.246191 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:40.246191 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:24:40.246191 ignition[1035]: INFO : mount: mount passed
Jan 29 16:24:40.246191 ignition[1035]: INFO : Ignition finished successfully
Jan 29 16:24:40.241941 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:24:40.259556 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:24:40.268620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:24:40.294465 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047)
Jan 29 16:24:40.294521 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:40.298458 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:24:40.302735 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:24:40.308459 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:24:40.309878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:24:40.333319 ignition[1063]: INFO : Ignition 2.20.0
Jan 29 16:24:40.333319 ignition[1063]: INFO : Stage: files
Jan 29 16:24:40.339535 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:40.339535 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:24:40.339535 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:24:40.349450 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:24:40.349450 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:24:40.430879 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:24:40.436469 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:24:40.436469 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:24:40.431389 unknown[1063]: wrote ssh authorized keys file for user: core
Jan 29 16:24:40.459389 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 16:24:40.464547 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 16:24:40.533906 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 16:24:40.655625 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 16:24:40.660651 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:24:40.660651 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:24:40.660651 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:24:40.673031 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:24:40.673031 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:24:40.681571 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:24:40.681571 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:24:40.689959 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:24:40.694298 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:24:40.694298 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:24:40.694298 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:24:40.694298 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:24:40.694298 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:24:40.694298 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 16:24:41.269744 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 16:24:41.629929 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:24:41.629929 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 16:24:41.659409 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:24:41.666722 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:24:41.666722 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 16:24:41.666722 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 16:24:41.683531 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 16:24:41.683531 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:24:41.683531 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:24:41.683531 ignition[1063]: INFO : files: files passed
Jan 29 16:24:41.683531 ignition[1063]: INFO : Ignition finished successfully
Jan 29 16:24:41.668598 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:24:41.699509 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:24:41.709634 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:24:41.712860 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:24:41.714505 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:24:41.737493 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:24:41.737493 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:24:41.751165 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:24:41.752519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:24:41.761039 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:24:41.768676 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:24:41.792624 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:24:41.792738 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:24:41.798271 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:24:41.802708 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:24:41.804988 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:24:41.814615 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:24:41.828693 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:24:41.840615 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:24:41.853668 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:24:41.854939 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:24:41.855310 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:24:41.856125 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:24:41.856269 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:24:41.856910 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:24:41.857313 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:24:41.858053 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:24:41.858554 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:24:41.858935 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:24:41.859332 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:24:41.859717 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:24:41.860110 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:24:41.860497 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:24:41.860866 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:24:41.861187 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:24:41.861292 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:24:41.861961 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:24:41.862343 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:24:41.863089 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:24:41.864349 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:24:41.904014 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:24:41.904147 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:24:41.954941 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:24:41.955173 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:24:41.963607 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:24:41.963796 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:24:41.968294 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 16:24:41.968436 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:24:41.981785 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:24:41.984140 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:24:41.986565 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:24:41.992560 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:24:41.999647 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:24:41.999870 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:24:42.003078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:24:42.024791 ignition[1116]: INFO : Ignition 2.20.0
Jan 29 16:24:42.024791 ignition[1116]: INFO : Stage: umount
Jan 29 16:24:42.024791 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:42.024791 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 16:24:42.024791 ignition[1116]: INFO : umount: umount passed
Jan 29 16:24:42.024791 ignition[1116]: INFO : Ignition finished successfully
Jan 29 16:24:42.003214 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:24:42.016190 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:24:42.016299 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:24:42.020066 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:24:42.020163 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:24:42.027254 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:24:42.027368 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:24:42.036588 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:24:42.036640 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:24:42.040489 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 16:24:42.040543 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 16:24:42.044840 systemd[1]: Stopped target network.target - Network.
Jan 29 16:24:42.049624 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:24:42.049688 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:24:42.052534 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:24:42.058781 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:24:42.063996 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:24:42.066739 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:24:42.068807 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:24:42.073066 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:24:42.075147 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:24:42.079478 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:24:42.079533 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:24:42.084155 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:24:42.084231 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:24:42.088210 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:24:42.088275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:24:42.092708 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:24:42.098882 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:24:42.141032 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:24:42.145389 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:24:42.145534 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:24:42.151556 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:24:42.151911 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:24:42.152007 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:24:42.160107 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:24:42.161233 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:24:42.161312 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:24:42.189685 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:24:42.194738 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:24:42.197057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:24:42.203223 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:24:42.203294 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:24:42.208077 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:24:42.208127 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:24:42.213014 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:24:42.213067 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:24:42.223315 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:24:42.233544 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:24:42.234025 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:24:42.245920 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:24:42.246143 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:24:42.255342 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:24:42.255431 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:24:42.261149 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:24:42.261187 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:24:42.267922 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:24:42.267986 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:24:42.276148 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:24:42.276208 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:24:42.280544 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:24:42.280594 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:24:42.301475 kernel: hv_netvsc 0022489b-1f32-0022-489b-1f320022489b eth0: Data path switched from VF: enP45833s1
Jan 29 16:24:42.304626 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:24:42.307503 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:24:42.307577 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:24:42.311160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:24:42.311213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:24:42.320174 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 16:24:42.320229 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:24:42.320689 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:24:42.320812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:24:42.334683 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:24:42.334786 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:24:42.613086 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:24:42.613259 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:24:42.621795 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:24:42.624553 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:24:42.624624 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:24:42.636658 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:24:42.646015 systemd[1]: Switching root.
Jan 29 16:24:42.726831 systemd-journald[177]: Journal stopped
Jan 29 16:24:46.708546 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:24:46.708577 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:24:46.708591 kernel: SELinux: policy capability open_perms=1
Jan 29 16:24:46.708600 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:24:46.708611 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:24:46.708619 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:24:46.708630 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:24:46.708642 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:24:46.708651 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:24:46.708662 kernel: audit: type=1403 audit(1738167883.778:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:24:46.708671 systemd[1]: Successfully loaded SELinux policy in 192.565ms.
Jan 29 16:24:46.709712 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.262ms.
Jan 29 16:24:46.709739 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:24:46.709758 systemd[1]: Detected virtualization microsoft.
Jan 29 16:24:46.709781 systemd[1]: Detected architecture x86-64.
Jan 29 16:24:46.709798 systemd[1]: Detected first boot.
Jan 29 16:24:46.709817 systemd[1]: Hostname set to .
Jan 29 16:24:46.709834 systemd[1]: Initializing machine ID from random generator.
Jan 29 16:24:46.709850 zram_generator::config[1160]: No configuration found.
Jan 29 16:24:46.709871 kernel: Guest personality initialized and is inactive
Jan 29 16:24:46.709886 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jan 29 16:24:46.709901 kernel: Initialized host personality
Jan 29 16:24:46.709916 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:24:46.709932 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:24:46.709949 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:24:46.709965 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:24:46.709982 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:24:46.710001 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:24:46.710017 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:24:46.710035 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:24:46.710051 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:24:46.710068 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:24:46.710085 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:24:46.710102 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:24:46.710121 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:24:46.710138 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:24:46.710154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:24:46.710173 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:24:46.710190 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:24:46.710207 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:24:46.710228 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:24:46.710246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:24:46.710264 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 16:24:46.710284 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:24:46.710301 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:24:46.710319 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:24:46.710336 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:24:46.710353 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:24:46.710371 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:24:46.710388 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:24:46.710407 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:24:46.710424 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:24:46.718165 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:24:46.718195 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:24:46.718214 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:24:46.718234 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:24:46.718258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:24:46.718275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:24:46.718293 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:24:46.718311 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:24:46.718329 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:24:46.718347 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:24:46.718365 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:24:46.718386 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:24:46.718403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:24:46.718421 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:24:46.718453 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:24:46.718472 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:24:46.718490 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:24:46.718509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:24:46.718527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:24:46.718550 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:24:46.718568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:24:46.718586 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:24:46.718604 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:24:46.718622 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:24:46.718639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:24:46.718657 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:24:46.718676 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:24:46.718697 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:24:46.718715 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:24:46.718733 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:24:46.718752 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:24:46.718771 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:24:46.718788 kernel: fuse: init (API version 7.39)
Jan 29 16:24:46.718805 kernel: loop: module loaded
Jan 29 16:24:46.718822 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:24:46.718844 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:24:46.718862 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:24:46.718880 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:24:46.718898 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:24:46.718916 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:24:46.718963 systemd-journald[1267]: Collecting audit messages is disabled.
Jan 29 16:24:46.719004 systemd[1]: Stopped verity-setup.service.
Jan 29 16:24:46.719022 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:24:46.719041 systemd-journald[1267]: Journal started
Jan 29 16:24:46.719075 systemd-journald[1267]: Runtime Journal (/run/log/journal/8de9ddf55f8e4570b3cea69e9d0bebb0) is 8M, max 158.8M, 150.8M free.
Jan 29 16:24:46.125229 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:24:46.133325 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 16:24:46.133750 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:24:46.731370 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:24:46.732516 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:24:46.735360 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:24:46.738240 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:24:46.740825 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:24:46.743811 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:24:46.746828 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:24:46.749513 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:24:46.752775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:24:46.758928 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:24:46.759235 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:24:46.762239 kernel: ACPI: bus type drm_connector registered
Jan 29 16:24:46.762562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:24:46.762755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:24:46.765733 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:24:46.765925 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:24:46.768581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:24:46.768756 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:24:46.771806 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:24:46.772125 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:24:46.775239 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:24:46.775464 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:24:46.778644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:24:46.781753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:24:46.785387 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:24:46.799365 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:24:46.810640 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:24:46.823522 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:24:46.827815 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:24:46.827963 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:24:46.838884 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:24:46.846578 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:24:46.853634 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:24:46.856461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:24:46.877665 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:24:46.882187 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:24:46.884996 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:24:46.887801 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:24:46.890597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:24:46.894743 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:24:46.899707 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:24:46.905611 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:24:46.913644 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:24:46.922511 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:24:46.931823 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:24:46.935539 systemd-journald[1267]: Time spent on flushing to /var/log/journal/8de9ddf55f8e4570b3cea69e9d0bebb0 is 45.222ms for 972 entries.
Jan 29 16:24:46.935539 systemd-journald[1267]: System Journal (/var/log/journal/8de9ddf55f8e4570b3cea69e9d0bebb0) is 8M, max 2.6G, 2.6G free.
Jan 29 16:24:47.004136 systemd-journald[1267]: Received client request to flush runtime journal.
Jan 29 16:24:47.004193 kernel: loop0: detected capacity change from 0 to 28272
Jan 29 16:24:46.939542 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:24:46.946772 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:24:46.950410 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:24:46.958458 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:24:46.967587 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:24:46.981019 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:24:47.006279 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:24:47.013997 udevadm[1312]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 16:24:47.041614 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:24:47.053186 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:24:47.135263 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:24:47.277666 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:24:47.289921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:24:47.340403 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Jan 29 16:24:47.340944 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Jan 29 16:24:47.351019 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:24:47.381404 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:24:47.456600 kernel: loop1: detected capacity change from 0 to 147912
Jan 29 16:24:47.865468 kernel: loop2: detected capacity change from 0 to 205544
Jan 29 16:24:47.908495 kernel: loop3: detected capacity change from 0 to 138176
Jan 29 16:24:48.053080 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:24:48.066870 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:24:48.091350 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Jan 29 16:24:48.208137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:24:48.221588 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:24:48.268593 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 16:24:48.354982 kernel: loop4: detected capacity change from 0 to 28272
Jan 29 16:24:48.366473 kernel: loop5: detected capacity change from 0 to 147912
Jan 29 16:24:48.381100 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:24:48.383606 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:24:48.401498 kernel: loop6: detected capacity change from 0 to 205544
Jan 29 16:24:48.420731 kernel: loop7: detected capacity change from 0 to 138176
Jan 29 16:24:48.430946 kernel: hv_vmbus: registering driver hyperv_fb
Jan 29 16:24:48.431032 kernel: hv_vmbus: registering driver hv_balloon
Jan 29 16:24:48.444393 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 29 16:24:48.444512 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 29 16:24:48.444545 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 29 16:24:48.444570 kernel: Console: switching to colour dummy device 80x25
Jan 29 16:24:48.451909 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 16:24:48.469434 (sd-merge)[1353]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 29 16:24:48.470119 (sd-merge)[1353]: Merged extensions into '/usr'.
Jan 29 16:24:48.484567 systemd[1]: Reload requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:24:48.484584 systemd[1]: Reloading...
Jan 29 16:24:48.720857 zram_generator::config[1409]: No configuration found.
Jan 29 16:24:48.868479 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1334)
Jan 29 16:24:48.966485 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 29 16:24:48.994491 systemd-networkd[1337]: lo: Link UP
Jan 29 16:24:48.994943 systemd-networkd[1337]: lo: Gained carrier
Jan 29 16:24:49.007101 systemd-networkd[1337]: Enumeration completed
Jan 29 16:24:49.014677 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:24:49.014689 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:24:49.073472 kernel: mlx5_core b309:00:02.0 enP45833s1: Link up
Jan 29 16:24:49.097483 kernel: hv_netvsc 0022489b-1f32-0022-489b-1f320022489b eth0: Data path switched to VF: enP45833s1
Jan 29 16:24:49.098359 systemd-networkd[1337]: enP45833s1: Link UP
Jan 29 16:24:49.098521 systemd-networkd[1337]: eth0: Link UP
Jan 29 16:24:49.098527 systemd-networkd[1337]: eth0: Gained carrier
Jan 29 16:24:49.098832 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:24:49.103805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:24:49.105757 systemd-networkd[1337]: enP45833s1: Gained carrier
Jan 29 16:24:49.132543 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 29 16:24:49.221402 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 16:24:49.225400 systemd[1]: Reloading finished in 739 ms.
Jan 29 16:24:49.246976 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:24:49.250473 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:24:49.253717 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:24:49.312926 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:24:49.318591 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:24:49.330383 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:24:49.337865 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:24:49.343764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:24:49.351782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:24:49.385402 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:24:49.399372 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:24:49.399801 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:24:49.400725 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:24:49.402347 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:24:49.402757 systemd-tmpfiles[1526]: ACLs are not supported, ignoring. Jan 29 16:24:49.402894 systemd-tmpfiles[1526]: ACLs are not supported, ignoring. Jan 29 16:24:49.404636 systemd[1]: Reload requested from client PID 1522 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:24:49.404656 systemd[1]: Reloading... Jan 29 16:24:49.420799 systemd-tmpfiles[1526]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:24:49.420817 systemd-tmpfiles[1526]: Skipping /boot Jan 29 16:24:49.436094 systemd-tmpfiles[1526]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:24:49.436109 systemd-tmpfiles[1526]: Skipping /boot Jan 29 16:24:49.507475 zram_generator::config[1562]: No configuration found. Jan 29 16:24:49.656731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:24:49.777452 systemd[1]: Reloading finished in 372 ms. 
Jan 29 16:24:49.803273 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:24:49.807321 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:24:49.810819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:24:49.825785 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:24:49.843694 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:24:49.851779 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:24:49.864141 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:24:49.871770 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:24:49.877768 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:24:49.888915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:49.889200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:24:49.896769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:24:49.907820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:24:49.924747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:24:49.930826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:24:49.931867 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 29 16:24:49.932284 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:49.940643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:24:49.940875 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:24:49.944516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:24:49.944728 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:24:49.948501 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:24:49.948725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:24:49.959289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:24:49.960047 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:24:49.964422 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:24:49.967104 lvm[1633]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:24:49.976520 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:49.976854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:24:49.985728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:24:49.999216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:24:50.005273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 29 16:24:50.007822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:24:50.007995 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:24:50.008137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:50.010554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:24:50.011014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:24:50.025952 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:24:50.032273 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:24:50.033058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:24:50.038046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:24:50.038649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:24:50.046653 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:50.047140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:24:50.052700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:24:50.062678 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:24:50.065506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 16:24:50.065573 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:24:50.065635 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:24:50.065700 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:24:50.066454 systemd-resolved[1636]: Positive Trust Anchors: Jan 29 16:24:50.066724 systemd-resolved[1636]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:24:50.066816 systemd-resolved[1636]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:24:50.071632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:50.072383 systemd[1]: Finished ensure-sysext.service. Jan 29 16:24:50.075152 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:24:50.080320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:24:50.080674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:24:50.088242 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 16:24:50.088471 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:24:50.098808 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:24:50.107607 systemd-resolved[1636]: Using system hostname 'ci-4230.0.0-a-6998ca2965'. Jan 29 16:24:50.112661 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:24:50.116452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:24:50.116815 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:24:50.119687 lvm[1674]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:24:50.120380 systemd[1]: Reached target network.target - Network. Jan 29 16:24:50.123216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:24:50.139890 augenrules[1678]: No rules Jan 29 16:24:50.141793 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:24:50.142056 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:24:50.153315 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:24:50.469631 systemd-networkd[1337]: enP45833s1: Gained IPv6LL Jan 29 16:24:50.585871 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:24:50.589836 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:24:50.917670 systemd-networkd[1337]: eth0: Gained IPv6LL Jan 29 16:24:50.920732 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:24:50.926581 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 29 16:24:53.110798 ldconfig[1296]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:24:53.125539 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:24:53.135621 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:24:53.144659 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:24:53.147863 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:24:53.150811 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:24:53.153897 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:24:53.157092 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:24:53.159774 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:24:53.162748 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:24:53.165589 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:24:53.165630 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:24:53.168028 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:24:53.188897 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:24:53.193651 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:24:53.198964 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:24:53.202498 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:24:53.205553 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jan 29 16:24:53.216109 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:24:53.219407 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:24:53.223059 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:24:53.225731 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:24:53.227971 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:24:53.230276 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:24:53.230312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:24:53.238539 systemd[1]: Starting chronyd.service - NTP client/server... Jan 29 16:24:53.242574 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:24:53.256096 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:24:53.260983 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:24:53.266639 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:24:53.273618 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:24:53.276688 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:24:53.276749 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 29 16:24:53.284633 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 29 16:24:53.290727 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Jan 29 16:24:53.292242 KVP[1701]: KVP starting; pid is:1701 Jan 29 16:24:53.298468 kernel: hv_utils: KVP IC version 4.0 Jan 29 16:24:53.298573 KVP[1701]: KVP LIC Version: 3.1 Jan 29 16:24:53.303583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:24:53.311462 jq[1696]: false Jan 29 16:24:53.313629 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:24:53.329083 (chronyd)[1692]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 29 16:24:53.330563 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:24:53.340720 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:24:53.341338 chronyd[1709]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 29 16:24:53.346995 extend-filesystems[1700]: Found loop4 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found loop5 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found loop6 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found loop7 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found sda Jan 29 16:24:53.349335 extend-filesystems[1700]: Found sda1 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found sda2 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found sda3 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found usr Jan 29 16:24:53.349335 extend-filesystems[1700]: Found sda4 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found sda6 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found sda7 Jan 29 16:24:53.349335 extend-filesystems[1700]: Found sda9 Jan 29 16:24:53.349335 extend-filesystems[1700]: Checking size of /dev/sda9 Jan 29 16:24:53.360624 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:24:53.376723 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 29 16:24:53.391513 chronyd[1709]: Timezone right/UTC failed leap second check, ignoring Jan 29 16:24:53.391783 chronyd[1709]: Loaded seccomp filter (level 2) Jan 29 16:24:53.395463 extend-filesystems[1700]: Old size kept for /dev/sda9 Jan 29 16:24:53.395463 extend-filesystems[1700]: Found sr0 Jan 29 16:24:53.411638 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:24:53.416554 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:24:53.417247 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:24:53.425612 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:24:53.430840 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:24:53.441426 systemd[1]: Started chronyd.service - NTP client/server. Jan 29 16:24:53.452998 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:24:53.453602 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:24:53.453981 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:24:53.455242 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:24:53.468918 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:24:53.469531 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:24:53.473670 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:24:53.481509 jq[1728]: true Jan 29 16:24:53.483530 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:24:53.484150 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 16:24:53.502044 dbus-daemon[1695]: [system] SELinux support is enabled Jan 29 16:24:53.503156 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:24:53.527453 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:24:53.527509 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:24:53.533076 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:24:53.533110 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:24:53.541959 update_engine[1725]: I20250129 16:24:53.540975 1725 main.cc:92] Flatcar Update Engine starting Jan 29 16:24:53.548690 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:24:53.549508 (ntainerd)[1737]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:24:53.553949 update_engine[1725]: I20250129 16:24:53.553757 1725 update_check_scheduler.cc:74] Next update check in 6m10s Jan 29 16:24:53.555092 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:24:53.565432 jq[1736]: true Jan 29 16:24:53.598609 systemd-logind[1723]: New seat seat0. Jan 29 16:24:53.603097 tar[1735]: linux-amd64/helm Jan 29 16:24:53.602889 systemd-logind[1723]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:24:53.603156 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 29 16:24:53.664207 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1770) Jan 29 16:24:53.678583 coreos-metadata[1694]: Jan 29 16:24:53.678 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 16:24:53.685380 coreos-metadata[1694]: Jan 29 16:24:53.682 INFO Fetch successful Jan 29 16:24:53.685380 coreos-metadata[1694]: Jan 29 16:24:53.683 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 29 16:24:53.692535 coreos-metadata[1694]: Jan 29 16:24:53.692 INFO Fetch successful Jan 29 16:24:53.692535 coreos-metadata[1694]: Jan 29 16:24:53.692 INFO Fetching http://168.63.129.16/machine/ec929b5d-5398-4697-89e1-1c6d0394ec47/e4d98072%2Dd342%2D4926%2Da02d%2D1b411a2067fd.%5Fci%2D4230.0.0%2Da%2D6998ca2965?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 29 16:24:53.700329 coreos-metadata[1694]: Jan 29 16:24:53.699 INFO Fetch successful Jan 29 16:24:53.700651 coreos-metadata[1694]: Jan 29 16:24:53.700 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 29 16:24:53.720251 coreos-metadata[1694]: Jan 29 16:24:53.720 INFO Fetch successful Jan 29 16:24:53.798300 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:24:53.806221 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:24:53.837086 locksmithd[1753]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:24:53.950646 bash[1799]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:24:53.954987 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:24:53.967975 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 29 16:24:54.420088 sshd_keygen[1742]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:24:54.461454 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:24:54.474531 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:24:54.486515 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 29 16:24:54.502606 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:24:54.502882 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:24:54.515582 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:24:54.541605 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 29 16:24:54.562694 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:24:54.580844 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:24:54.593835 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:24:54.599007 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:24:54.611798 tar[1735]: linux-amd64/LICENSE Jan 29 16:24:54.611798 tar[1735]: linux-amd64/README.md Jan 29 16:24:54.624814 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:24:54.902846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:24:55.005728 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:24:55.153701 containerd[1737]: time="2025-01-29T16:24:55.152626200Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:24:55.190924 containerd[1737]: time="2025-01-29T16:24:55.190201200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:24:55.192237 containerd[1737]: time="2025-01-29T16:24:55.192191500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:55.192237 containerd[1737]: time="2025-01-29T16:24:55.192234500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:24:55.192375 containerd[1737]: time="2025-01-29T16:24:55.192254400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:24:55.193136 containerd[1737]: time="2025-01-29T16:24:55.192484600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:24:55.193136 containerd[1737]: time="2025-01-29T16:24:55.192514600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193136 containerd[1737]: time="2025-01-29T16:24:55.192596700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193136 containerd[1737]: time="2025-01-29T16:24:55.192613000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193136 containerd[1737]: time="2025-01-29T16:24:55.192889000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193136 containerd[1737]: time="2025-01-29T16:24:55.192910200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193136 containerd[1737]: time="2025-01-29T16:24:55.192929100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193136 containerd[1737]: time="2025-01-29T16:24:55.192942700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193136 containerd[1737]: time="2025-01-29T16:24:55.193037600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193482 containerd[1737]: time="2025-01-29T16:24:55.193279900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193525 containerd[1737]: time="2025-01-29T16:24:55.193508700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:55.193561 containerd[1737]: time="2025-01-29T16:24:55.193531600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:24:55.193896 containerd[1737]: time="2025-01-29T16:24:55.193641900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 16:24:55.193896 containerd[1737]: time="2025-01-29T16:24:55.193706900Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:24:55.209384 containerd[1737]: time="2025-01-29T16:24:55.208546600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:24:55.209384 containerd[1737]: time="2025-01-29T16:24:55.208645400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:24:55.209384 containerd[1737]: time="2025-01-29T16:24:55.208696400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:24:55.209384 containerd[1737]: time="2025-01-29T16:24:55.208725700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:24:55.209384 containerd[1737]: time="2025-01-29T16:24:55.208760200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:24:55.209384 containerd[1737]: time="2025-01-29T16:24:55.208946800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:24:55.209384 containerd[1737]: time="2025-01-29T16:24:55.209354400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:24:55.209716 containerd[1737]: time="2025-01-29T16:24:55.209557900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:24:55.209716 containerd[1737]: time="2025-01-29T16:24:55.209583600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:24:55.209716 containerd[1737]: time="2025-01-29T16:24:55.209604400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 29 16:24:55.209716 containerd[1737]: time="2025-01-29T16:24:55.209637200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:24:55.209716 containerd[1737]: time="2025-01-29T16:24:55.209659300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:24:55.209716 containerd[1737]: time="2025-01-29T16:24:55.209678200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:24:55.209716 containerd[1737]: time="2025-01-29T16:24:55.209701200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:24:55.209947 containerd[1737]: time="2025-01-29T16:24:55.209736400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:24:55.209947 containerd[1737]: time="2025-01-29T16:24:55.209756100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:24:55.209947 containerd[1737]: time="2025-01-29T16:24:55.209774700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:24:55.209947 containerd[1737]: time="2025-01-29T16:24:55.209804200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:24:55.209947 containerd[1737]: time="2025-01-29T16:24:55.209834200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.209947 containerd[1737]: time="2025-01-29T16:24:55.209855900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 29 16:24:55.209947 containerd[1737]: time="2025-01-29T16:24:55.209888700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.209947 containerd[1737]: time="2025-01-29T16:24:55.209913200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.209947 containerd[1737]: time="2025-01-29T16:24:55.209931200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.209962200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.209979400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.209996400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.210014300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.210044200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.210061100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.210078300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.210096400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.210116600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.210145400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.210164300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210240 containerd[1737]: time="2025-01-29T16:24:55.210182400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:24:55.210652 containerd[1737]: time="2025-01-29T16:24:55.210244300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:24:55.210652 containerd[1737]: time="2025-01-29T16:24:55.210270900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:24:55.210652 containerd[1737]: time="2025-01-29T16:24:55.210285900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:24:55.210652 containerd[1737]: time="2025-01-29T16:24:55.210302500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:24:55.210652 containerd[1737]: time="2025-01-29T16:24:55.210315800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.210652 containerd[1737]: time="2025-01-29T16:24:55.210332800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 16:24:55.210652 containerd[1737]: time="2025-01-29T16:24:55.210346500Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:24:55.210652 containerd[1737]: time="2025-01-29T16:24:55.210360600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 16:24:55.211868 containerd[1737]: time="2025-01-29T16:24:55.210958400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:24:55.211868 containerd[1737]: time="2025-01-29T16:24:55.211046800Z" level=info msg="Connect containerd service" Jan 29 16:24:55.211868 containerd[1737]: time="2025-01-29T16:24:55.211103300Z" level=info msg="using legacy CRI server" Jan 29 16:24:55.211868 containerd[1737]: time="2025-01-29T16:24:55.211115400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:24:55.211868 containerd[1737]: time="2025-01-29T16:24:55.211334900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:24:55.212351 containerd[1737]: time="2025-01-29T16:24:55.212329300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:24:55.212943 containerd[1737]: time="2025-01-29T16:24:55.212481100Z" level=info msg="Start subscribing containerd event" Jan 29 
16:24:55.212943 containerd[1737]: time="2025-01-29T16:24:55.212539800Z" level=info msg="Start recovering state" Jan 29 16:24:55.212943 containerd[1737]: time="2025-01-29T16:24:55.212612200Z" level=info msg="Start event monitor" Jan 29 16:24:55.212943 containerd[1737]: time="2025-01-29T16:24:55.212633300Z" level=info msg="Start snapshots syncer" Jan 29 16:24:55.212943 containerd[1737]: time="2025-01-29T16:24:55.212644600Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:24:55.212943 containerd[1737]: time="2025-01-29T16:24:55.212657500Z" level=info msg="Start streaming server" Jan 29 16:24:55.221670 containerd[1737]: time="2025-01-29T16:24:55.213562300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:24:55.221670 containerd[1737]: time="2025-01-29T16:24:55.213619400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:24:55.221670 containerd[1737]: time="2025-01-29T16:24:55.214934300Z" level=info msg="containerd successfully booted in 0.063538s" Jan 29 16:24:55.213791 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:24:55.217504 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:24:55.222545 systemd[1]: Startup finished in 797ms (firmware) + 28.434s (loader) + 1.114s (kernel) + 10.784s (initrd) + 11.634s (userspace) = 52.765s. Jan 29 16:24:55.542742 kubelet[1882]: E0129 16:24:55.542615 1882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:24:55.545138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:24:55.545352 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 16:24:55.545914 systemd[1]: kubelet.service: Consumed 898ms CPU time, 233.5M memory peak. Jan 29 16:24:55.575759 login[1873]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 29 16:24:55.576083 login[1872]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 16:24:55.591309 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:24:55.597708 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:24:55.604868 systemd-logind[1723]: New session 2 of user core. Jan 29 16:24:55.629402 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:24:55.634739 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:24:55.644641 (systemd)[1899]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:24:55.646833 systemd-logind[1723]: New session c1 of user core. Jan 29 16:24:55.859733 systemd[1899]: Queued start job for default target default.target. Jan 29 16:24:55.869655 systemd[1899]: Created slice app.slice - User Application Slice. Jan 29 16:24:55.869693 systemd[1899]: Reached target paths.target - Paths. Jan 29 16:24:55.869752 systemd[1899]: Reached target timers.target - Timers. Jan 29 16:24:55.871165 systemd[1899]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:24:55.882271 systemd[1899]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:24:55.882413 systemd[1899]: Reached target sockets.target - Sockets. Jan 29 16:24:55.882483 systemd[1899]: Reached target basic.target - Basic System. Jan 29 16:24:55.882535 systemd[1899]: Reached target default.target - Main User Target. Jan 29 16:24:55.882573 systemd[1899]: Startup finished in 229ms. Jan 29 16:24:55.882765 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 29 16:24:55.890627 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:24:56.428262 waagent[1870]: 2025-01-29T16:24:56.428147Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 29 16:24:56.432705 waagent[1870]: 2025-01-29T16:24:56.432625Z INFO Daemon Daemon OS: flatcar 4230.0.0 Jan 29 16:24:56.434893 waagent[1870]: 2025-01-29T16:24:56.434826Z INFO Daemon Daemon Python: 3.11.11 Jan 29 16:24:56.437123 waagent[1870]: 2025-01-29T16:24:56.437062Z INFO Daemon Daemon Run daemon Jan 29 16:24:56.439252 waagent[1870]: 2025-01-29T16:24:56.439200Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.0.0' Jan 29 16:24:56.443555 waagent[1870]: 2025-01-29T16:24:56.443475Z INFO Daemon Daemon Using waagent for provisioning Jan 29 16:24:56.446324 waagent[1870]: 2025-01-29T16:24:56.446267Z INFO Daemon Daemon Activate resource disk Jan 29 16:24:56.448902 waagent[1870]: 2025-01-29T16:24:56.448818Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 29 16:24:56.457674 waagent[1870]: 2025-01-29T16:24:56.457599Z INFO Daemon Daemon Found device: None Jan 29 16:24:56.459969 waagent[1870]: 2025-01-29T16:24:56.459910Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 29 16:24:56.466282 waagent[1870]: 2025-01-29T16:24:56.466205Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 29 16:24:56.475510 waagent[1870]: 2025-01-29T16:24:56.475385Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 16:24:56.480381 waagent[1870]: 2025-01-29T16:24:56.476434Z INFO Daemon Daemon Running default provisioning handler Jan 29 16:24:56.485961 waagent[1870]: 2025-01-29T16:24:56.485704Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. Jan 29 16:24:56.492469 waagent[1870]: 2025-01-29T16:24:56.492407Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 29 16:24:56.500331 waagent[1870]: 2025-01-29T16:24:56.493363Z INFO Daemon Daemon cloud-init is enabled: False Jan 29 16:24:56.500331 waagent[1870]: 2025-01-29T16:24:56.494214Z INFO Daemon Daemon Copying ovf-env.xml Jan 29 16:24:56.578780 login[1873]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 16:24:56.588529 waagent[1870]: 2025-01-29T16:24:56.585709Z INFO Daemon Daemon Successfully mounted dvd Jan 29 16:24:56.587016 systemd-logind[1723]: New session 1 of user core. Jan 29 16:24:56.595609 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:24:56.616464 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 29 16:24:56.619175 waagent[1870]: 2025-01-29T16:24:56.619108Z INFO Daemon Daemon Detect protocol endpoint Jan 29 16:24:56.622302 waagent[1870]: 2025-01-29T16:24:56.622236Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 16:24:56.625238 waagent[1870]: 2025-01-29T16:24:56.625185Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 29 16:24:56.635005 waagent[1870]: 2025-01-29T16:24:56.626224Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 29 16:24:56.635005 waagent[1870]: 2025-01-29T16:24:56.627258Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 29 16:24:56.635005 waagent[1870]: 2025-01-29T16:24:56.628055Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 29 16:24:56.669251 waagent[1870]: 2025-01-29T16:24:56.669178Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 29 16:24:56.676747 waagent[1870]: 2025-01-29T16:24:56.670643Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 29 16:24:56.676747 waagent[1870]: 2025-01-29T16:24:56.671306Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 29 16:24:56.784335 waagent[1870]: 2025-01-29T16:24:56.784161Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 29 16:24:56.787412 waagent[1870]: 2025-01-29T16:24:56.787335Z INFO Daemon Daemon Forcing an update of the goal state. Jan 29 16:24:56.793930 waagent[1870]: 2025-01-29T16:24:56.793870Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 16:24:56.810610 waagent[1870]: 2025-01-29T16:24:56.810541Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 29 16:24:56.825380 waagent[1870]: 2025-01-29T16:24:56.812428Z INFO Daemon Jan 29 16:24:56.825380 waagent[1870]: 2025-01-29T16:24:56.813988Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 588f0763-2d5f-4b00-bd7f-8211e0364e25 eTag: 7821137158503527155 source: Fabric] Jan 29 16:24:56.825380 waagent[1870]: 2025-01-29T16:24:56.815429Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 29 16:24:56.825380 waagent[1870]: 2025-01-29T16:24:56.816524Z INFO Daemon Jan 29 16:24:56.825380 waagent[1870]: 2025-01-29T16:24:56.817219Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 29 16:24:56.828612 waagent[1870]: 2025-01-29T16:24:56.828571Z INFO Daemon Daemon Downloading artifacts profile blob Jan 29 16:24:56.907126 waagent[1870]: 2025-01-29T16:24:56.907025Z INFO Daemon Downloaded certificate {'thumbprint': '43AE18DC2B671C24B45E4B290895CCC03591AB72', 'hasPrivateKey': False} Jan 29 16:24:56.912385 waagent[1870]: 2025-01-29T16:24:56.912319Z INFO Daemon Downloaded certificate {'thumbprint': '25073A57319BBD96C045879929FBB3456779A833', 'hasPrivateKey': True} Jan 29 16:24:56.917078 waagent[1870]: 2025-01-29T16:24:56.917018Z INFO Daemon Fetch goal state completed Jan 29 16:24:56.926745 waagent[1870]: 2025-01-29T16:24:56.926695Z INFO Daemon Daemon Starting provisioning Jan 29 16:24:56.929153 waagent[1870]: 2025-01-29T16:24:56.929095Z INFO Daemon Daemon Handle ovf-env.xml. Jan 29 16:24:56.931238 waagent[1870]: 2025-01-29T16:24:56.931185Z INFO Daemon Daemon Set hostname [ci-4230.0.0-a-6998ca2965] Jan 29 16:24:56.958303 waagent[1870]: 2025-01-29T16:24:56.958202Z INFO Daemon Daemon Publish hostname [ci-4230.0.0-a-6998ca2965] Jan 29 16:24:56.965356 waagent[1870]: 2025-01-29T16:24:56.959621Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 29 16:24:56.965356 waagent[1870]: 2025-01-29T16:24:56.959956Z INFO Daemon Daemon Primary interface is [eth0] Jan 29 16:24:56.969287 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:24:56.969299 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 16:24:56.969351 systemd-networkd[1337]: eth0: DHCP lease lost Jan 29 16:24:56.970641 waagent[1870]: 2025-01-29T16:24:56.970534Z INFO Daemon Daemon Create user account if not exists Jan 29 16:24:56.985919 waagent[1870]: 2025-01-29T16:24:56.972044Z INFO Daemon Daemon User core already exists, skip useradd Jan 29 16:24:56.985919 waagent[1870]: 2025-01-29T16:24:56.972742Z INFO Daemon Daemon Configure sudoer Jan 29 16:24:56.985919 waagent[1870]: 2025-01-29T16:24:56.973783Z INFO Daemon Daemon Configure sshd Jan 29 16:24:56.985919 waagent[1870]: 2025-01-29T16:24:56.974479Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 29 16:24:56.985919 waagent[1870]: 2025-01-29T16:24:56.975071Z INFO Daemon Daemon Deploy ssh public key. Jan 29 16:24:57.019532 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 16:25:05.598842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:25:05.604740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:05.816330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:25:05.826856 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:06.329341 kubelet[1961]: E0129 16:25:06.329229 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:06.333111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:06.333306 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:06.333745 systemd[1]: kubelet.service: Consumed 151ms CPU time, 97.5M memory peak. Jan 29 16:25:16.349061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:25:16.354730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:16.699484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:16.703674 (kubelet)[1977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:17.037227 kubelet[1977]: E0129 16:25:17.037080 1977 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:17.039799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:17.039991 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:17.040685 systemd[1]: kubelet.service: Consumed 145ms CPU time, 95.1M memory peak. 
Jan 29 16:25:17.183341 chronyd[1709]: Selected source PHC0 Jan 29 16:25:27.060540 waagent[1870]: 2025-01-29T16:25:27.060410Z INFO Daemon Daemon Provisioning complete Jan 29 16:25:27.075508 waagent[1870]: 2025-01-29T16:25:27.075417Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 29 16:25:27.085170 waagent[1870]: 2025-01-29T16:25:27.076936Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 29 16:25:27.085170 waagent[1870]: 2025-01-29T16:25:27.078092Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 29 16:25:27.098889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 16:25:27.109752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:27.230690 waagent[1985]: 2025-01-29T16:25:27.230582Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 29 16:25:27.231152 waagent[1985]: 2025-01-29T16:25:27.230761Z INFO ExtHandler ExtHandler OS: flatcar 4230.0.0 Jan 29 16:25:27.231152 waagent[1985]: 2025-01-29T16:25:27.230846Z INFO ExtHandler ExtHandler Python: 3.11.11 Jan 29 16:25:27.484175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:25:27.496772 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:27.532475 waagent[1985]: 2025-01-29T16:25:27.530652Z INFO ExtHandler ExtHandler Distro: flatcar-4230.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 29 16:25:27.532475 waagent[1985]: 2025-01-29T16:25:27.531026Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 16:25:27.532475 waagent[1985]: 2025-01-29T16:25:27.531164Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 16:25:27.533893 kubelet[1998]: E0129 16:25:27.533856 1998 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:27.536576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:27.536770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:27.537160 systemd[1]: kubelet.service: Consumed 145ms CPU time, 95.7M memory peak. 
Jan 29 16:25:27.541138 waagent[1985]: 2025-01-29T16:25:27.541074Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 16:25:27.551979 waagent[1985]: 2025-01-29T16:25:27.551930Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 29 16:25:27.552462 waagent[1985]: 2025-01-29T16:25:27.552402Z INFO ExtHandler Jan 29 16:25:27.552571 waagent[1985]: 2025-01-29T16:25:27.552526Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 285066a4-a910-401b-ab08-6d9c494dafbf eTag: 7821137158503527155 source: Fabric] Jan 29 16:25:27.552906 waagent[1985]: 2025-01-29T16:25:27.552860Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 29 16:25:27.653780 waagent[1985]: 2025-01-29T16:25:27.653615Z INFO ExtHandler Jan 29 16:25:27.654163 waagent[1985]: 2025-01-29T16:25:27.654057Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 29 16:25:27.659632 waagent[1985]: 2025-01-29T16:25:27.659554Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 29 16:25:27.733107 waagent[1985]: 2025-01-29T16:25:27.733013Z INFO ExtHandler Downloaded certificate {'thumbprint': '43AE18DC2B671C24B45E4B290895CCC03591AB72', 'hasPrivateKey': False} Jan 29 16:25:27.733575 waagent[1985]: 2025-01-29T16:25:27.733519Z INFO ExtHandler Downloaded certificate {'thumbprint': '25073A57319BBD96C045879929FBB3456779A833', 'hasPrivateKey': True} Jan 29 16:25:27.734045 waagent[1985]: 2025-01-29T16:25:27.733993Z INFO ExtHandler Fetch goal state completed Jan 29 16:25:27.750185 waagent[1985]: 2025-01-29T16:25:27.750069Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1985 Jan 29 16:25:27.750282 waagent[1985]: 2025-01-29T16:25:27.750247Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 29 16:25:27.751890 waagent[1985]: 2025-01-29T16:25:27.751831Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on 
['flatcar', '4230.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 29 16:25:27.752270 waagent[1985]: 2025-01-29T16:25:27.752219Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 29 16:25:27.781221 waagent[1985]: 2025-01-29T16:25:27.781160Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 29 16:25:27.781650 waagent[1985]: 2025-01-29T16:25:27.781436Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 29 16:25:27.789118 waagent[1985]: 2025-01-29T16:25:27.788978Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 29 16:25:27.796533 systemd[1]: Reload requested from client PID 2016 ('systemctl') (unit waagent.service)... Jan 29 16:25:27.796552 systemd[1]: Reloading... Jan 29 16:25:27.900487 zram_generator::config[2058]: No configuration found. Jan 29 16:25:28.020761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:28.134006 systemd[1]: Reloading finished in 337 ms. Jan 29 16:25:28.152492 waagent[1985]: 2025-01-29T16:25:28.151860Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 29 16:25:28.162173 systemd[1]: Reload requested from client PID 2111 ('systemctl') (unit waagent.service)... Jan 29 16:25:28.162190 systemd[1]: Reloading... Jan 29 16:25:28.254522 zram_generator::config[2147]: No configuration found. Jan 29 16:25:28.402773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:28.519508 systemd[1]: Reloading finished in 356 ms. 
Jan 29 16:25:28.538484 waagent[1985]: 2025-01-29T16:25:28.538293Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 29 16:25:28.540015 waagent[1985]: 2025-01-29T16:25:28.539018Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 29 16:25:28.934408 waagent[1985]: 2025-01-29T16:25:28.934301Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 29 16:25:28.935160 waagent[1985]: 2025-01-29T16:25:28.935076Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 29 16:25:28.936105 waagent[1985]: 2025-01-29T16:25:28.936040Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 29 16:25:28.936635 waagent[1985]: 2025-01-29T16:25:28.936555Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 29 16:25:28.936798 waagent[1985]: 2025-01-29T16:25:28.936735Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 16:25:28.936966 waagent[1985]: 2025-01-29T16:25:28.936890Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 16:25:28.937375 waagent[1985]: 2025-01-29T16:25:28.937310Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 16:25:28.937591 waagent[1985]: 2025-01-29T16:25:28.937435Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 29 16:25:28.937810 waagent[1985]: 2025-01-29T16:25:28.937747Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 29 16:25:28.937946 waagent[1985]: 2025-01-29T16:25:28.937894Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 16:25:28.938339 waagent[1985]: 2025-01-29T16:25:28.938292Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 29 16:25:28.938598 waagent[1985]: 2025-01-29T16:25:28.938539Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 29 16:25:28.938805 waagent[1985]: 2025-01-29T16:25:28.938760Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 29 16:25:28.938941 waagent[1985]: 2025-01-29T16:25:28.938896Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 29 16:25:28.939334 waagent[1985]: 2025-01-29T16:25:28.939282Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 29 16:25:28.939334 waagent[1985]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 29 16:25:28.939334 waagent[1985]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 29 16:25:28.939334 waagent[1985]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 29 16:25:28.939334 waagent[1985]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 29 16:25:28.939334 waagent[1985]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 16:25:28.939334 waagent[1985]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 16:25:28.939712 waagent[1985]: 2025-01-29T16:25:28.939482Z INFO EnvHandler ExtHandler Configure routes Jan 29 16:25:28.940463 waagent[1985]: 2025-01-29T16:25:28.940406Z INFO EnvHandler ExtHandler Gateway:None Jan 29 16:25:28.941398 waagent[1985]: 2025-01-29T16:25:28.941343Z INFO EnvHandler ExtHandler Routes:None Jan 29 16:25:28.946037 waagent[1985]: 2025-01-29T16:25:28.945978Z INFO ExtHandler ExtHandler Jan 29 16:25:28.946392 waagent[1985]: 2025-01-29T16:25:28.946341Z INFO ExtHandler ExtHandler 
ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 65ec5910-b46f-45d8-b61a-ebff81b24043 correlation 74ec7b3a-1c6b-415f-a044-b641dee2bbb5 created: 2025-01-29T16:23:52.327782Z] Jan 29 16:25:28.947199 waagent[1985]: 2025-01-29T16:25:28.947145Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 29 16:25:28.949357 waagent[1985]: 2025-01-29T16:25:28.949314Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jan 29 16:25:28.980204 waagent[1985]: 2025-01-29T16:25:28.980143Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3F37B6B9-CD1C-4CE9-B792-C7DA908D545A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 29 16:25:28.994771 waagent[1985]: 2025-01-29T16:25:28.994710Z INFO MonitorHandler ExtHandler Network interfaces: Jan 29 16:25:28.994771 waagent[1985]: Executing ['ip', '-a', '-o', 'link']: Jan 29 16:25:28.994771 waagent[1985]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 29 16:25:28.994771 waagent[1985]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:1f:32 brd ff:ff:ff:ff:ff:ff Jan 29 16:25:28.994771 waagent[1985]: 3: enP45833s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:1f:32 brd ff:ff:ff:ff:ff:ff\ altname enP45833p0s2 Jan 29 16:25:28.994771 waagent[1985]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 29 16:25:28.994771 waagent[1985]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 29 16:25:28.994771 waagent[1985]: 2: eth0 inet 10.200.8.22/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 29 16:25:28.994771 waagent[1985]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 29 
16:25:28.994771 waagent[1985]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 29 16:25:28.994771 waagent[1985]: 2: eth0 inet6 fe80::222:48ff:fe9b:1f32/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 29 16:25:28.994771 waagent[1985]: 3: enP45833s1 inet6 fe80::222:48ff:fe9b:1f32/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 29 16:25:29.081130 waagent[1985]: 2025-01-29T16:25:29.081038Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 29 16:25:29.081130 waagent[1985]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 16:25:29.081130 waagent[1985]: pkts bytes target prot opt in out source destination Jan 29 16:25:29.081130 waagent[1985]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 16:25:29.081130 waagent[1985]: pkts bytes target prot opt in out source destination Jan 29 16:25:29.081130 waagent[1985]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 16:25:29.081130 waagent[1985]: pkts bytes target prot opt in out source destination Jan 29 16:25:29.081130 waagent[1985]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 16:25:29.081130 waagent[1985]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 16:25:29.081130 waagent[1985]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 29 16:25:29.084655 waagent[1985]: 2025-01-29T16:25:29.084591Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 29 16:25:29.084655 waagent[1985]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 16:25:29.084655 waagent[1985]: pkts bytes target prot opt in out source destination Jan 29 16:25:29.084655 waagent[1985]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 16:25:29.084655 waagent[1985]: pkts bytes target prot opt in out source destination Jan 29 16:25:29.084655 waagent[1985]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 16:25:29.084655 
waagent[1985]: pkts bytes target prot opt in out source destination Jan 29 16:25:29.084655 waagent[1985]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 16:25:29.084655 waagent[1985]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 16:25:29.084655 waagent[1985]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 29 16:25:29.085044 waagent[1985]: 2025-01-29T16:25:29.084908Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 29 16:25:36.528558 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 29 16:25:37.598760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 16:25:37.604903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:37.727069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:37.737763 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:38.276188 kubelet[2250]: E0129 16:25:38.276064 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:38.278533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:38.278732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:38.279131 systemd[1]: kubelet.service: Consumed 146ms CPU time, 97.6M memory peak. Jan 29 16:25:38.714826 update_engine[1725]: I20250129 16:25:38.714722 1725 update_attempter.cc:509] Updating boot flags... 
Jan 29 16:25:38.770499 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2272) Jan 29 16:25:48.348873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 16:25:48.355688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:48.454562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:48.466981 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:49.044145 kubelet[2328]: E0129 16:25:49.044077 2328 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:49.046857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:49.047096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:49.047582 systemd[1]: kubelet.service: Consumed 141ms CPU time, 96M memory peak. Jan 29 16:25:59.098857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 16:25:59.114692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:59.335492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:25:59.339914 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:59.377115 kubelet[2343]: E0129 16:25:59.376961 2343 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:59.379551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:59.379754 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:59.380138 systemd[1]: kubelet.service: Consumed 136ms CPU time, 97.4M memory peak. Jan 29 16:25:59.624410 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:25:59.629758 systemd[1]: Started sshd@0-10.200.8.22:22-10.200.16.10:46618.service - OpenSSH per-connection server daemon (10.200.16.10:46618). Jan 29 16:26:00.563117 sshd[2351]: Accepted publickey for core from 10.200.16.10 port 46618 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:26:00.564705 sshd-session[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:00.569118 systemd-logind[1723]: New session 3 of user core. Jan 29 16:26:00.579600 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:26:01.157765 systemd[1]: Started sshd@1-10.200.8.22:22-10.200.16.10:46622.service - OpenSSH per-connection server daemon (10.200.16.10:46622). 
Jan 29 16:26:01.828531 sshd[2356]: Accepted publickey for core from 10.200.16.10 port 46622 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:26:01.830254 sshd-session[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:01.836531 systemd-logind[1723]: New session 4 of user core. Jan 29 16:26:01.845825 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:26:02.304828 sshd[2358]: Connection closed by 10.200.16.10 port 46622 Jan 29 16:26:02.305682 sshd-session[2356]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:02.308719 systemd[1]: sshd@1-10.200.8.22:22-10.200.16.10:46622.service: Deactivated successfully. Jan 29 16:26:02.311072 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:26:02.312810 systemd-logind[1723]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:26:02.313880 systemd-logind[1723]: Removed session 4. Jan 29 16:26:02.429797 systemd[1]: Started sshd@2-10.200.8.22:22-10.200.16.10:46630.service - OpenSSH per-connection server daemon (10.200.16.10:46630). Jan 29 16:26:03.103829 sshd[2364]: Accepted publickey for core from 10.200.16.10 port 46630 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:26:03.105351 sshd-session[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:03.109805 systemd-logind[1723]: New session 5 of user core. Jan 29 16:26:03.116586 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:26:03.576874 sshd[2366]: Connection closed by 10.200.16.10 port 46630 Jan 29 16:26:03.577619 sshd-session[2364]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:03.580880 systemd[1]: sshd@2-10.200.8.22:22-10.200.16.10:46630.service: Deactivated successfully. Jan 29 16:26:03.582943 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:26:03.584481 systemd-logind[1723]: Session 5 logged out. 
Waiting for processes to exit. Jan 29 16:26:03.585385 systemd-logind[1723]: Removed session 5. Jan 29 16:26:03.699760 systemd[1]: Started sshd@3-10.200.8.22:22-10.200.16.10:46646.service - OpenSSH per-connection server daemon (10.200.16.10:46646). Jan 29 16:26:04.393056 sshd[2372]: Accepted publickey for core from 10.200.16.10 port 46646 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:26:04.395705 sshd-session[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:04.400133 systemd-logind[1723]: New session 6 of user core. Jan 29 16:26:04.410653 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:26:04.879674 sshd[2374]: Connection closed by 10.200.16.10 port 46646 Jan 29 16:26:04.880545 sshd-session[2372]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:04.884341 systemd[1]: sshd@3-10.200.8.22:22-10.200.16.10:46646.service: Deactivated successfully. Jan 29 16:26:04.886659 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:26:04.888344 systemd-logind[1723]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:26:04.889486 systemd-logind[1723]: Removed session 6. Jan 29 16:26:05.004834 systemd[1]: Started sshd@4-10.200.8.22:22-10.200.16.10:46652.service - OpenSSH per-connection server daemon (10.200.16.10:46652). Jan 29 16:26:05.714884 sshd[2380]: Accepted publickey for core from 10.200.16.10 port 46652 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:26:05.716556 sshd-session[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:05.721286 systemd-logind[1723]: New session 7 of user core. Jan 29 16:26:05.731633 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 29 16:26:06.348873 sudo[2383]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:26:06.349280 sudo[2383]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:26:08.319803 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:26:08.322722 (dockerd)[2400]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:26:09.598809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 16:26:09.612475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:10.239660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:10.242810 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:26:10.278228 kubelet[2413]: E0129 16:26:10.278117 2413 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:26:10.280534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:26:10.280762 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:26:10.281183 systemd[1]: kubelet.service: Consumed 140ms CPU time, 97.4M memory peak. Jan 29 16:26:11.260397 dockerd[2400]: time="2025-01-29T16:26:11.260332734Z" level=info msg="Starting up" Jan 29 16:26:11.924645 dockerd[2400]: time="2025-01-29T16:26:11.924591838Z" level=info msg="Loading containers: start." 
Jan 29 16:26:12.120472 kernel: Initializing XFRM netlink socket Jan 29 16:26:12.270707 systemd-networkd[1337]: docker0: Link UP Jan 29 16:26:12.315677 dockerd[2400]: time="2025-01-29T16:26:12.315627972Z" level=info msg="Loading containers: done." Jan 29 16:26:12.338428 dockerd[2400]: time="2025-01-29T16:26:12.338370961Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:26:12.338719 dockerd[2400]: time="2025-01-29T16:26:12.338502765Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:26:12.338788 dockerd[2400]: time="2025-01-29T16:26:12.338728872Z" level=info msg="Daemon has completed initialization" Jan 29 16:26:12.397988 dockerd[2400]: time="2025-01-29T16:26:12.397843561Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:26:12.398510 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:26:13.797674 containerd[1737]: time="2025-01-29T16:26:13.797633125Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 16:26:14.673589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596899426.mount: Deactivated successfully. 
Jan 29 16:26:16.327474 containerd[1737]: time="2025-01-29T16:26:16.327396356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:16.332826 containerd[1737]: time="2025-01-29T16:26:16.332762225Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976729" Jan 29 16:26:16.340584 containerd[1737]: time="2025-01-29T16:26:16.340393464Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:16.347459 containerd[1737]: time="2025-01-29T16:26:16.347359483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:16.349274 containerd[1737]: time="2025-01-29T16:26:16.348970634Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.551296707s" Jan 29 16:26:16.349274 containerd[1737]: time="2025-01-29T16:26:16.349020535Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 16:26:16.351155 containerd[1737]: time="2025-01-29T16:26:16.351116801Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 16:26:18.080741 containerd[1737]: time="2025-01-29T16:26:18.080605604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:18.084718 containerd[1737]: time="2025-01-29T16:26:18.084649431Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701151" Jan 29 16:26:18.087478 containerd[1737]: time="2025-01-29T16:26:18.087405318Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:18.094671 containerd[1737]: time="2025-01-29T16:26:18.094632545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:18.095885 containerd[1737]: time="2025-01-29T16:26:18.095695978Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.744544376s" Jan 29 16:26:18.095885 containerd[1737]: time="2025-01-29T16:26:18.095734680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 16:26:18.096697 containerd[1737]: time="2025-01-29T16:26:18.096419901Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 16:26:19.467901 containerd[1737]: time="2025-01-29T16:26:19.467834961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:19.471476 containerd[1737]: time="2025-01-29T16:26:19.471361872Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652061" Jan 29 16:26:19.477804 containerd[1737]: time="2025-01-29T16:26:19.477761073Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:19.484145 containerd[1737]: time="2025-01-29T16:26:19.484086672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:19.485311 containerd[1737]: time="2025-01-29T16:26:19.485125904Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.388657102s" Jan 29 16:26:19.485311 containerd[1737]: time="2025-01-29T16:26:19.485165306Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 16:26:19.485931 containerd[1737]: time="2025-01-29T16:26:19.485878028Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 16:26:20.348896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 16:26:20.354697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:20.494557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:26:20.499066 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:26:20.535342 kubelet[2670]: E0129 16:26:20.535283 2670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:26:20.537776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:26:20.537980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:26:20.538421 systemd[1]: kubelet.service: Consumed 138ms CPU time, 95.8M memory peak. Jan 29 16:26:21.419539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496162488.mount: Deactivated successfully. Jan 29 16:26:21.963700 containerd[1737]: time="2025-01-29T16:26:21.963637405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:21.966615 containerd[1737]: time="2025-01-29T16:26:21.966546095Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136" Jan 29 16:26:21.969584 containerd[1737]: time="2025-01-29T16:26:21.969526387Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:21.974098 containerd[1737]: time="2025-01-29T16:26:21.974033225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:21.975132 containerd[1737]: time="2025-01-29T16:26:21.974630144Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.48859681s" Jan 29 16:26:21.975132 containerd[1737]: time="2025-01-29T16:26:21.974667645Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 16:26:21.975434 containerd[1737]: time="2025-01-29T16:26:21.975352066Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:26:22.617460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914274011.mount: Deactivated successfully. Jan 29 16:26:23.965975 containerd[1737]: time="2025-01-29T16:26:23.965899244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:23.969941 containerd[1737]: time="2025-01-29T16:26:23.969865666Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 16:26:23.973881 containerd[1737]: time="2025-01-29T16:26:23.973808188Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:23.979449 containerd[1737]: time="2025-01-29T16:26:23.979372959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:23.980737 containerd[1737]: time="2025-01-29T16:26:23.980406191Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.005005123s" Jan 29 16:26:23.980737 containerd[1737]: time="2025-01-29T16:26:23.980497894Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:26:23.981274 containerd[1737]: time="2025-01-29T16:26:23.981235116Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:26:24.583855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827959718.mount: Deactivated successfully. Jan 29 16:26:24.613605 containerd[1737]: time="2025-01-29T16:26:24.613541682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:24.616988 containerd[1737]: time="2025-01-29T16:26:24.616925186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 29 16:26:24.622541 containerd[1737]: time="2025-01-29T16:26:24.622483657Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:24.628405 containerd[1737]: time="2025-01-29T16:26:24.628371038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:24.631824 containerd[1737]: time="2025-01-29T16:26:24.630926217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 649.546697ms" Jan 29 16:26:24.631824 containerd[1737]: time="2025-01-29T16:26:24.630974318Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 16:26:24.632980 containerd[1737]: time="2025-01-29T16:26:24.632956479Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 16:26:25.199332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271226761.mount: Deactivated successfully. Jan 29 16:26:27.681323 containerd[1737]: time="2025-01-29T16:26:27.681250620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:27.685140 containerd[1737]: time="2025-01-29T16:26:27.685067438Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Jan 29 16:26:27.689699 containerd[1737]: time="2025-01-29T16:26:27.689638378Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:27.698262 containerd[1737]: time="2025-01-29T16:26:27.698221643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:27.699168 containerd[1737]: time="2025-01-29T16:26:27.699130871Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.066052387s" Jan 29 
16:26:27.699398 containerd[1737]: time="2025-01-29T16:26:27.699282875Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 16:26:30.184001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:30.184243 systemd[1]: kubelet.service: Consumed 138ms CPU time, 95.8M memory peak. Jan 29 16:26:30.191742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:30.226771 systemd[1]: Reload requested from client PID 2813 ('systemctl') (unit session-7.scope)... Jan 29 16:26:30.226797 systemd[1]: Reloading... Jan 29 16:26:30.375474 zram_generator::config[2861]: No configuration found. Jan 29 16:26:30.492319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:26:30.607673 systemd[1]: Reloading finished in 380 ms. Jan 29 16:26:30.660665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:30.673931 (kubelet)[2920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:26:30.678997 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:30.680405 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:26:30.680740 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:30.680818 systemd[1]: kubelet.service: Consumed 114ms CPU time, 84.5M memory peak. Jan 29 16:26:30.686721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:30.849767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:26:30.856418 (kubelet)[2933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:26:30.894736 kubelet[2933]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:26:30.894736 kubelet[2933]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:26:30.894736 kubelet[2933]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:26:31.456996 kubelet[2933]: I0129 16:26:31.456737 2933 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:26:31.909660 kubelet[2933]: I0129 16:26:31.909602 2933 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:26:31.909660 kubelet[2933]: I0129 16:26:31.909640 2933 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:26:31.910271 kubelet[2933]: I0129 16:26:31.910018 2933 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:26:31.941320 kubelet[2933]: I0129 16:26:31.940796 2933 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:26:31.941320 kubelet[2933]: E0129 16:26:31.941146 2933 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.200.8.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:31.950335 kubelet[2933]: E0129 16:26:31.950272 2933 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:26:31.950335 kubelet[2933]: I0129 16:26:31.950326 2933 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:26:31.956499 kubelet[2933]: I0129 16:26:31.956470 2933 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:26:31.957865 kubelet[2933]: I0129 16:26:31.957832 2933 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:26:31.958100 kubelet[2933]: I0129 16:26:31.958049 2933 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:26:31.958291 kubelet[2933]: I0129 16:26:31.958098 2933 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230.0.0-a-6998ca2965","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:26:31.958437 kubelet[2933]: I0129 16:26:31.958307 2933 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:26:31.958437 kubelet[2933]: I0129 16:26:31.958321 2933 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:26:31.958560 kubelet[2933]: I0129 16:26:31.958489 2933 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:31.961819 kubelet[2933]: I0129 16:26:31.961457 2933 
kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:26:31.961819 kubelet[2933]: I0129 16:26:31.961493 2933 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:26:31.961819 kubelet[2933]: I0129 16:26:31.961538 2933 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:26:31.961819 kubelet[2933]: I0129 16:26:31.961560 2933 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:26:31.967265 kubelet[2933]: W0129 16:26:31.967203 2933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-6998ca2965&limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 29 16:26:31.967360 kubelet[2933]: E0129 16:26:31.967281 2933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-6998ca2965&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:31.968527 kubelet[2933]: W0129 16:26:31.967801 2933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 29 16:26:31.968527 kubelet[2933]: E0129 16:26:31.967860 2933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:31.968527 kubelet[2933]: I0129 16:26:31.968386 2933 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:26:31.970956 kubelet[2933]: I0129 16:26:31.970612 2933 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:26:31.970956 kubelet[2933]: W0129 16:26:31.970694 2933 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:26:31.972480 kubelet[2933]: I0129 16:26:31.972242 2933 server.go:1269] "Started kubelet" Jan 29 16:26:31.983990 kubelet[2933]: I0129 16:26:31.983960 2933 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:26:31.989342 kubelet[2933]: I0129 16:26:31.989303 2933 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:26:31.990224 kubelet[2933]: I0129 16:26:31.990192 2933 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:26:31.993779 kubelet[2933]: E0129 16:26:31.987038 2933 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.0-a-6998ca2965.181f369459753e5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-6998ca2965,UID:ci-4230.0.0-a-6998ca2965,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-6998ca2965,},FirstTimestamp:2025-01-29 16:26:31.972216411 +0000 UTC m=+1.111279380,LastTimestamp:2025-01-29 16:26:31.972216411 +0000 UTC m=+1.111279380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-6998ca2965,}" Jan 29 16:26:31.993779 kubelet[2933]: I0129 16:26:31.991133 2933 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:26:31.993779 kubelet[2933]: I0129 16:26:31.993319 2933 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:26:31.993779 kubelet[2933]: E0129 16:26:31.993569 2933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.0-a-6998ca2965\" not found" Jan 29 16:26:31.994062 kubelet[2933]: I0129 16:26:31.993801 2933 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:26:31.994109 kubelet[2933]: I0129 16:26:31.994090 2933 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:26:31.997113 kubelet[2933]: E0129 16:26:31.997065 2933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-6998ca2965?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="200ms" Jan 29 16:26:31.998077 kubelet[2933]: I0129 16:26:31.997335 2933 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:26:31.998077 kubelet[2933]: I0129 16:26:31.997432 2933 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:26:32.000262 kubelet[2933]: I0129 16:26:32.000237 2933 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:26:32.003118 kubelet[2933]: I0129 16:26:32.003099 2933 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:26:32.003273 kubelet[2933]: I0129 16:26:32.003260 2933 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:26:32.009199 kubelet[2933]: E0129 16:26:32.009170 2933 
kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:26:32.025955 kubelet[2933]: W0129 16:26:32.025877 2933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 29 16:26:32.026108 kubelet[2933]: E0129 16:26:32.025974 2933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:32.034808 kubelet[2933]: I0129 16:26:32.034783 2933 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:26:32.034930 kubelet[2933]: I0129 16:26:32.034829 2933 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:26:32.034930 kubelet[2933]: I0129 16:26:32.034852 2933 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:32.041400 kubelet[2933]: I0129 16:26:32.041378 2933 policy_none.go:49] "None policy: Start" Jan 29 16:26:32.042450 kubelet[2933]: I0129 16:26:32.042412 2933 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:26:32.042562 kubelet[2933]: I0129 16:26:32.042476 2933 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:26:32.054527 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:26:32.072138 kubelet[2933]: I0129 16:26:32.072091 2933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:26:32.076760 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 29 16:26:32.078684 kubelet[2933]: I0129 16:26:32.078659 2933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:26:32.078966 kubelet[2933]: I0129 16:26:32.078946 2933 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:26:32.079051 kubelet[2933]: I0129 16:26:32.078986 2933 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:26:32.079097 kubelet[2933]: E0129 16:26:32.079059 2933 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:26:32.082053 kubelet[2933]: W0129 16:26:32.082015 2933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 29 16:26:32.082150 kubelet[2933]: E0129 16:26:32.082068 2933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:32.085830 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:26:32.093992 kubelet[2933]: E0129 16:26:32.093960 2933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.0-a-6998ca2965\" not found" Jan 29 16:26:32.096385 kubelet[2933]: I0129 16:26:32.096362 2933 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:26:32.097352 kubelet[2933]: I0129 16:26:32.096760 2933 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:26:32.097352 kubelet[2933]: I0129 16:26:32.096778 2933 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:26:32.097352 kubelet[2933]: I0129 16:26:32.097217 2933 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:26:32.099481 kubelet[2933]: E0129 16:26:32.099458 2933 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.0-a-6998ca2965\" not found" Jan 29 16:26:32.191669 systemd[1]: Created slice kubepods-burstable-podde12fe16e5b1ee7dab0c870638128e2c.slice - libcontainer container kubepods-burstable-podde12fe16e5b1ee7dab0c870638128e2c.slice. 
Jan 29 16:26:32.197968 kubelet[2933]: E0129 16:26:32.197878 2933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-6998ca2965?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="400ms" Jan 29 16:26:32.199932 kubelet[2933]: I0129 16:26:32.199847 2933 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.200256 kubelet[2933]: E0129 16:26:32.200219 2933 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.205577 kubelet[2933]: I0129 16:26:32.205544 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de12fe16e5b1ee7dab0c870638128e2c-ca-certs\") pod \"kube-apiserver-ci-4230.0.0-a-6998ca2965\" (UID: \"de12fe16e5b1ee7dab0c870638128e2c\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.205737 kubelet[2933]: I0129 16:26:32.205581 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51f6f01963759a772e1d02ceccbce4cc-kubeconfig\") pod \"kube-scheduler-ci-4230.0.0-a-6998ca2965\" (UID: \"51f6f01963759a772e1d02ceccbce4cc\") " pod="kube-system/kube-scheduler-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.205737 kubelet[2933]: I0129 16:26:32.205608 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " 
pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.205737 kubelet[2933]: I0129 16:26:32.205643 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.205737 kubelet[2933]: I0129 16:26:32.205696 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.205737 kubelet[2933]: I0129 16:26:32.205725 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.206290 kubelet[2933]: I0129 16:26:32.206034 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de12fe16e5b1ee7dab0c870638128e2c-k8s-certs\") pod \"kube-apiserver-ci-4230.0.0-a-6998ca2965\" (UID: \"de12fe16e5b1ee7dab0c870638128e2c\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.206290 kubelet[2933]: I0129 16:26:32.206076 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/de12fe16e5b1ee7dab0c870638128e2c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.0-a-6998ca2965\" (UID: \"de12fe16e5b1ee7dab0c870638128e2c\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.206290 kubelet[2933]: I0129 16:26:32.206124 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-ca-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.216998 systemd[1]: Created slice kubepods-burstable-pod99b8b7b170871ab9ef81b9c4855450ef.slice - libcontainer container kubepods-burstable-pod99b8b7b170871ab9ef81b9c4855450ef.slice. Jan 29 16:26:32.230228 systemd[1]: Created slice kubepods-burstable-pod51f6f01963759a772e1d02ceccbce4cc.slice - libcontainer container kubepods-burstable-pod51f6f01963759a772e1d02ceccbce4cc.slice. 
Jan 29 16:26:32.404195 kubelet[2933]: I0129 16:26:32.403647 2933 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.404195 kubelet[2933]: E0129 16:26:32.404051 2933 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.511023 containerd[1737]: time="2025-01-29T16:26:32.510862690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.0-a-6998ca2965,Uid:de12fe16e5b1ee7dab0c870638128e2c,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:32.528026 containerd[1737]: time="2025-01-29T16:26:32.527680195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.0-a-6998ca2965,Uid:99b8b7b170871ab9ef81b9c4855450ef,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:32.533870 containerd[1737]: time="2025-01-29T16:26:32.533821980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.0-a-6998ca2965,Uid:51f6f01963759a772e1d02ceccbce4cc,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:32.598499 kubelet[2933]: E0129 16:26:32.598394 2933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-6998ca2965?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="800ms" Jan 29 16:26:32.806818 kubelet[2933]: I0129 16:26:32.806702 2933 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:32.807162 kubelet[2933]: E0129 16:26:32.807087 2933 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:33.115412 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3819680439.mount: Deactivated successfully. Jan 29 16:26:33.163066 containerd[1737]: time="2025-01-29T16:26:33.163010478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:33.175532 containerd[1737]: time="2025-01-29T16:26:33.175366249Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 16:26:33.184548 containerd[1737]: time="2025-01-29T16:26:33.184500024Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:33.189189 containerd[1737]: time="2025-01-29T16:26:33.189146963Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:33.198022 containerd[1737]: time="2025-01-29T16:26:33.197961928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:26:33.204418 containerd[1737]: time="2025-01-29T16:26:33.204372821Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:33.210344 containerd[1737]: time="2025-01-29T16:26:33.210258397Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:26:33.219043 containerd[1737]: time="2025-01-29T16:26:33.217788624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:26:33.219043 containerd[1737]: time="2025-01-29T16:26:33.218523346Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 690.727347ms" Jan 29 16:26:33.220472 containerd[1737]: time="2025-01-29T16:26:33.219975989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 686.040906ms" Jan 29 16:26:33.240940 containerd[1737]: time="2025-01-29T16:26:33.240771214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 729.77672ms" Jan 29 16:26:33.315165 kubelet[2933]: W0129 16:26:33.315102 2933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-6998ca2965&limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 29 16:26:33.315525 kubelet[2933]: E0129 16:26:33.315181 2933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-6998ca2965&limit=500&resourceVersion=0\": dial tcp 
10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:33.343138 kubelet[2933]: W0129 16:26:33.343084 2933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 29 16:26:33.343297 kubelet[2933]: E0129 16:26:33.343153 2933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:33.400026 kubelet[2933]: E0129 16:26:33.399877 2933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-6998ca2965?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="1.6s" Jan 29 16:26:33.495742 kubelet[2933]: W0129 16:26:33.495661 2933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 29 16:26:33.495742 kubelet[2933]: E0129 16:26:33.495750 2933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:33.507838 kubelet[2933]: W0129 16:26:33.507788 2933 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 29 16:26:33.507995 kubelet[2933]: E0129 16:26:33.507846 2933 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:33.609555 kubelet[2933]: I0129 16:26:33.609514 2933 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:33.609941 kubelet[2933]: E0129 16:26:33.609904 2933 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:34.131460 kubelet[2933]: E0129 16:26:34.130215 2933 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:26:34.164515 containerd[1737]: time="2025-01-29T16:26:34.163412627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:34.164515 containerd[1737]: time="2025-01-29T16:26:34.163493329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:34.164515 containerd[1737]: time="2025-01-29T16:26:34.163514530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:34.164515 containerd[1737]: time="2025-01-29T16:26:34.163605233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:34.165226 containerd[1737]: time="2025-01-29T16:26:34.164425257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:34.165226 containerd[1737]: time="2025-01-29T16:26:34.165071077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:34.165226 containerd[1737]: time="2025-01-29T16:26:34.165085777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:34.166650 containerd[1737]: time="2025-01-29T16:26:34.165258282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:34.166650 containerd[1737]: time="2025-01-29T16:26:34.162671505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:34.166650 containerd[1737]: time="2025-01-29T16:26:34.164762567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:34.166650 containerd[1737]: time="2025-01-29T16:26:34.164790668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:34.166650 containerd[1737]: time="2025-01-29T16:26:34.164911972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:34.219004 systemd[1]: run-containerd-runc-k8s.io-252e857f5251122b5e5d4f919af6bb28ff1959a733c2cd9297c374d33e6dee5b-runc.bYwqpF.mount: Deactivated successfully. Jan 29 16:26:34.232043 systemd[1]: Started cri-containerd-06b5a5fcb9258acc1cc1f82e8d66271ea6979146761e8e153e28c8cc43169bc8.scope - libcontainer container 06b5a5fcb9258acc1cc1f82e8d66271ea6979146761e8e153e28c8cc43169bc8. Jan 29 16:26:34.238183 systemd[1]: Started cri-containerd-252e857f5251122b5e5d4f919af6bb28ff1959a733c2cd9297c374d33e6dee5b.scope - libcontainer container 252e857f5251122b5e5d4f919af6bb28ff1959a733c2cd9297c374d33e6dee5b. Jan 29 16:26:34.241660 systemd[1]: Started cri-containerd-51aa4e44536ef9165ff66c4601a527c76f537a0aa21ba816e74adc4af8bf35aa.scope - libcontainer container 51aa4e44536ef9165ff66c4601a527c76f537a0aa21ba816e74adc4af8bf35aa. Jan 29 16:26:34.307066 containerd[1737]: time="2025-01-29T16:26:34.306875836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.0-a-6998ca2965,Uid:99b8b7b170871ab9ef81b9c4855450ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"252e857f5251122b5e5d4f919af6bb28ff1959a733c2cd9297c374d33e6dee5b\"" Jan 29 16:26:34.316454 containerd[1737]: time="2025-01-29T16:26:34.315922508Z" level=info msg="CreateContainer within sandbox \"252e857f5251122b5e5d4f919af6bb28ff1959a733c2cd9297c374d33e6dee5b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:26:34.325201 containerd[1737]: time="2025-01-29T16:26:34.325141285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.0-a-6998ca2965,Uid:de12fe16e5b1ee7dab0c870638128e2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"51aa4e44536ef9165ff66c4601a527c76f537a0aa21ba816e74adc4af8bf35aa\"" Jan 29 16:26:34.328077 containerd[1737]: time="2025-01-29T16:26:34.327973870Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.0-a-6998ca2965,Uid:51f6f01963759a772e1d02ceccbce4cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"06b5a5fcb9258acc1cc1f82e8d66271ea6979146761e8e153e28c8cc43169bc8\"" Jan 29 16:26:34.330020 containerd[1737]: time="2025-01-29T16:26:34.329852926Z" level=info msg="CreateContainer within sandbox \"51aa4e44536ef9165ff66c4601a527c76f537a0aa21ba816e74adc4af8bf35aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:26:34.330124 containerd[1737]: time="2025-01-29T16:26:34.330073733Z" level=info msg="CreateContainer within sandbox \"06b5a5fcb9258acc1cc1f82e8d66271ea6979146761e8e153e28c8cc43169bc8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:26:34.441835 containerd[1737]: time="2025-01-29T16:26:34.441698386Z" level=info msg="CreateContainer within sandbox \"252e857f5251122b5e5d4f919af6bb28ff1959a733c2cd9297c374d33e6dee5b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"11acad035c380366e5960fc4c4cd4dc90a15ec650f19e97bda2ae90b510e0004\"" Jan 29 16:26:34.442681 containerd[1737]: time="2025-01-29T16:26:34.442510710Z" level=info msg="StartContainer for \"11acad035c380366e5960fc4c4cd4dc90a15ec650f19e97bda2ae90b510e0004\"" Jan 29 16:26:34.455884 containerd[1737]: time="2025-01-29T16:26:34.455710907Z" level=info msg="CreateContainer within sandbox \"51aa4e44536ef9165ff66c4601a527c76f537a0aa21ba816e74adc4af8bf35aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"525a84f1d3bab924bfab2cd9637d1d8f778b6dbf7fd92cdb426a809d4a3673a1\"" Jan 29 16:26:34.456424 containerd[1737]: time="2025-01-29T16:26:34.456375226Z" level=info msg="StartContainer for \"525a84f1d3bab924bfab2cd9637d1d8f778b6dbf7fd92cdb426a809d4a3673a1\"" Jan 29 16:26:34.463462 containerd[1737]: time="2025-01-29T16:26:34.463416538Z" level=info msg="CreateContainer within sandbox 
\"06b5a5fcb9258acc1cc1f82e8d66271ea6979146761e8e153e28c8cc43169bc8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7af5ca809b8186785ea8f2761a258e9fbfdc57252b84b44dc14d72e1165c625f\"" Jan 29 16:26:34.464610 containerd[1737]: time="2025-01-29T16:26:34.464573973Z" level=info msg="StartContainer for \"7af5ca809b8186785ea8f2761a258e9fbfdc57252b84b44dc14d72e1165c625f\"" Jan 29 16:26:34.474653 systemd[1]: Started cri-containerd-11acad035c380366e5960fc4c4cd4dc90a15ec650f19e97bda2ae90b510e0004.scope - libcontainer container 11acad035c380366e5960fc4c4cd4dc90a15ec650f19e97bda2ae90b510e0004. Jan 29 16:26:34.498612 systemd[1]: Started cri-containerd-525a84f1d3bab924bfab2cd9637d1d8f778b6dbf7fd92cdb426a809d4a3673a1.scope - libcontainer container 525a84f1d3bab924bfab2cd9637d1d8f778b6dbf7fd92cdb426a809d4a3673a1. Jan 29 16:26:34.530910 systemd[1]: Started cri-containerd-7af5ca809b8186785ea8f2761a258e9fbfdc57252b84b44dc14d72e1165c625f.scope - libcontainer container 7af5ca809b8186785ea8f2761a258e9fbfdc57252b84b44dc14d72e1165c625f. 
Jan 29 16:26:35.065187 containerd[1737]: time="2025-01-29T16:26:35.065121611Z" level=info msg="StartContainer for \"11acad035c380366e5960fc4c4cd4dc90a15ec650f19e97bda2ae90b510e0004\" returns successfully" Jan 29 16:26:35.065365 containerd[1737]: time="2025-01-29T16:26:35.065332117Z" level=info msg="StartContainer for \"7af5ca809b8186785ea8f2761a258e9fbfdc57252b84b44dc14d72e1165c625f\" returns successfully" Jan 29 16:26:35.065622 containerd[1737]: time="2025-01-29T16:26:35.065429820Z" level=info msg="StartContainer for \"525a84f1d3bab924bfab2cd9637d1d8f778b6dbf7fd92cdb426a809d4a3673a1\" returns successfully" Jan 29 16:26:35.213716 kubelet[2933]: I0129 16:26:35.213673 2933 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:36.858752 kubelet[2933]: E0129 16:26:36.858623 2933 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.0.0-a-6998ca2965\" not found" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:36.933019 kubelet[2933]: E0129 16:26:36.932888 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-6998ca2965.181f369459753e5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-6998ca2965,UID:ci-4230.0.0-a-6998ca2965,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-6998ca2965,},FirstTimestamp:2025-01-29 16:26:31.972216411 +0000 UTC m=+1.111279380,LastTimestamp:2025-01-29 16:26:31.972216411 +0000 UTC m=+1.111279380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-6998ca2965,}" Jan 29 16:26:36.970701 kubelet[2933]: I0129 16:26:36.970640 2933 apiserver.go:52] "Watching apiserver" Jan 29 16:26:36.987429 
kubelet[2933]: I0129 16:26:36.987178 2933 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:36.987429 kubelet[2933]: E0129 16:26:36.987236 2933 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.0.0-a-6998ca2965\": node \"ci-4230.0.0-a-6998ca2965\" not found" Jan 29 16:26:36.988875 kubelet[2933]: E0129 16:26:36.988750 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-6998ca2965.181f36945ba8e168 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-6998ca2965,UID:ci-4230.0.0-a-6998ca2965,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-6998ca2965,},FirstTimestamp:2025-01-29 16:26:32.00915492 +0000 UTC m=+1.148217889,LastTimestamp:2025-01-29 16:26:32.00915492 +0000 UTC m=+1.148217889,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-6998ca2965,}" Jan 29 16:26:37.003473 kubelet[2933]: I0129 16:26:37.003405 2933 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:26:37.083507 kubelet[2933]: E0129 16:26:37.083056 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-6998ca2965.181f36945d1cc250 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-6998ca2965,UID:ci-4230.0.0-a-6998ca2965,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4230.0.0-a-6998ca2965 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-6998ca2965,},FirstTimestamp:2025-01-29 16:26:32.033526352 +0000 UTC m=+1.172589321,LastTimestamp:2025-01-29 16:26:32.033526352 +0000 UTC m=+1.172589321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-6998ca2965,}" Jan 29 16:26:37.143588 kubelet[2933]: E0129 16:26:37.142565 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-6998ca2965.181f36945d1cf389 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-6998ca2965,UID:ci-4230.0.0-a-6998ca2965,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4230.0.0-a-6998ca2965 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-6998ca2965,},FirstTimestamp:2025-01-29 16:26:32.033538953 +0000 UTC m=+1.172602022,LastTimestamp:2025-01-29 16:26:32.033538953 +0000 UTC m=+1.172602022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-6998ca2965,}" Jan 29 16:26:37.204490 kubelet[2933]: E0129 16:26:37.204333 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-6998ca2965.181f36945d1d051d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-6998ca2965,UID:ci-4230.0.0-a-6998ca2965,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ci-4230.0.0-a-6998ca2965 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-6998ca2965,},FirstTimestamp:2025-01-29 16:26:32.033543453 +0000 UTC 
m=+1.172606522,LastTimestamp:2025-01-29 16:26:32.033543453 +0000 UTC m=+1.172606522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-6998ca2965,}" Jan 29 16:26:37.266035 kubelet[2933]: E0129 16:26:37.265812 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.0.0-a-6998ca2965.181f36946114957c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-6998ca2965,UID:ci-4230.0.0-a-6998ca2965,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-6998ca2965,},FirstTimestamp:2025-01-29 16:26:32.100099452 +0000 UTC m=+1.239162421,LastTimestamp:2025-01-29 16:26:32.100099452 +0000 UTC m=+1.239162421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-6998ca2965,}" Jan 29 16:26:38.957836 systemd[1]: Reload requested from client PID 3207 ('systemctl') (unit session-7.scope)... Jan 29 16:26:38.957856 systemd[1]: Reloading... Jan 29 16:26:39.099492 zram_generator::config[3254]: No configuration found. Jan 29 16:26:39.123785 kubelet[2933]: W0129 16:26:39.123573 2933 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:26:39.229790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:26:39.358709 systemd[1]: Reloading finished in 400 ms. 
Jan 29 16:26:39.388611 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:39.409286 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:26:39.409579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:39.409653 systemd[1]: kubelet.service: Consumed 1.033s CPU time, 117M memory peak. Jan 29 16:26:39.415905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:39.535497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:39.544810 (kubelet)[3321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:26:39.594862 kubelet[3321]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:26:39.594862 kubelet[3321]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:26:39.594862 kubelet[3321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 16:26:39.595360 kubelet[3321]: I0129 16:26:39.594938 3321 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:26:39.602813 kubelet[3321]: I0129 16:26:39.602768 3321 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:26:39.602813 kubelet[3321]: I0129 16:26:39.602794 3321 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:26:39.603082 kubelet[3321]: I0129 16:26:39.603058 3321 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:26:39.604251 kubelet[3321]: I0129 16:26:39.604222 3321 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:26:39.609915 kubelet[3321]: I0129 16:26:39.609493 3321 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:26:39.615549 kubelet[3321]: E0129 16:26:39.615463 3321 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:26:39.615549 kubelet[3321]: I0129 16:26:39.615501 3321 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:26:39.620475 kubelet[3321]: I0129 16:26:39.620149 3321 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:26:39.620475 kubelet[3321]: I0129 16:26:39.620267 3321 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:26:39.620693 kubelet[3321]: I0129 16:26:39.620653 3321 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:26:39.620942 kubelet[3321]: I0129 16:26:39.620764 3321 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.0-a-6998ca2965","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:26:39.621210 kubelet[3321]: I0129 16:26:39.621060 3321 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:26:39.621210 kubelet[3321]: I0129 16:26:39.621073 3321 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:26:39.621210 kubelet[3321]: I0129 16:26:39.621111 3321 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:39.622032 kubelet[3321]: I0129 16:26:39.621386 3321 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:26:39.622032 kubelet[3321]: I0129 16:26:39.621401 3321 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:26:39.622032 kubelet[3321]: I0129 16:26:39.621497 3321 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:26:39.622032 kubelet[3321]: I0129 16:26:39.621533 3321 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:26:39.622463 kubelet[3321]: I0129 16:26:39.622398 3321 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:26:39.623142 kubelet[3321]: I0129 16:26:39.623119 3321 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:26:39.623646 kubelet[3321]: I0129 16:26:39.623625 3321 server.go:1269] "Started kubelet" Jan 29 16:26:39.628779 kubelet[3321]: I0129 16:26:39.628746 3321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:26:39.640027 kubelet[3321]: I0129 16:26:39.639980 3321 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:26:39.640475 kubelet[3321]: I0129 16:26:39.640453 3321 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:26:39.640733 kubelet[3321]: E0129 16:26:39.640712 3321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.0-a-6998ca2965\" not found" Jan 29 16:26:39.643706 kubelet[3321]: 
I0129 16:26:39.643401 3321 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:26:39.643706 kubelet[3321]: I0129 16:26:39.643662 3321 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:26:39.644188 kubelet[3321]: I0129 16:26:39.644171 3321 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:26:39.652543 kubelet[3321]: I0129 16:26:39.652521 3321 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:26:39.652677 kubelet[3321]: I0129 16:26:39.652656 3321 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:26:39.652954 kubelet[3321]: I0129 16:26:39.652896 3321 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:26:39.653233 kubelet[3321]: I0129 16:26:39.653214 3321 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:26:39.653752 kubelet[3321]: I0129 16:26:39.653667 3321 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:26:39.660132 kubelet[3321]: I0129 16:26:39.660092 3321 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:26:39.662006 kubelet[3321]: I0129 16:26:39.661983 3321 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:26:39.662006 kubelet[3321]: I0129 16:26:39.662009 3321 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:26:39.662131 kubelet[3321]: I0129 16:26:39.662027 3321 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:26:39.662131 kubelet[3321]: E0129 16:26:39.662068 3321 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:26:39.663721 kubelet[3321]: I0129 16:26:39.663569 3321 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:26:39.715679 kubelet[3321]: I0129 16:26:39.715645 3321 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:26:39.715679 kubelet[3321]: I0129 16:26:39.715664 3321 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:26:39.715679 kubelet[3321]: I0129 16:26:39.715685 3321 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:39.716228 kubelet[3321]: I0129 16:26:39.715859 3321 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:26:39.716228 kubelet[3321]: I0129 16:26:39.715871 3321 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:26:39.716228 kubelet[3321]: I0129 16:26:39.715896 3321 policy_none.go:49] "None policy: Start" Jan 29 16:26:39.717688 kubelet[3321]: I0129 16:26:39.716784 3321 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:26:39.717688 kubelet[3321]: I0129 16:26:39.716813 3321 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:26:39.717688 kubelet[3321]: I0129 16:26:39.716960 3321 state_mem.go:75] "Updated machine memory state" Jan 29 16:26:39.721348 kubelet[3321]: I0129 16:26:39.721320 3321 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:26:39.721592 kubelet[3321]: I0129 16:26:39.721575 3321 eviction_manager.go:189] 
"Eviction manager: starting control loop" Jan 29 16:26:39.721664 kubelet[3321]: I0129 16:26:39.721595 3321 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:26:39.722508 kubelet[3321]: I0129 16:26:39.722338 3321 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:26:39.770685 kubelet[3321]: W0129 16:26:39.770651 3321 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:26:39.773487 kubelet[3321]: W0129 16:26:39.773341 3321 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:26:39.773487 kubelet[3321]: W0129 16:26:39.773411 3321 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 16:26:39.773487 kubelet[3321]: E0129 16:26:39.773481 3321 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" already exists" pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.826754 kubelet[3321]: I0129 16:26:39.826165 3321 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.837079 kubelet[3321]: I0129 16:26:39.837050 3321 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.837204 kubelet[3321]: I0129 16:26:39.837128 3321 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.844867 kubelet[3321]: I0129 16:26:39.844825 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.844867 kubelet[3321]: I0129 16:26:39.844862 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de12fe16e5b1ee7dab0c870638128e2c-k8s-certs\") pod \"kube-apiserver-ci-4230.0.0-a-6998ca2965\" (UID: \"de12fe16e5b1ee7dab0c870638128e2c\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.845090 kubelet[3321]: I0129 16:26:39.844888 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de12fe16e5b1ee7dab0c870638128e2c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.0-a-6998ca2965\" (UID: \"de12fe16e5b1ee7dab0c870638128e2c\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.845090 kubelet[3321]: I0129 16:26:39.844914 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.845090 kubelet[3321]: I0129 16:26:39.844949 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.845090 
kubelet[3321]: I0129 16:26:39.844980 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51f6f01963759a772e1d02ceccbce4cc-kubeconfig\") pod \"kube-scheduler-ci-4230.0.0-a-6998ca2965\" (UID: \"51f6f01963759a772e1d02ceccbce4cc\") " pod="kube-system/kube-scheduler-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.845090 kubelet[3321]: I0129 16:26:39.845024 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de12fe16e5b1ee7dab0c870638128e2c-ca-certs\") pod \"kube-apiserver-ci-4230.0.0-a-6998ca2965\" (UID: \"de12fe16e5b1ee7dab0c870638128e2c\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.845255 kubelet[3321]: I0129 16:26:39.845046 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-ca-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:39.845255 kubelet[3321]: I0129 16:26:39.845078 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/99b8b7b170871ab9ef81b9c4855450ef-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.0-a-6998ca2965\" (UID: \"99b8b7b170871ab9ef81b9c4855450ef\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" Jan 29 16:26:40.632619 kubelet[3321]: I0129 16:26:40.632527 3321 apiserver.go:52] "Watching apiserver" Jan 29 16:26:40.645261 kubelet[3321]: I0129 16:26:40.645224 3321 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:26:40.746272 kubelet[3321]: I0129 16:26:40.746194 3321 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.0.0-a-6998ca2965" podStartSLOduration=1.746168596 podStartE2EDuration="1.746168596s" podCreationTimestamp="2025-01-29 16:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:40.745931789 +0000 UTC m=+1.195953369" watchObservedRunningTime="2025-01-29 16:26:40.746168596 +0000 UTC m=+1.196190076" Jan 29 16:26:40.746540 kubelet[3321]: I0129 16:26:40.746363 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.0.0-a-6998ca2965" podStartSLOduration=1.746353402 podStartE2EDuration="1.746353402s" podCreationTimestamp="2025-01-29 16:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:40.726827303 +0000 UTC m=+1.176848783" watchObservedRunningTime="2025-01-29 16:26:40.746353402 +0000 UTC m=+1.196374882" Jan 29 16:26:40.777288 kubelet[3321]: I0129 16:26:40.777214 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.0.0-a-6998ca2965" podStartSLOduration=1.777195047 podStartE2EDuration="1.777195047s" podCreationTimestamp="2025-01-29 16:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:40.760089123 +0000 UTC m=+1.210110703" watchObservedRunningTime="2025-01-29 16:26:40.777195047 +0000 UTC m=+1.227216627" Jan 29 16:26:42.004792 sudo[2383]: pam_unix(sudo:session): session closed for user root Jan 29 16:26:42.112519 sshd[2382]: Connection closed by 10.200.16.10 port 46652 Jan 29 16:26:42.113350 sshd-session[2380]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:42.118833 systemd[1]: 
sshd@4-10.200.8.22:22-10.200.16.10:46652.service: Deactivated successfully. Jan 29 16:26:42.122084 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:26:42.122600 systemd[1]: session-7.scope: Consumed 3.271s CPU time, 217.4M memory peak. Jan 29 16:26:42.125193 systemd-logind[1723]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:26:42.126555 systemd-logind[1723]: Removed session 7. Jan 29 16:26:44.718185 kubelet[3321]: I0129 16:26:44.718139 3321 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:26:44.719098 kubelet[3321]: I0129 16:26:44.718791 3321 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:26:44.719157 containerd[1737]: time="2025-01-29T16:26:44.718548488Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:26:45.532279 systemd[1]: Created slice kubepods-besteffort-podfb784d1f_7c8b_4c47_8a51_d0fd1681d96c.slice - libcontainer container kubepods-besteffort-podfb784d1f_7c8b_4c47_8a51_d0fd1681d96c.slice. 
Jan 29 16:26:45.548394 kubelet[3321]: W0129 16:26:45.548353 3321 reflector.go:561] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230.0.0-a-6998ca2965" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230.0.0-a-6998ca2965' and this object
Jan 29 16:26:45.548581 kubelet[3321]: E0129 16:26:45.548413 3321 reflector.go:158] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4230.0.0-a-6998ca2965\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ci-4230.0.0-a-6998ca2965' and this object" logger="UnhandledError"
Jan 29 16:26:45.548581 kubelet[3321]: W0129 16:26:45.548484 3321 reflector.go:561] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4230.0.0-a-6998ca2965" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230.0.0-a-6998ca2965' and this object
Jan 29 16:26:45.548581 kubelet[3321]: E0129 16:26:45.548503 3321 reflector.go:158] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-flannel-cfg\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-flannel-cfg\" is forbidden: User \"system:node:ci-4230.0.0-a-6998ca2965\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ci-4230.0.0-a-6998ca2965' and this object" logger="UnhandledError"
Jan 29 16:26:45.557176 systemd[1]: Created slice kubepods-burstable-pod0f71d2a2_834f_4f1b_9622_f052f493a072.slice - libcontainer container kubepods-burstable-pod0f71d2a2_834f_4f1b_9622_f052f493a072.slice.
Jan 29 16:26:45.582282 kubelet[3321]: I0129 16:26:45.582242 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb784d1f-7c8b-4c47-8a51-d0fd1681d96c-xtables-lock\") pod \"kube-proxy-cvd7r\" (UID: \"fb784d1f-7c8b-4c47-8a51-d0fd1681d96c\") " pod="kube-system/kube-proxy-cvd7r"
Jan 29 16:26:45.582427 kubelet[3321]: I0129 16:26:45.582290 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb784d1f-7c8b-4c47-8a51-d0fd1681d96c-lib-modules\") pod \"kube-proxy-cvd7r\" (UID: \"fb784d1f-7c8b-4c47-8a51-d0fd1681d96c\") " pod="kube-system/kube-proxy-cvd7r"
Jan 29 16:26:45.582427 kubelet[3321]: I0129 16:26:45.582324 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmf5p\" (UniqueName: \"kubernetes.io/projected/0f71d2a2-834f-4f1b-9622-f052f493a072-kube-api-access-zmf5p\") pod \"kube-flannel-ds-x5pkd\" (UID: \"0f71d2a2-834f-4f1b-9622-f052f493a072\") " pod="kube-flannel/kube-flannel-ds-x5pkd"
Jan 29 16:26:45.582427 kubelet[3321]: I0129 16:26:45.582348 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/0f71d2a2-834f-4f1b-9622-f052f493a072-flannel-cfg\") pod \"kube-flannel-ds-x5pkd\" (UID: \"0f71d2a2-834f-4f1b-9622-f052f493a072\") " pod="kube-flannel/kube-flannel-ds-x5pkd"
Jan 29 16:26:45.582427 kubelet[3321]: I0129 16:26:45.582366 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f71d2a2-834f-4f1b-9622-f052f493a072-run\") pod \"kube-flannel-ds-x5pkd\" (UID: \"0f71d2a2-834f-4f1b-9622-f052f493a072\") " pod="kube-flannel/kube-flannel-ds-x5pkd"
Jan 29 16:26:45.582427 kubelet[3321]: I0129 16:26:45.582385 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/0f71d2a2-834f-4f1b-9622-f052f493a072-cni-plugin\") pod \"kube-flannel-ds-x5pkd\" (UID: \"0f71d2a2-834f-4f1b-9622-f052f493a072\") " pod="kube-flannel/kube-flannel-ds-x5pkd"
Jan 29 16:26:45.582664 kubelet[3321]: I0129 16:26:45.582405 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/0f71d2a2-834f-4f1b-9622-f052f493a072-cni\") pod \"kube-flannel-ds-x5pkd\" (UID: \"0f71d2a2-834f-4f1b-9622-f052f493a072\") " pod="kube-flannel/kube-flannel-ds-x5pkd"
Jan 29 16:26:45.582664 kubelet[3321]: I0129 16:26:45.582428 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb784d1f-7c8b-4c47-8a51-d0fd1681d96c-kube-proxy\") pod \"kube-proxy-cvd7r\" (UID: \"fb784d1f-7c8b-4c47-8a51-d0fd1681d96c\") " pod="kube-system/kube-proxy-cvd7r"
Jan 29 16:26:45.582664 kubelet[3321]: I0129 16:26:45.582465 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjqb\" (UniqueName: \"kubernetes.io/projected/fb784d1f-7c8b-4c47-8a51-d0fd1681d96c-kube-api-access-ltjqb\") pod \"kube-proxy-cvd7r\" (UID: \"fb784d1f-7c8b-4c47-8a51-d0fd1681d96c\") " pod="kube-system/kube-proxy-cvd7r"
Jan 29 16:26:45.582664 kubelet[3321]: I0129 16:26:45.582487 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f71d2a2-834f-4f1b-9622-f052f493a072-xtables-lock\") pod \"kube-flannel-ds-x5pkd\" (UID: \"0f71d2a2-834f-4f1b-9622-f052f493a072\") " pod="kube-flannel/kube-flannel-ds-x5pkd"
Jan 29 16:26:45.841207 containerd[1737]: time="2025-01-29T16:26:45.841049787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvd7r,Uid:fb784d1f-7c8b-4c47-8a51-d0fd1681d96c,Namespace:kube-system,Attempt:0,}"
Jan 29 16:26:45.887237 containerd[1737]: time="2025-01-29T16:26:45.886891874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:45.887237 containerd[1737]: time="2025-01-29T16:26:45.886937376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:45.887237 containerd[1737]: time="2025-01-29T16:26:45.886956076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:45.887237 containerd[1737]: time="2025-01-29T16:26:45.887037079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:45.911596 systemd[1]: run-containerd-runc-k8s.io-2779dcbc7efc602c04606917b76eeee501ed7d42111373f23d0610b765084a83-runc.iUSkuU.mount: Deactivated successfully.
Jan 29 16:26:45.923626 systemd[1]: Started cri-containerd-2779dcbc7efc602c04606917b76eeee501ed7d42111373f23d0610b765084a83.scope - libcontainer container 2779dcbc7efc602c04606917b76eeee501ed7d42111373f23d0610b765084a83.
Jan 29 16:26:45.947599 containerd[1737]: time="2025-01-29T16:26:45.947487407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvd7r,Uid:fb784d1f-7c8b-4c47-8a51-d0fd1681d96c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2779dcbc7efc602c04606917b76eeee501ed7d42111373f23d0610b765084a83\""
Jan 29 16:26:45.951321 containerd[1737]: time="2025-01-29T16:26:45.951278622Z" level=info msg="CreateContainer within sandbox \"2779dcbc7efc602c04606917b76eeee501ed7d42111373f23d0610b765084a83\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 16:26:45.988795 containerd[1737]: time="2025-01-29T16:26:45.988743455Z" level=info msg="CreateContainer within sandbox \"2779dcbc7efc602c04606917b76eeee501ed7d42111373f23d0610b765084a83\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5a2539898b4ed84b9875f1da502636e3b51fb7c606696e2f60b5c388f8740e2f\""
Jan 29 16:26:45.989490 containerd[1737]: time="2025-01-29T16:26:45.989436676Z" level=info msg="StartContainer for \"5a2539898b4ed84b9875f1da502636e3b51fb7c606696e2f60b5c388f8740e2f\""
Jan 29 16:26:46.018649 systemd[1]: Started cri-containerd-5a2539898b4ed84b9875f1da502636e3b51fb7c606696e2f60b5c388f8740e2f.scope - libcontainer container 5a2539898b4ed84b9875f1da502636e3b51fb7c606696e2f60b5c388f8740e2f.
Jan 29 16:26:46.053268 containerd[1737]: time="2025-01-29T16:26:46.053200205Z" level=info msg="StartContainer for \"5a2539898b4ed84b9875f1da502636e3b51fb7c606696e2f60b5c388f8740e2f\" returns successfully"
Jan 29 16:26:46.683635 kubelet[3321]: E0129 16:26:46.683566 3321 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:26:46.684250 kubelet[3321]: E0129 16:26:46.683701 3321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f71d2a2-834f-4f1b-9622-f052f493a072-flannel-cfg podName:0f71d2a2-834f-4f1b-9622-f052f493a072 nodeName:}" failed. No retries permitted until 2025-01-29 16:26:47.183673976 +0000 UTC m=+7.633695456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/0f71d2a2-834f-4f1b-9622-f052f493a072-flannel-cfg") pod "kube-flannel-ds-x5pkd" (UID: "0f71d2a2-834f-4f1b-9622-f052f493a072") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:26:46.691169 kubelet[3321]: E0129 16:26:46.690229 3321 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:26:46.691169 kubelet[3321]: E0129 16:26:46.690281 3321 projected.go:194] Error preparing data for projected volume kube-api-access-zmf5p for pod kube-flannel/kube-flannel-ds-x5pkd: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:26:46.691169 kubelet[3321]: E0129 16:26:46.690379 3321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f71d2a2-834f-4f1b-9622-f052f493a072-kube-api-access-zmf5p podName:0f71d2a2-834f-4f1b-9622-f052f493a072 nodeName:}" failed. No retries permitted until 2025-01-29 16:26:47.190361378 +0000 UTC m=+7.640382858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zmf5p" (UniqueName: "kubernetes.io/projected/0f71d2a2-834f-4f1b-9622-f052f493a072-kube-api-access-zmf5p") pod "kube-flannel-ds-x5pkd" (UID: "0f71d2a2-834f-4f1b-9622-f052f493a072") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:26:47.362207 containerd[1737]: time="2025-01-29T16:26:47.362147900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-x5pkd,Uid:0f71d2a2-834f-4f1b-9622-f052f493a072,Namespace:kube-flannel,Attempt:0,}"
Jan 29 16:26:47.416718 containerd[1737]: time="2025-01-29T16:26:47.416323138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:47.416718 containerd[1737]: time="2025-01-29T16:26:47.416393640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:47.416718 containerd[1737]: time="2025-01-29T16:26:47.416416241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:47.416718 containerd[1737]: time="2025-01-29T16:26:47.416533345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:47.444625 systemd[1]: Started cri-containerd-dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2.scope - libcontainer container dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2.
Jan 29 16:26:47.484478 containerd[1737]: time="2025-01-29T16:26:47.484243893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-x5pkd,Uid:0f71d2a2-834f-4f1b-9622-f052f493a072,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2\""
Jan 29 16:26:47.486896 containerd[1737]: time="2025-01-29T16:26:47.486551563Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 29 16:26:47.596528 kubelet[3321]: I0129 16:26:47.596424 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvd7r" podStartSLOduration=2.596398885 podStartE2EDuration="2.596398885s" podCreationTimestamp="2025-01-29 16:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:46.724682017 +0000 UTC m=+7.174703597" watchObservedRunningTime="2025-01-29 16:26:47.596398885 +0000 UTC m=+8.046420365"
Jan 29 16:26:47.696648 systemd[1]: run-containerd-runc-k8s.io-dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2-runc.KcwE6e.mount: Deactivated successfully.
Jan 29 16:26:49.436783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580641990.mount: Deactivated successfully.
Jan 29 16:26:49.551717 containerd[1737]: time="2025-01-29T16:26:49.551654430Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:49.556101 containerd[1737]: time="2025-01-29T16:26:49.555686152Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936"
Jan 29 16:26:49.562366 containerd[1737]: time="2025-01-29T16:26:49.561999843Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:49.568060 containerd[1737]: time="2025-01-29T16:26:49.568018025Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:49.568947 containerd[1737]: time="2025-01-29T16:26:49.568905752Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.082308988s"
Jan 29 16:26:49.569040 containerd[1737]: time="2025-01-29T16:26:49.568952454Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 29 16:26:49.572680 containerd[1737]: time="2025-01-29T16:26:49.572515261Z" level=info msg="CreateContainer within sandbox \"dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 29 16:26:49.619350 containerd[1737]: time="2025-01-29T16:26:49.619300877Z" level=info msg="CreateContainer within sandbox \"dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"057b911cd6d6528809a8ece3172f01cff0bb45a534f9776afe2d5f20ef6f214a\""
Jan 29 16:26:49.620168 containerd[1737]: time="2025-01-29T16:26:49.620104901Z" level=info msg="StartContainer for \"057b911cd6d6528809a8ece3172f01cff0bb45a534f9776afe2d5f20ef6f214a\""
Jan 29 16:26:49.656632 systemd[1]: Started cri-containerd-057b911cd6d6528809a8ece3172f01cff0bb45a534f9776afe2d5f20ef6f214a.scope - libcontainer container 057b911cd6d6528809a8ece3172f01cff0bb45a534f9776afe2d5f20ef6f214a.
Jan 29 16:26:49.685903 systemd[1]: cri-containerd-057b911cd6d6528809a8ece3172f01cff0bb45a534f9776afe2d5f20ef6f214a.scope: Deactivated successfully.
Jan 29 16:26:49.690977 containerd[1737]: time="2025-01-29T16:26:49.690138119Z" level=info msg="StartContainer for \"057b911cd6d6528809a8ece3172f01cff0bb45a534f9776afe2d5f20ef6f214a\" returns successfully"
Jan 29 16:26:49.830789 containerd[1737]: time="2025-01-29T16:26:49.830710472Z" level=info msg="shim disconnected" id=057b911cd6d6528809a8ece3172f01cff0bb45a534f9776afe2d5f20ef6f214a namespace=k8s.io
Jan 29 16:26:49.830789 containerd[1737]: time="2025-01-29T16:26:49.830777174Z" level=warning msg="cleaning up after shim disconnected" id=057b911cd6d6528809a8ece3172f01cff0bb45a534f9776afe2d5f20ef6f214a namespace=k8s.io
Jan 29 16:26:49.830789 containerd[1737]: time="2025-01-29T16:26:49.830791074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:50.336976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-057b911cd6d6528809a8ece3172f01cff0bb45a534f9776afe2d5f20ef6f214a-rootfs.mount: Deactivated successfully.
Jan 29 16:26:50.731205 containerd[1737]: time="2025-01-29T16:26:50.730227081Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 29 16:26:52.708322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875449429.mount: Deactivated successfully.
Jan 29 16:26:53.890159 containerd[1737]: time="2025-01-29T16:26:53.890097551Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:53.894949 containerd[1737]: time="2025-01-29T16:26:53.894872796Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Jan 29 16:26:53.899143 containerd[1737]: time="2025-01-29T16:26:53.899070123Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:53.906475 containerd[1737]: time="2025-01-29T16:26:53.906241739Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:53.908146 containerd[1737]: time="2025-01-29T16:26:53.907283771Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.176988687s"
Jan 29 16:26:53.908146 containerd[1737]: time="2025-01-29T16:26:53.907329472Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 29 16:26:53.910862 containerd[1737]: time="2025-01-29T16:26:53.910815778Z" level=info msg="CreateContainer within sandbox \"dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 16:26:53.956873 containerd[1737]: time="2025-01-29T16:26:53.956500258Z" level=info msg="CreateContainer within sandbox \"dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a1945b7bc76e729d472c0fd1903684443c5e4f7a28ed85067fd406e858821cdb\""
Jan 29 16:26:53.958716 containerd[1737]: time="2025-01-29T16:26:53.958024704Z" level=info msg="StartContainer for \"a1945b7bc76e729d472c0fd1903684443c5e4f7a28ed85067fd406e858821cdb\""
Jan 29 16:26:53.997651 systemd[1]: Started cri-containerd-a1945b7bc76e729d472c0fd1903684443c5e4f7a28ed85067fd406e858821cdb.scope - libcontainer container a1945b7bc76e729d472c0fd1903684443c5e4f7a28ed85067fd406e858821cdb.
Jan 29 16:26:54.025091 systemd[1]: cri-containerd-a1945b7bc76e729d472c0fd1903684443c5e4f7a28ed85067fd406e858821cdb.scope: Deactivated successfully.
Jan 29 16:26:54.031975 containerd[1737]: time="2025-01-29T16:26:54.031878537Z" level=info msg="StartContainer for \"a1945b7bc76e729d472c0fd1903684443c5e4f7a28ed85067fd406e858821cdb\" returns successfully"
Jan 29 16:26:54.056701 kubelet[3321]: I0129 16:26:54.056654 3321 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 29 16:26:54.065240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1945b7bc76e729d472c0fd1903684443c5e4f7a28ed85067fd406e858821cdb-rootfs.mount: Deactivated successfully.
Jan 29 16:26:54.114804 systemd[1]: Created slice kubepods-burstable-poda5bbdbf0_439d_4f18_be8a_4a08b874511f.slice - libcontainer container kubepods-burstable-poda5bbdbf0_439d_4f18_be8a_4a08b874511f.slice.
Jan 29 16:26:54.126094 systemd[1]: Created slice kubepods-burstable-poddf8c5911_079e_4815_a06c_11b2ef275c93.slice - libcontainer container kubepods-burstable-poddf8c5911_079e_4815_a06c_11b2ef275c93.slice.
Jan 29 16:26:54.137996 kubelet[3321]: I0129 16:26:54.137952 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfbnz\" (UniqueName: \"kubernetes.io/projected/df8c5911-079e-4815-a06c-11b2ef275c93-kube-api-access-jfbnz\") pod \"coredns-6f6b679f8f-xm4zn\" (UID: \"df8c5911-079e-4815-a06c-11b2ef275c93\") " pod="kube-system/coredns-6f6b679f8f-xm4zn"
Jan 29 16:26:54.138203 kubelet[3321]: I0129 16:26:54.138001 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5bbdbf0-439d-4f18-be8a-4a08b874511f-config-volume\") pod \"coredns-6f6b679f8f-wrnl8\" (UID: \"a5bbdbf0-439d-4f18-be8a-4a08b874511f\") " pod="kube-system/coredns-6f6b679f8f-wrnl8"
Jan 29 16:26:54.138203 kubelet[3321]: I0129 16:26:54.138031 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58kbv\" (UniqueName: \"kubernetes.io/projected/a5bbdbf0-439d-4f18-be8a-4a08b874511f-kube-api-access-58kbv\") pod \"coredns-6f6b679f8f-wrnl8\" (UID: \"a5bbdbf0-439d-4f18-be8a-4a08b874511f\") " pod="kube-system/coredns-6f6b679f8f-wrnl8"
Jan 29 16:26:54.138203 kubelet[3321]: I0129 16:26:54.138054 3321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df8c5911-079e-4815-a06c-11b2ef275c93-config-volume\") pod \"coredns-6f6b679f8f-xm4zn\" (UID: \"df8c5911-079e-4815-a06c-11b2ef275c93\") " pod="kube-system/coredns-6f6b679f8f-xm4zn"
Jan 29 16:26:54.465762 containerd[1737]: time="2025-01-29T16:26:54.465689449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrnl8,Uid:a5bbdbf0-439d-4f18-be8a-4a08b874511f,Namespace:kube-system,Attempt:0,}"
Jan 29 16:26:54.469574 containerd[1737]: time="2025-01-29T16:26:54.469517564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xm4zn,Uid:df8c5911-079e-4815-a06c-11b2ef275c93,Namespace:kube-system,Attempt:0,}"
Jan 29 16:26:54.682237 containerd[1737]: time="2025-01-29T16:26:54.682043388Z" level=info msg="shim disconnected" id=a1945b7bc76e729d472c0fd1903684443c5e4f7a28ed85067fd406e858821cdb namespace=k8s.io
Jan 29 16:26:54.682237 containerd[1737]: time="2025-01-29T16:26:54.682235094Z" level=warning msg="cleaning up after shim disconnected" id=a1945b7bc76e729d472c0fd1903684443c5e4f7a28ed85067fd406e858821cdb namespace=k8s.io
Jan 29 16:26:54.682237 containerd[1737]: time="2025-01-29T16:26:54.682252294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:26:54.744395 containerd[1737]: time="2025-01-29T16:26:54.743792454Z" level=info msg="CreateContainer within sandbox \"dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 29 16:26:54.803666 containerd[1737]: time="2025-01-29T16:26:54.803502559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrnl8,Uid:a5bbdbf0-439d-4f18-be8a-4a08b874511f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b27a3717a19173b2c6883cf9d52e86b4d9174f17ad4a0aeb30f2eadcff267dc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:26:54.804300 kubelet[3321]: E0129 16:26:54.803804 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b27a3717a19173b2c6883cf9d52e86b4d9174f17ad4a0aeb30f2eadcff267dc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:26:54.804300 kubelet[3321]: E0129 16:26:54.803875 3321 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b27a3717a19173b2c6883cf9d52e86b4d9174f17ad4a0aeb30f2eadcff267dc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-wrnl8"
Jan 29 16:26:54.804300 kubelet[3321]: E0129 16:26:54.803891 3321 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b27a3717a19173b2c6883cf9d52e86b4d9174f17ad4a0aeb30f2eadcff267dc\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-wrnl8"
Jan 29 16:26:54.804300 kubelet[3321]: E0129 16:26:54.803941 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wrnl8_kube-system(a5bbdbf0-439d-4f18-be8a-4a08b874511f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wrnl8_kube-system(a5bbdbf0-439d-4f18-be8a-4a08b874511f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b27a3717a19173b2c6883cf9d52e86b4d9174f17ad4a0aeb30f2eadcff267dc\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-wrnl8" podUID="a5bbdbf0-439d-4f18-be8a-4a08b874511f"
Jan 29 16:26:54.813040 containerd[1737]: time="2025-01-29T16:26:54.812923944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xm4zn,Uid:df8c5911-079e-4815-a06c-11b2ef275c93,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d63cbd5ebe6e6cff0919edb6b670b3d40c794a2477306122bed9b922b692e3ca\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:26:54.813155 kubelet[3321]: E0129 16:26:54.813126 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63cbd5ebe6e6cff0919edb6b670b3d40c794a2477306122bed9b922b692e3ca\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:26:54.813218 kubelet[3321]: E0129 16:26:54.813176 3321 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63cbd5ebe6e6cff0919edb6b670b3d40c794a2477306122bed9b922b692e3ca\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xm4zn"
Jan 29 16:26:54.813218 kubelet[3321]: E0129 16:26:54.813202 3321 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d63cbd5ebe6e6cff0919edb6b670b3d40c794a2477306122bed9b922b692e3ca\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xm4zn"
Jan 29 16:26:54.813296 kubelet[3321]: E0129 16:26:54.813255 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xm4zn_kube-system(df8c5911-079e-4815-a06c-11b2ef275c93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xm4zn_kube-system(df8c5911-079e-4815-a06c-11b2ef275c93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d63cbd5ebe6e6cff0919edb6b670b3d40c794a2477306122bed9b922b692e3ca\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-xm4zn" podUID="df8c5911-079e-4815-a06c-11b2ef275c93"
Jan 29 16:26:54.825084 containerd[1737]: time="2025-01-29T16:26:54.825016409Z" level=info msg="CreateContainer within sandbox \"dcc3a8338425745265adcee886ed62ce9379874abc34ce2bcf5aa2f9c3c9fcb2\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"5d154cca2aa8088255f8406108dd0f36cbab444c3a31dd7522b3f6e321427845\""
Jan 29 16:26:54.825722 containerd[1737]: time="2025-01-29T16:26:54.825684430Z" level=info msg="StartContainer for \"5d154cca2aa8088255f8406108dd0f36cbab444c3a31dd7522b3f6e321427845\""
Jan 29 16:26:54.857668 systemd[1]: Started cri-containerd-5d154cca2aa8088255f8406108dd0f36cbab444c3a31dd7522b3f6e321427845.scope - libcontainer container 5d154cca2aa8088255f8406108dd0f36cbab444c3a31dd7522b3f6e321427845.
Jan 29 16:26:54.890924 containerd[1737]: time="2025-01-29T16:26:54.890866700Z" level=info msg="StartContainer for \"5d154cca2aa8088255f8406108dd0f36cbab444c3a31dd7522b3f6e321427845\" returns successfully"
Jan 29 16:26:56.030248 systemd-networkd[1337]: flannel.1: Link UP
Jan 29 16:26:56.030263 systemd-networkd[1337]: flannel.1: Gained carrier
Jan 29 16:26:57.125657 systemd-networkd[1337]: flannel.1: Gained IPv6LL
Jan 29 16:27:07.664886 containerd[1737]: time="2025-01-29T16:27:07.664287507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xm4zn,Uid:df8c5911-079e-4815-a06c-11b2ef275c93,Namespace:kube-system,Attempt:0,}"
Jan 29 16:27:07.721604 systemd-networkd[1337]: cni0: Link UP
Jan 29 16:27:07.722131 systemd-networkd[1337]: cni0: Gained carrier
Jan 29 16:27:07.727113 systemd-networkd[1337]: cni0: Lost carrier
Jan 29 16:27:07.824325 systemd-networkd[1337]: vethfc4302d2: Link UP
Jan 29 16:27:07.832336 kernel: cni0: port 1(vethfc4302d2) entered blocking state
Jan 29 16:27:07.832470 kernel: cni0: port 1(vethfc4302d2) entered disabled state
Jan 29 16:27:07.832506 kernel: vethfc4302d2: entered allmulticast mode
Jan 29 16:27:07.836504 kernel: vethfc4302d2: entered promiscuous mode
Jan 29 16:27:07.836578 kernel: cni0: port 1(vethfc4302d2) entered blocking state
Jan 29 16:27:07.836613 kernel: cni0: port 1(vethfc4302d2) entered forwarding state
Jan 29 16:27:07.838483 kernel: cni0: port 1(vethfc4302d2) entered disabled state
Jan 29 16:27:07.847686 kernel: cni0: port 1(vethfc4302d2) entered blocking state
Jan 29 16:27:07.847787 kernel: cni0: port 1(vethfc4302d2) entered forwarding state
Jan 29 16:27:07.847029 systemd-networkd[1337]: vethfc4302d2: Gained carrier
Jan 29 16:27:07.847420 systemd-networkd[1337]: cni0: Gained carrier
Jan 29 16:27:07.850047 containerd[1737]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Jan 29 16:27:07.850047 containerd[1737]: delegateAdd: netconf sent to delegate plugin:
Jan 29 16:27:07.871312 containerd[1737]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T16:27:07.871226582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:27:07.871312 containerd[1737]: time="2025-01-29T16:27:07.871280283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:27:07.871610 containerd[1737]: time="2025-01-29T16:27:07.871299384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:07.871610 containerd[1737]: time="2025-01-29T16:27:07.871387587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:07.904228 systemd[1]: run-containerd-runc-k8s.io-0722f008db10715a9fa3f78e4eb487f423452e678acfac4f752c74774740016f-runc.C8Bbxs.mount: Deactivated successfully.
Jan 29 16:27:07.909586 systemd[1]: Started cri-containerd-0722f008db10715a9fa3f78e4eb487f423452e678acfac4f752c74774740016f.scope - libcontainer container 0722f008db10715a9fa3f78e4eb487f423452e678acfac4f752c74774740016f.
Jan 29 16:27:07.949012 containerd[1737]: time="2025-01-29T16:27:07.948943939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xm4zn,Uid:df8c5911-079e-4815-a06c-11b2ef275c93,Namespace:kube-system,Attempt:0,} returns sandbox id \"0722f008db10715a9fa3f78e4eb487f423452e678acfac4f752c74774740016f\""
Jan 29 16:27:07.952204 containerd[1737]: time="2025-01-29T16:27:07.952161936Z" level=info msg="CreateContainer within sandbox \"0722f008db10715a9fa3f78e4eb487f423452e678acfac4f752c74774740016f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:27:07.998575 containerd[1737]: time="2025-01-29T16:27:07.998524742Z" level=info msg="CreateContainer within sandbox \"0722f008db10715a9fa3f78e4eb487f423452e678acfac4f752c74774740016f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9fe6945c7e7d30b6f95f010f95ccaacfd75c8c327cc3c6af4bb567d3277fb50\""
Jan 29 16:27:08.000294 containerd[1737]: time="2025-01-29T16:27:07.999301466Z" level=info msg="StartContainer for \"d9fe6945c7e7d30b6f95f010f95ccaacfd75c8c327cc3c6af4bb567d3277fb50\""
Jan 29 16:27:08.029650 systemd[1]: Started cri-containerd-d9fe6945c7e7d30b6f95f010f95ccaacfd75c8c327cc3c6af4bb567d3277fb50.scope - libcontainer container d9fe6945c7e7d30b6f95f010f95ccaacfd75c8c327cc3c6af4bb567d3277fb50.
Jan 29 16:27:08.064797 containerd[1737]: time="2025-01-29T16:27:08.064739350Z" level=info msg="StartContainer for \"d9fe6945c7e7d30b6f95f010f95ccaacfd75c8c327cc3c6af4bb567d3277fb50\" returns successfully" Jan 29 16:27:08.784489 kubelet[3321]: I0129 16:27:08.784408 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-x5pkd" podStartSLOduration=17.361904512 podStartE2EDuration="23.784387073s" podCreationTimestamp="2025-01-29 16:26:45 +0000 UTC" firstStartedPulling="2025-01-29 16:26:47.485927644 +0000 UTC m=+7.935949124" lastFinishedPulling="2025-01-29 16:26:53.908410105 +0000 UTC m=+14.358431685" observedRunningTime="2025-01-29 16:26:55.757770602 +0000 UTC m=+16.207792182" watchObservedRunningTime="2025-01-29 16:27:08.784387073 +0000 UTC m=+29.234408553" Jan 29 16:27:08.801694 kubelet[3321]: I0129 16:27:08.800806 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xm4zn" podStartSLOduration=23.80078367 podStartE2EDuration="23.80078367s" podCreationTimestamp="2025-01-29 16:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:27:08.783763254 +0000 UTC m=+29.233784834" watchObservedRunningTime="2025-01-29 16:27:08.80078367 +0000 UTC m=+29.250805250" Jan 29 16:27:08.965734 systemd-networkd[1337]: cni0: Gained IPv6LL Jan 29 16:27:09.221624 systemd-networkd[1337]: vethfc4302d2: Gained IPv6LL Jan 29 16:27:09.665060 containerd[1737]: time="2025-01-29T16:27:09.664319416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrnl8,Uid:a5bbdbf0-439d-4f18-be8a-4a08b874511f,Namespace:kube-system,Attempt:0,}" Jan 29 16:27:09.728580 systemd-networkd[1337]: veth921280f7: Link UP Jan 29 16:27:09.735578 kernel: cni0: port 2(veth921280f7) entered blocking state Jan 29 16:27:09.735672 kernel: cni0: port 2(veth921280f7) entered disabled state Jan 29 
16:27:09.740114 kernel: veth921280f7: entered allmulticast mode Jan 29 16:27:09.740182 kernel: veth921280f7: entered promiscuous mode Jan 29 16:27:09.747209 kernel: cni0: port 2(veth921280f7) entered blocking state Jan 29 16:27:09.747276 kernel: cni0: port 2(veth921280f7) entered forwarding state Jan 29 16:27:09.747402 systemd-networkd[1337]: veth921280f7: Gained carrier Jan 29 16:27:09.749290 containerd[1737]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000020938), "name":"cbr0", "type":"bridge"} Jan 29 16:27:09.749290 containerd[1737]: delegateAdd: netconf sent to delegate plugin: Jan 29 16:27:09.772048 containerd[1737]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T16:27:09.771960733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:27:09.772048 containerd[1737]: time="2025-01-29T16:27:09.772005634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:27:09.772345 containerd[1737]: time="2025-01-29T16:27:09.772019634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:09.772345 containerd[1737]: time="2025-01-29T16:27:09.772304143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:27:09.811625 systemd[1]: Started cri-containerd-a3553b2d4f86dc298b781a4e8fcade2ebe50189b75c6a2398d93b58b053d7540.scope - libcontainer container a3553b2d4f86dc298b781a4e8fcade2ebe50189b75c6a2398d93b58b053d7540. Jan 29 16:27:09.855175 containerd[1737]: time="2025-01-29T16:27:09.855135618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrnl8,Uid:a5bbdbf0-439d-4f18-be8a-4a08b874511f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3553b2d4f86dc298b781a4e8fcade2ebe50189b75c6a2398d93b58b053d7540\"" Jan 29 16:27:09.858287 containerd[1737]: time="2025-01-29T16:27:09.858239611Z" level=info msg="CreateContainer within sandbox \"a3553b2d4f86dc298b781a4e8fcade2ebe50189b75c6a2398d93b58b053d7540\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:27:09.898604 containerd[1737]: time="2025-01-29T16:27:09.898556816Z" level=info msg="CreateContainer within sandbox \"a3553b2d4f86dc298b781a4e8fcade2ebe50189b75c6a2398d93b58b053d7540\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f301d4b0786d149e88d9fb02ad3327f380fce65161da6346a94b4b0c4e1c264\"" Jan 29 16:27:09.899308 containerd[1737]: time="2025-01-29T16:27:09.899193935Z" level=info msg="StartContainer for \"2f301d4b0786d149e88d9fb02ad3327f380fce65161da6346a94b4b0c4e1c264\"" Jan 29 16:27:09.924641 systemd[1]: Started cri-containerd-2f301d4b0786d149e88d9fb02ad3327f380fce65161da6346a94b4b0c4e1c264.scope - libcontainer container 2f301d4b0786d149e88d9fb02ad3327f380fce65161da6346a94b4b0c4e1c264. 
Jan 29 16:27:09.956114 containerd[1737]: time="2025-01-29T16:27:09.956058334Z" level=info msg="StartContainer for \"2f301d4b0786d149e88d9fb02ad3327f380fce65161da6346a94b4b0c4e1c264\" returns successfully" Jan 29 16:27:10.789327 kubelet[3321]: I0129 16:27:10.788679 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wrnl8" podStartSLOduration=25.788657614999998 podStartE2EDuration="25.788657615s" podCreationTimestamp="2025-01-29 16:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:27:10.788538812 +0000 UTC m=+31.238560292" watchObservedRunningTime="2025-01-29 16:27:10.788657615 +0000 UTC m=+31.238679195" Jan 29 16:27:11.077647 systemd-networkd[1337]: veth921280f7: Gained IPv6LL Jan 29 16:28:34.005802 systemd[1]: Started sshd@5-10.200.8.22:22-10.200.16.10:54542.service - OpenSSH per-connection server daemon (10.200.16.10:54542). Jan 29 16:28:34.654316 sshd[4574]: Accepted publickey for core from 10.200.16.10 port 54542 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:28:34.655912 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:34.660895 systemd-logind[1723]: New session 8 of user core. Jan 29 16:28:34.672669 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:28:35.187118 sshd[4576]: Connection closed by 10.200.16.10 port 54542 Jan 29 16:28:35.188026 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:35.192638 systemd[1]: sshd@5-10.200.8.22:22-10.200.16.10:54542.service: Deactivated successfully. Jan 29 16:28:35.195022 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:28:35.195937 systemd-logind[1723]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:28:35.197096 systemd-logind[1723]: Removed session 8. 
Jan 29 16:28:40.312404 systemd[1]: Started sshd@6-10.200.8.22:22-10.200.16.10:52830.service - OpenSSH per-connection server daemon (10.200.16.10:52830). Jan 29 16:28:40.959168 sshd[4612]: Accepted publickey for core from 10.200.16.10 port 52830 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:28:40.960957 sshd-session[4612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:40.966086 systemd-logind[1723]: New session 9 of user core. Jan 29 16:28:40.972629 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:28:41.482609 sshd[4614]: Connection closed by 10.200.16.10 port 52830 Jan 29 16:28:41.483513 sshd-session[4612]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:41.487425 systemd[1]: sshd@6-10.200.8.22:22-10.200.16.10:52830.service: Deactivated successfully. Jan 29 16:28:41.489738 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:28:41.490576 systemd-logind[1723]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:28:41.491611 systemd-logind[1723]: Removed session 9. Jan 29 16:28:46.603400 systemd[1]: Started sshd@7-10.200.8.22:22-10.200.16.10:51866.service - OpenSSH per-connection server daemon (10.200.16.10:51866). Jan 29 16:28:47.250214 sshd[4672]: Accepted publickey for core from 10.200.16.10 port 51866 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:28:47.251709 sshd-session[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:47.256840 systemd-logind[1723]: New session 10 of user core. Jan 29 16:28:47.263632 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:28:47.779625 sshd[4674]: Connection closed by 10.200.16.10 port 51866 Jan 29 16:28:47.780550 sshd-session[4672]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:47.784034 systemd[1]: sshd@7-10.200.8.22:22-10.200.16.10:51866.service: Deactivated successfully. 
Jan 29 16:28:47.786357 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:28:47.788199 systemd-logind[1723]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:28:47.789248 systemd-logind[1723]: Removed session 10. Jan 29 16:28:47.904756 systemd[1]: Started sshd@8-10.200.8.22:22-10.200.16.10:51872.service - OpenSSH per-connection server daemon (10.200.16.10:51872). Jan 29 16:28:48.552339 sshd[4687]: Accepted publickey for core from 10.200.16.10 port 51872 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:28:48.553873 sshd-session[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:48.558348 systemd-logind[1723]: New session 11 of user core. Jan 29 16:28:48.567599 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:28:49.111275 sshd[4689]: Connection closed by 10.200.16.10 port 51872 Jan 29 16:28:49.112372 sshd-session[4687]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:49.115550 systemd[1]: sshd@8-10.200.8.22:22-10.200.16.10:51872.service: Deactivated successfully. Jan 29 16:28:49.118004 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:28:49.120016 systemd-logind[1723]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:28:49.121059 systemd-logind[1723]: Removed session 11. Jan 29 16:28:49.232792 systemd[1]: Started sshd@9-10.200.8.22:22-10.200.16.10:51888.service - OpenSSH per-connection server daemon (10.200.16.10:51888). Jan 29 16:28:49.881492 sshd[4699]: Accepted publickey for core from 10.200.16.10 port 51888 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:28:49.883323 sshd-session[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:49.890023 systemd-logind[1723]: New session 12 of user core. Jan 29 16:28:49.893636 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 16:28:50.402743 sshd[4701]: Connection closed by 10.200.16.10 port 51888 Jan 29 16:28:50.403635 sshd-session[4699]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:50.408996 systemd-logind[1723]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:28:50.409748 systemd[1]: sshd@9-10.200.8.22:22-10.200.16.10:51888.service: Deactivated successfully. Jan 29 16:28:50.412457 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:28:50.413996 systemd-logind[1723]: Removed session 12. Jan 29 16:28:55.524776 systemd[1]: Started sshd@10-10.200.8.22:22-10.200.16.10:51900.service - OpenSSH per-connection server daemon (10.200.16.10:51900). Jan 29 16:28:56.171281 sshd[4734]: Accepted publickey for core from 10.200.16.10 port 51900 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:28:56.172941 sshd-session[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:56.177999 systemd-logind[1723]: New session 13 of user core. Jan 29 16:28:56.187706 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:28:56.691958 sshd[4736]: Connection closed by 10.200.16.10 port 51900 Jan 29 16:28:56.695996 sshd-session[4734]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:56.700026 systemd-logind[1723]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:28:56.700753 systemd[1]: sshd@10-10.200.8.22:22-10.200.16.10:51900.service: Deactivated successfully. Jan 29 16:28:56.703557 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:28:56.704636 systemd-logind[1723]: Removed session 13. Jan 29 16:28:56.816826 systemd[1]: Started sshd@11-10.200.8.22:22-10.200.16.10:39722.service - OpenSSH per-connection server daemon (10.200.16.10:39722). 
Jan 29 16:28:57.467939 sshd[4769]: Accepted publickey for core from 10.200.16.10 port 39722 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:28:57.469609 sshd-session[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:57.474466 systemd-logind[1723]: New session 14 of user core. Jan 29 16:28:57.477738 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:28:58.059276 sshd[4771]: Connection closed by 10.200.16.10 port 39722 Jan 29 16:28:58.060358 sshd-session[4769]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:58.064311 systemd[1]: sshd@11-10.200.8.22:22-10.200.16.10:39722.service: Deactivated successfully. Jan 29 16:28:58.067338 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:28:58.069362 systemd-logind[1723]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:28:58.070692 systemd-logind[1723]: Removed session 14. Jan 29 16:28:58.186584 systemd[1]: Started sshd@12-10.200.8.22:22-10.200.16.10:39732.service - OpenSSH per-connection server daemon (10.200.16.10:39732). Jan 29 16:28:58.833169 sshd[4780]: Accepted publickey for core from 10.200.16.10 port 39732 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:28:58.835272 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:58.841305 systemd-logind[1723]: New session 15 of user core. Jan 29 16:28:58.846631 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:29:00.596511 sshd[4782]: Connection closed by 10.200.16.10 port 39732 Jan 29 16:29:00.598037 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:00.602152 systemd[1]: sshd@12-10.200.8.22:22-10.200.16.10:39732.service: Deactivated successfully. Jan 29 16:29:00.605287 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:29:00.607616 systemd-logind[1723]: Session 15 logged out. 
Waiting for processes to exit. Jan 29 16:29:00.608615 systemd-logind[1723]: Removed session 15. Jan 29 16:29:00.715753 systemd[1]: Started sshd@13-10.200.8.22:22-10.200.16.10:39738.service - OpenSSH per-connection server daemon (10.200.16.10:39738). Jan 29 16:29:01.363472 sshd[4799]: Accepted publickey for core from 10.200.16.10 port 39738 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:29:01.364030 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:01.370174 systemd-logind[1723]: New session 16 of user core. Jan 29 16:29:01.375622 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:29:01.983006 sshd[4807]: Connection closed by 10.200.16.10 port 39738 Jan 29 16:29:01.983814 sshd-session[4799]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:01.986910 systemd[1]: sshd@13-10.200.8.22:22-10.200.16.10:39738.service: Deactivated successfully. Jan 29 16:29:01.989396 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:29:01.991962 systemd-logind[1723]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:29:01.993434 systemd-logind[1723]: Removed session 16. Jan 29 16:29:02.104607 systemd[1]: Started sshd@14-10.200.8.22:22-10.200.16.10:39750.service - OpenSSH per-connection server daemon (10.200.16.10:39750). Jan 29 16:29:02.755776 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 39750 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:29:02.757462 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:02.762526 systemd-logind[1723]: New session 17 of user core. Jan 29 16:29:02.775667 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 16:29:03.282800 sshd[4834]: Connection closed by 10.200.16.10 port 39750 Jan 29 16:29:03.283677 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:03.288518 systemd-logind[1723]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:29:03.289359 systemd[1]: sshd@14-10.200.8.22:22-10.200.16.10:39750.service: Deactivated successfully. Jan 29 16:29:03.291855 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:29:03.292927 systemd-logind[1723]: Removed session 17. Jan 29 16:29:08.408035 systemd[1]: Started sshd@15-10.200.8.22:22-10.200.16.10:53100.service - OpenSSH per-connection server daemon (10.200.16.10:53100). Jan 29 16:29:09.057781 sshd[4871]: Accepted publickey for core from 10.200.16.10 port 53100 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:29:09.059675 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:09.065768 systemd-logind[1723]: New session 18 of user core. Jan 29 16:29:09.073643 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:29:09.577112 sshd[4873]: Connection closed by 10.200.16.10 port 53100 Jan 29 16:29:09.578036 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:09.581817 systemd[1]: sshd@15-10.200.8.22:22-10.200.16.10:53100.service: Deactivated successfully. Jan 29 16:29:09.584666 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:29:09.586436 systemd-logind[1723]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:29:09.587658 systemd-logind[1723]: Removed session 18. Jan 29 16:29:14.701801 systemd[1]: Started sshd@16-10.200.8.22:22-10.200.16.10:53110.service - OpenSSH per-connection server daemon (10.200.16.10:53110). 
Jan 29 16:29:15.352522 sshd[4906]: Accepted publickey for core from 10.200.16.10 port 53110 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:29:15.353980 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:15.358723 systemd-logind[1723]: New session 19 of user core. Jan 29 16:29:15.366607 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:29:15.865649 sshd[4908]: Connection closed by 10.200.16.10 port 53110 Jan 29 16:29:15.866513 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:15.870130 systemd[1]: sshd@16-10.200.8.22:22-10.200.16.10:53110.service: Deactivated successfully. Jan 29 16:29:15.872902 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:29:15.874775 systemd-logind[1723]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:29:15.875845 systemd-logind[1723]: Removed session 19. Jan 29 16:29:20.990870 systemd[1]: Started sshd@17-10.200.8.22:22-10.200.16.10:36886.service - OpenSSH per-connection server daemon (10.200.16.10:36886). Jan 29 16:29:21.640756 sshd[4943]: Accepted publickey for core from 10.200.16.10 port 36886 ssh2: RSA SHA256:KLuF2qNQ9wi2xXD22Uhdt/1W+BDmKRtVMszRfYnk3Ok Jan 29 16:29:21.642325 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:21.647513 systemd-logind[1723]: New session 20 of user core. Jan 29 16:29:21.655642 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:29:22.166873 sshd[4951]: Connection closed by 10.200.16.10 port 36886 Jan 29 16:29:22.167830 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:22.171520 systemd[1]: sshd@17-10.200.8.22:22-10.200.16.10:36886.service: Deactivated successfully. Jan 29 16:29:22.174314 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:29:22.176422 systemd-logind[1723]: Session 20 logged out. 
Waiting for processes to exit. Jan 29 16:29:22.177972 systemd-logind[1723]: Removed session 20.