Jan 23 18:39:42.634885 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 15:50:57 -00 2026 Jan 23 18:39:42.634915 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:39:42.634926 kernel: BIOS-provided physical RAM map: Jan 23 18:39:42.634932 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 18:39:42.634937 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 23 18:39:42.634941 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jan 23 18:39:42.634946 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jan 23 18:39:42.634950 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jan 23 18:39:42.634955 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jan 23 18:39:42.634963 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 23 18:39:42.634970 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 23 18:39:42.634976 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 23 18:39:42.634982 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 23 18:39:42.634988 kernel: printk: legacy bootconsole [earlyser0] enabled Jan 23 18:39:42.634994 kernel: NX (Execute Disable) protection: active Jan 23 18:39:42.635000 kernel: APIC: Static calls initialized Jan 23 18:39:42.635004 kernel: efi: EFI v2.7 by Microsoft Jan 23 18:39:42.635009 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9ac698 RNG=0x3ffd2018 Jan 23 18:39:42.635019 kernel: random: crng init done Jan 23 18:39:42.635025 kernel: secureboot: Secure boot disabled Jan 23 18:39:42.635031 kernel: SMBIOS 3.1.0 present. 
Jan 23 18:39:42.635035 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Jan 23 18:39:42.635039 kernel: DMI: Memory slots populated: 2/2 Jan 23 18:39:42.635043 kernel: Hypervisor detected: Microsoft Hyper-V Jan 23 18:39:42.635049 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jan 23 18:39:42.635061 kernel: Hyper-V: Nested features: 0x3e0101 Jan 23 18:39:42.635068 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 23 18:39:42.635074 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 23 18:39:42.635079 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 23 18:39:42.635083 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 23 18:39:42.635087 kernel: tsc: Detected 2300.000 MHz processor Jan 23 18:39:42.635091 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 18:39:42.635101 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 18:39:42.635110 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jan 23 18:39:42.635121 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 23 18:39:42.635127 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 18:39:42.635131 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jan 23 18:39:42.635136 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jan 23 18:39:42.635141 kernel: Using GB pages for direct mapping Jan 23 18:39:42.635149 kernel: ACPI: Early table checksum verification disabled Jan 23 18:39:42.635162 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 23 18:39:42.635169 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:39:42.635175 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:39:42.635180 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 23 18:39:42.635185 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 23 18:39:42.635190 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:39:42.635200 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:39:42.635208 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:39:42.635215 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 23 18:39:42.635222 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 23 18:39:42.635229 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:39:42.635234 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 23 18:39:42.635239 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 23 18:39:42.635245 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 23 18:39:42.635253 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 23 18:39:42.635260 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 23 18:39:42.635267 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 23 18:39:42.635273 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Jan 23 18:39:42.635277 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jan 23 18:39:42.635283 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 23 18:39:42.635289 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 23 18:39:42.635297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jan 23 18:39:42.635305 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jan 23 18:39:42.635313 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jan 23 18:39:42.635319 kernel: Zone ranges: Jan 23 18:39:42.635325 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 18:39:42.635331 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 23 18:39:42.635335 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 23 18:39:42.635341 kernel: Device empty Jan 23 18:39:42.635350 kernel: Movable zone start for each node Jan 23 18:39:42.635358 kernel: Early memory node ranges Jan 23 18:39:42.635365 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 23 18:39:42.635369 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jan 23 18:39:42.635375 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jan 23 18:39:42.635380 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 23 18:39:42.635388 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 23 18:39:42.635396 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 23 18:39:42.635403 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:39:42.635408 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 23 18:39:42.635413 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 23 18:39:42.635419 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 23 18:39:42.635427 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 23 18:39:42.635435 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 23 18:39:42.635442 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 18:39:42.635448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 18:39:42.635452 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 18:39:42.635457 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 23 18:39:42.635464 kernel: TSC deadline timer available Jan 23 18:39:42.635478 kernel: CPU topo: Max. logical packages: 1 Jan 23 18:39:42.635485 kernel: CPU topo: Max. logical dies: 1 Jan 23 18:39:42.635490 kernel: CPU topo: Max. dies per package: 1 Jan 23 18:39:42.635495 kernel: CPU topo: Max. threads per core: 2 Jan 23 18:39:42.635500 kernel: CPU topo: Num. cores per package: 1 Jan 23 18:39:42.635505 kernel: CPU topo: Num. 
threads per package: 2 Jan 23 18:39:42.635513 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 23 18:39:42.635523 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 23 18:39:42.635529 kernel: Booting paravirtualized kernel on Hyper-V Jan 23 18:39:42.635534 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 18:39:42.635539 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 23 18:39:42.635543 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 23 18:39:42.635551 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 23 18:39:42.635559 kernel: pcpu-alloc: [0] 0 1 Jan 23 18:39:42.635569 kernel: Hyper-V: PV spinlocks enabled Jan 23 18:39:42.635574 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 18:39:42.635580 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:39:42.635585 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 23 18:39:42.635592 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 18:39:42.635600 kernel: Fallback order for Node 0: 0 Jan 23 18:39:42.635610 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jan 23 18:39:42.635616 kernel: Policy zone: Normal Jan 23 18:39:42.635621 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 18:39:42.635625 kernel: software IO TLB: area num 2. Jan 23 18:39:42.635630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 18:39:42.635638 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 18:39:42.635646 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 18:39:42.635654 kernel: Dynamic Preempt: voluntary Jan 23 18:39:42.635664 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 18:39:42.635670 kernel: rcu: RCU event tracing is enabled. Jan 23 18:39:42.635680 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 18:39:42.635689 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 18:39:42.635698 kernel: Rude variant of Tasks RCU enabled. Jan 23 18:39:42.635706 kernel: Tracing variant of Tasks RCU enabled. Jan 23 18:39:42.635715 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 18:39:42.635723 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 18:39:42.635729 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:39:42.635735 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:39:42.635741 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:39:42.635749 kernel: Using NULL legacy PIC Jan 23 18:39:42.635758 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 23 18:39:42.635769 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 23 18:39:42.635777 kernel: Console: colour dummy device 80x25 Jan 23 18:39:42.635782 kernel: printk: legacy console [tty1] enabled Jan 23 18:39:42.635799 kernel: printk: legacy console [ttyS0] enabled Jan 23 18:39:42.635808 kernel: printk: legacy bootconsole [earlyser0] disabled Jan 23 18:39:42.635817 kernel: ACPI: Core revision 20240827 Jan 23 18:39:42.635825 kernel: Failed to register legacy timer interrupt Jan 23 18:39:42.635837 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 18:39:42.635843 kernel: x2apic enabled Jan 23 18:39:42.635848 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 18:39:42.635853 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 23 18:39:42.635858 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 18:39:42.635866 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jan 23 18:39:42.635875 kernel: Hyper-V: Using IPI hypercalls Jan 23 18:39:42.635883 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 23 18:39:42.635889 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 23 18:39:42.635894 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 23 18:39:42.635899 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 23 18:39:42.635908 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 23 18:39:42.635916 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 23 18:39:42.635923 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jan 23 18:39:42.635930 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000) Jan 23 18:39:42.635935 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 18:39:42.635941 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 23 18:39:42.635949 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 23 18:39:42.635958 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 18:39:42.635965 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 18:39:42.635972 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 18:39:42.635977 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 23 18:39:42.635983 kernel: RETBleed: Vulnerable Jan 23 18:39:42.635989 kernel: Speculative Store Bypass: Vulnerable Jan 23 18:39:42.635997 kernel: active return thunk: its_return_thunk Jan 23 18:39:42.636004 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 23 18:39:42.636011 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 18:39:42.636018 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 18:39:42.636026 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 18:39:42.636034 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 23 18:39:42.636041 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 23 18:39:42.636048 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 23 18:39:42.636058 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jan 23 18:39:42.636065 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jan 23 18:39:42.636073 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jan 23 18:39:42.636081 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 18:39:42.636088 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 23 18:39:42.636097 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 23 18:39:42.636106 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 23 18:39:42.636114 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jan 23 18:39:42.636124 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jan 23 18:39:42.636132 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jan 23 18:39:42.636141 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jan 23 18:39:42.636150 kernel: Freeing SMP alternatives memory: 32K Jan 23 18:39:42.636159 kernel: pid_max: default: 32768 minimum: 301 Jan 23 18:39:42.636167 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 18:39:42.636175 kernel: landlock: Up and running. Jan 23 18:39:42.636183 kernel: SELinux: Initializing. Jan 23 18:39:42.636192 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 23 18:39:42.636201 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 23 18:39:42.636209 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jan 23 18:39:42.636217 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jan 23 18:39:42.636226 kernel: signal: max sigframe size: 11952 Jan 23 18:39:42.636237 kernel: rcu: Hierarchical SRCU implementation. Jan 23 18:39:42.636248 kernel: rcu: Max phase no-delay instances is 400. Jan 23 18:39:42.636257 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 18:39:42.636265 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 18:39:42.636273 kernel: smp: Bringing up secondary CPUs ... Jan 23 18:39:42.636283 kernel: smpboot: x86: Booting SMP configuration: Jan 23 18:39:42.636291 kernel: .... 
node #0, CPUs: #1 Jan 23 18:39:42.636299 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 18:39:42.636310 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jan 23 18:39:42.636320 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 283604K reserved, 0K cma-reserved) Jan 23 18:39:42.636329 kernel: devtmpfs: initialized Jan 23 18:39:42.636337 kernel: x86/mm: Memory block size: 128MB Jan 23 18:39:42.636346 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 23 18:39:42.636354 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 18:39:42.636365 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 18:39:42.636374 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 18:39:42.636382 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 18:39:42.636392 kernel: audit: initializing netlink subsys (disabled) Jan 23 18:39:42.636400 kernel: audit: type=2000 audit(1769193578.080:1): state=initialized audit_enabled=0 res=1 Jan 23 18:39:42.636409 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 18:39:42.636417 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 18:39:42.636426 kernel: cpuidle: using governor menu Jan 23 18:39:42.636438 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 18:39:42.636446 kernel: dca service started, version 1.12.1 Jan 23 18:39:42.636455 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jan 23 18:39:42.636463 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 23 18:39:42.636471 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 18:39:42.636480 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 18:39:42.636489 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 18:39:42.636497 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 18:39:42.636506 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 18:39:42.636513 kernel: ACPI: Added _OSI(Module Device) Jan 23 18:39:42.636521 kernel: ACPI: Added _OSI(Processor Device) Jan 23 18:39:42.636529 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 18:39:42.636537 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 18:39:42.636544 kernel: ACPI: Interpreter enabled Jan 23 18:39:42.636554 kernel: ACPI: PM: (supports S0 S5) Jan 23 18:39:42.636563 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 18:39:42.636572 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 18:39:42.636580 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 23 18:39:42.636589 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 23 18:39:42.636597 kernel: iommu: Default domain type: Translated Jan 23 18:39:42.636606 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:39:42.636619 kernel: efivars: Registered efivars operations Jan 23 18:39:42.636629 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:39:42.636638 kernel: PCI: System does not support PCI Jan 23 18:39:42.636647 kernel: vgaarb: loaded Jan 23 18:39:42.636656 kernel: clocksource: Switched to clocksource tsc-early Jan 23 18:39:42.636664 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 18:39:42.636673 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:39:42.636684 kernel: pnp: PnP ACPI init Jan 23 18:39:42.636693 kernel: pnp: PnP ACPI: found 3 devices Jan 23 18:39:42.636702 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:39:42.636712 kernel: NET: Registered PF_INET protocol family Jan 23 18:39:42.636720 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 23 18:39:42.636730 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 23 18:39:42.636740 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:39:42.636753 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:39:42.636763 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 23 18:39:42.636772 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 23 18:39:42.636780 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 23 18:39:42.636807 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 23 18:39:42.636815 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:39:42.636824 kernel: NET: Registered PF_XDP protocol family Jan 23 18:39:42.636835 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:39:42.636844 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 18:39:42.636853 kernel: software IO TLB: mapped [mem 0x000000003a9ac000-0x000000003e9ac000] (64MB) Jan 23 18:39:42.636861 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jan 23 18:39:42.636870 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jan 23 18:39:42.636878 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, 
max_idle_ns: 440795277976 ns Jan 23 18:39:42.636886 kernel: clocksource: Switched to clocksource tsc Jan 23 18:39:42.636898 kernel: Initialise system trusted keyrings Jan 23 18:39:42.636907 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 23 18:39:42.636916 kernel: Key type asymmetric registered Jan 23 18:39:42.636924 kernel: Asymmetric key parser 'x509' registered Jan 23 18:39:42.636932 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 18:39:42.636941 kernel: io scheduler mq-deadline registered Jan 23 18:39:42.636949 kernel: io scheduler kyber registered Jan 23 18:39:42.636961 kernel: io scheduler bfq registered Jan 23 18:39:42.636970 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:39:42.636979 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:39:42.636987 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:39:42.636995 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 23 18:39:42.637004 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:39:42.637012 kernel: i8042: PNP: No PS/2 controller found. Jan 23 18:39:42.637171 kernel: rtc_cmos 00:02: registered as rtc0 Jan 23 18:39:42.637631 kernel: rtc_cmos 00:02: setting system clock to 2026-01-23T18:39:39 UTC (1769193579) Jan 23 18:39:42.638611 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 23 18:39:42.638673 kernel: intel_pstate: Intel P-state driver initializing Jan 23 18:39:42.638682 kernel: efifb: probing for efifb Jan 23 18:39:42.638691 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 18:39:42.638748 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 18:39:42.638757 kernel: efifb: scrolling: redraw Jan 23 18:39:42.638822 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 18:39:42.638830 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 18:39:42.638885 kernel: fb0: EFI VGA frame buffer device Jan 23 18:39:42.638893 kernel: pstore: Using crash dump compression: deflate Jan 23 18:39:42.638958 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 18:39:42.638969 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:39:42.639023 kernel: Segment Routing with IPv6 Jan 23 18:39:42.639031 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 18:39:42.639039 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:39:42.639092 kernel: Key type dns_resolver registered Jan 23 18:39:42.639101 kernel: IPI shorthand broadcast: enabled Jan 23 18:39:42.639156 kernel: sched_clock: Marking stable (1891027721, 88105896)->(2287437911, -308304294) Jan 23 18:39:42.639165 kernel: registered taskstats version 1 Jan 23 18:39:42.639221 kernel: Loading compiled-in X.509 certificates Jan 23 18:39:42.639231 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed4528912f8413ae803010e63385bcf7ed197cf1' Jan 23 18:39:42.639239 kernel: Demotion targets for Node 0: null Jan 23 18:39:42.639295 kernel: Key type .fscrypt registered Jan 23 18:39:42.639303 kernel: Key type fscrypt-provisioning registered Jan 23 18:39:42.639356 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 18:39:42.639365 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:39:42.639375 kernel: ima: No architecture policies found Jan 23 18:39:42.639437 kernel: clk: Disabling unused clocks Jan 23 18:39:42.639490 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 23 18:39:42.639498 kernel: Write protecting the kernel read-only data: 47104k Jan 23 18:39:42.639506 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 23 18:39:42.639559 kernel: Run /init as init process Jan 23 18:39:42.641919 kernel: with arguments: Jan 23 18:39:42.641934 kernel: /init Jan 23 18:39:42.641943 kernel: with environment: Jan 23 18:39:42.641951 kernel: HOME=/ Jan 23 18:39:42.641960 kernel: TERM=linux Jan 23 18:39:42.641969 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 18:39:42.641978 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 18:39:42.641987 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 18:39:42.641995 kernel: PTP clock support registered Jan 23 18:39:42.642006 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 18:39:42.642015 kernel: SCSI subsystem initialized Jan 23 18:39:42.642024 kernel: hv_vmbus: registering driver hv_utils Jan 23 18:39:42.642033 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 18:39:42.642041 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 18:39:42.642050 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 18:39:42.642059 kernel: hv_vmbus: registering driver hv_pci Jan 23 18:39:42.642217 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 23 18:39:42.642323 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 23 18:39:42.642456 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 23 18:39:42.642572 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 18:39:42.643740 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 23 18:39:42.644256 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 23 18:39:42.644491 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 18:39:42.646890 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 23 18:39:42.646911 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 18:39:42.647062 kernel: scsi host0: storvsc_host_t Jan 23 18:39:42.647198 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 18:39:42.647211 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 18:39:42.647221 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 18:39:42.647231 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 23 18:39:42.647345 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 18:39:42.647357 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 18:39:42.647368 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 23 18:39:42.647463 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 23 18:39:42.647594 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 23 18:39:42.647754 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 18:39:42.647772 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 18:39:42.647929 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 18:39:42.647943 kernel: cdrom: 
Uniform CD-ROM driver Revision: 3.20 Jan 23 18:39:42.648098 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 18:39:42.648111 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 18:39:42.648122 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:39:42.648133 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:39:42.648143 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 23 18:39:42.648169 kernel: raid6: avx512x4 gen() 46989 MB/s Jan 23 18:39:42.648180 kernel: raid6: avx512x2 gen() 45323 MB/s Jan 23 18:39:42.648191 kernel: raid6: avx512x1 gen() 27537 MB/s Jan 23 18:39:42.648201 kernel: raid6: avx2x4 gen() 39985 MB/s Jan 23 18:39:42.648211 kernel: raid6: avx2x2 gen() 43110 MB/s Jan 23 18:39:42.648222 kernel: raid6: avx2x1 gen() 32323 MB/s Jan 23 18:39:42.648232 kernel: raid6: using algorithm avx512x4 gen() 46989 MB/s Jan 23 18:39:42.648244 kernel: raid6: .... xor() 8040 MB/s, rmw enabled Jan 23 18:39:42.648254 kernel: raid6: using avx512x2 recovery algorithm Jan 23 18:39:42.648264 kernel: xor: automatically using best checksumming function avx Jan 23 18:39:42.648274 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 18:39:42.648285 kernel: BTRFS: device fsid ae5f9861-c401-42b4-99c9-2e3fe0b343c2 devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (970) Jan 23 18:39:42.648296 kernel: BTRFS info (device dm-0): first mount of filesystem ae5f9861-c401-42b4-99c9-2e3fe0b343c2 Jan 23 18:39:42.648306 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:39:42.648318 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 18:39:42.648328 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:39:42.648338 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:39:42.648349 kernel: loop: module loaded Jan 23 18:39:42.648359 kernel: loop0: detected capacity change from 0 to 100560 Jan 23 18:39:42.648369 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:39:42.648381 systemd[1]: Successfully made /usr/ read-only. Jan 23 18:39:42.648397 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:39:42.648409 systemd[1]: Detected virtualization microsoft. Jan 23 18:39:42.648419 systemd[1]: Detected architecture x86-64. Jan 23 18:39:42.648430 systemd[1]: Running in initrd. Jan 23 18:39:42.648441 systemd[1]: No hostname configured, using default hostname. Jan 23 18:39:42.648453 systemd[1]: Hostname set to . Jan 23 18:39:42.648465 systemd[1]: Initializing machine ID from random generator. Jan 23 18:39:42.648478 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:39:42.648489 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:39:42.648499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:39:42.648509 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 23 18:39:42.648522 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:39:42.648535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:39:42.648546 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:39:42.648557 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:39:42.648569 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:39:42.648581 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:39:42.648592 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:39:42.648603 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:39:42.648613 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:39:42.648626 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:39:42.648636 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:39:42.648648 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:39:42.648659 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:39:42.648669 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:39:42.648679 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 18:39:42.648689 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:39:42.648699 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:39:42.648710 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:39:42.648722 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:39:42.648731 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:39:42.648742 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:39:42.648752 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 18:39:42.648763 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:39:42.648773 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:39:42.648800 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:39:42.648813 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:39:42.650820 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:39:42.650832 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:39:42.650843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:39:42.650856 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:39:42.650889 systemd-journald[1106]: Collecting audit messages is enabled. Jan 23 18:39:42.650912 kernel: audit: type=1130 audit(1769193582.630:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.650925 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 23 18:39:42.650936 kernel: audit: type=1130 audit(1769193582.636:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.650945 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:39:42.650955 kernel: audit: type=1130 audit(1769193582.642:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.650966 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:39:42.650977 systemd-journald[1106]: Journal started Jan 23 18:39:42.651000 systemd-journald[1106]: Runtime Journal (/run/log/journal/7039dbc7936344b695aecd31cdd6d6cf) is 8M, max 158.5M, 150.5M free. Jan 23 18:39:42.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.653813 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:39:42.658331 kernel: audit: type=1130 audit(1769193582.654:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.662208 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:39:42.735803 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:39:42.738712 systemd-modules-load[1110]: Inserted module 'br_netfilter' Jan 23 18:39:42.739193 kernel: Bridge firewalling registered Jan 23 18:39:42.740191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:39:42.745053 kernel: audit: type=1130 audit(1769193582.739:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.743241 systemd-tmpfiles[1119]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 18:39:42.744862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:39:42.746872 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 23 18:39:42.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.750938 kernel: audit: type=1130 audit(1769193582.746:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.752936 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:39:42.765518 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:39:42.771063 kernel: audit: type=1130 audit(1769193582.765:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.784026 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:39:42.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.789826 kernel: audit: type=1130 audit(1769193582.783:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.810410 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:39:42.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.814001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:39:42.810000 audit: BPF prog-id=6 op=LOAD Jan 23 18:39:42.815831 kernel: audit: type=1130 audit(1769193582.810:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.829238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:39:42.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.840967 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 18:39:42.861480 systemd-resolved[1132]: Positive Trust Anchors: Jan 23 18:39:42.861822 systemd-resolved[1132]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:39:42.861828 systemd-resolved[1132]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:39:42.862343 systemd-resolved[1132]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:39:42.882853 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:39:42.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.888185 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:39:42.896422 systemd-resolved[1132]: Defaulting to hostname 'linux'. Jan 23 18:39:42.898014 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:39:42.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:42.898488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:39:42.911089 dracut-cmdline[1146]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:39:43.017819 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:39:43.082808 kernel: iscsi: registered transport (tcp) Jan 23 18:39:43.127037 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:39:43.127093 kernel: QLogic iSCSI HBA Driver Jan 23 18:39:43.171380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:39:43.187233 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:39:43.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.191351 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:39:43.223488 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:39:43.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.227203 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 18:39:43.231909 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 18:39:43.264149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:39:43.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.266000 audit: BPF prog-id=7 op=LOAD Jan 23 18:39:43.267000 audit: BPF prog-id=8 op=LOAD Jan 23 18:39:43.268893 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:39:43.300448 systemd-udevd[1388]: Using default interface naming scheme 'v257'. Jan 23 18:39:43.312858 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:39:43.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.317094 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:39:43.325307 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:39:43.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.332000 audit: BPF prog-id=9 op=LOAD Jan 23 18:39:43.334898 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:39:43.345729 dracut-pre-trigger[1477]: rd.md=0: removing MD RAID activation Jan 23 18:39:43.370251 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:39:43.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.376901 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:39:43.388246 systemd-networkd[1485]: lo: Link UP Jan 23 18:39:43.388252 systemd-networkd[1485]: lo: Gained carrier Jan 23 18:39:43.389153 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:39:43.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.395526 systemd[1]: Reached target network.target - Network. Jan 23 18:39:43.420714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:39:43.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.429120 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 18:39:43.496017 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#112 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 18:39:43.514061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:39:43.522243 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 18:39:43.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:43.514127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:39:43.520877 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:39:43.533091 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:39:43.533128 kernel: hv_netvsc f8615163-0000-1000-2000-6045bde16520 (unnamed net_device) (uninitialized): VF slot 1 added Jan 23 18:39:43.534898 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:39:43.563481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:39:43.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.573975 systemd-networkd[1485]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:39:43.573982 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:39:43.574642 systemd-networkd[1485]: eth0: Link UP Jan 23 18:39:43.574777 systemd-networkd[1485]: eth0: Gained carrier Jan 23 18:39:43.574806 systemd-networkd[1485]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:39:43.592799 kernel: AES CTR mode by8 optimization enabled Jan 23 18:39:43.599916 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 18:39:43.688804 kernel: nvme nvme0: using unchecked data buffer Jan 23 18:39:43.769475 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 23 18:39:43.774906 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:39:43.868874 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jan 23 18:39:43.881545 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 23 18:39:43.902942 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jan 23 18:39:43.979272 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:39:43.984907 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 23 18:39:43.984937 kernel: audit: type=1130 audit(1769193583.978:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:43.985205 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:39:43.986566 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:39:43.988861 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:39:44.001923 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:39:44.022870 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 23 18:39:44.028887 kernel: audit: type=1130 audit(1769193584.022:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:44.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:44.552803 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jan 23 18:39:44.556926 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jan 23 18:39:44.557117 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jan 23 18:39:44.558463 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 18:39:44.562838 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jan 23 18:39:44.566901 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jan 23 18:39:44.570936 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jan 23 18:39:44.572840 kernel: pci 7870:00:00.0: enabling Extended Tags Jan 23 18:39:44.586421 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 18:39:44.586635 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jan 23 18:39:44.590900 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jan 23 18:39:44.608881 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jan 23 18:39:44.618803 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jan 23 18:39:44.622734 kernel: hv_netvsc f8615163-0000-1000-2000-6045bde16520 eth0: VF registering: eth1 Jan 23 18:39:44.622920 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jan 23 18:39:44.626480 systemd-networkd[1485]: eth1: Interface name change detected, renamed to enP30832s1. Jan 23 18:39:44.629066 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jan 23 18:39:44.723820 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 18:39:44.727327 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:39:44.727556 kernel: hv_netvsc f8615163-0000-1000-2000-6045bde16520 eth0: Data path switched to VF: enP30832s1 Jan 23 18:39:44.727876 systemd-networkd[1485]: enP30832s1: Link UP Jan 23 18:39:44.728062 systemd-networkd[1485]: enP30832s1: Gained carrier Jan 23 18:39:44.832991 systemd-networkd[1485]: eth0: Gained IPv6LL Jan 23 18:39:45.019661 disk-uuid[1662]: Warning: The kernel is still using the old partition table. Jan 23 18:39:45.019661 disk-uuid[1662]: The new table will be used at the next reboot or after you Jan 23 18:39:45.019661 disk-uuid[1662]: run partprobe(8) or kpartx(8) Jan 23 18:39:45.019661 disk-uuid[1662]: The operation has completed successfully. Jan 23 18:39:45.026352 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:39:45.026471 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:39:45.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:45.036912 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 23 18:39:45.043112 kernel: audit: type=1130 audit(1769193585.032:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:45.043140 kernel: audit: type=1131 audit(1769193585.032:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:45.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:45.089802 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1708) Jan 23 18:39:45.089840 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:39:45.092227 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:39:45.117982 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 18:39:45.118032 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 18:39:45.118942 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 18:39:45.124820 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:39:45.125357 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:39:45.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:45.129741 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 18:39:45.133975 kernel: audit: type=1130 audit(1769193585.128:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.057523 ignition[1727]: Ignition 2.24.0 Jan 23 18:39:46.057535 ignition[1727]: Stage: fetch-offline Jan 23 18:39:46.060330 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:39:46.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.067804 kernel: audit: type=1130 audit(1769193586.062:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.057669 ignition[1727]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:39:46.067023 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
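The BTRFS messages above (ssd optimizations, async discard, free space tree) correspond to ordinary mount options on the OEM partition. A hedged equivalent of mounting it by hand, with option names inferred from the kernel messages and the mount point assumed:

    mount -o ssd,discard=async,space_cache=v2 /dev/nvme0n1p6 /mnt/oem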
Jan 23 18:39:46.057677 ignition[1727]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:39:46.057773 ignition[1727]: parsed url from cmdline: "" Jan 23 18:39:46.057775 ignition[1727]: no config URL provided Jan 23 18:39:46.057780 ignition[1727]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:39:46.057803 ignition[1727]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:39:46.057809 ignition[1727]: failed to fetch config: resource requires networking Jan 23 18:39:46.059157 ignition[1727]: Ignition finished successfully Jan 23 18:39:46.088573 ignition[1734]: Ignition 2.24.0 Jan 23 18:39:46.088582 ignition[1734]: Stage: fetch Jan 23 18:39:46.088824 ignition[1734]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:39:46.088832 ignition[1734]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:39:46.088926 ignition[1734]: parsed url from cmdline: "" Jan 23 18:39:46.088928 ignition[1734]: no config URL provided Jan 23 18:39:46.088933 ignition[1734]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:39:46.088938 ignition[1734]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:39:46.088957 ignition[1734]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 18:39:46.157921 ignition[1734]: GET result: OK Jan 23 18:39:46.158020 ignition[1734]: config has been read from IMDS userdata Jan 23 18:39:46.158040 ignition[1734]: parsing config with SHA512: 48dee73e622c2d0b3063112d3655dc342e6aaa12397f0c659885d4af7b7cb809244322d4925220bc03fd6ab3239c42c96b97f828e5a90aace72d15c5bed648eb Jan 23 18:39:46.164030 unknown[1734]: fetched base config from "system" Jan 23 18:39:46.164041 unknown[1734]: fetched base config from "system" Jan 23 18:39:46.164367 ignition[1734]: fetch: fetch complete Jan 23 18:39:46.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.164045 unknown[1734]: fetched user config from "azure" Jan 23 18:39:46.164371 ignition[1734]: fetch: fetch passed Jan 23 18:39:46.166881 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 18:39:46.164407 ignition[1734]: Ignition finished successfully Jan 23 18:39:46.177506 kernel: audit: type=1130 audit(1769193586.170:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.172898 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 18:39:46.205326 ignition[1740]: Ignition 2.24.0 Jan 23 18:39:46.205337 ignition[1740]: Stage: kargs Jan 23 18:39:46.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.207773 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:39:46.216885 kernel: audit: type=1130 audit(1769193586.210:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.205560 ignition[1740]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:39:46.214934 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
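The fetch stage above pulls its Ignition config from the Azure instance metadata service's userData endpoint and verifies it by SHA512. A hedged sketch of the equivalent manual request; the Metadata header and the base64 decoding reflect standard IMDS behavior and are assumptions, not something shown in this log:

    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" \
      | base64 -d > user-data.ign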
Jan 23 18:39:46.205568 ignition[1740]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:39:46.206300 ignition[1740]: kargs: kargs passed Jan 23 18:39:46.206332 ignition[1740]: Ignition finished successfully Jan 23 18:39:46.236527 ignition[1746]: Ignition 2.24.0 Jan 23 18:39:46.236537 ignition[1746]: Stage: disks Jan 23 18:39:46.245877 kernel: audit: type=1130 audit(1769193586.240:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.238820 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:39:46.236775 ignition[1746]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:39:46.241545 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:39:46.236802 ignition[1746]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:39:46.250989 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:39:46.237646 ignition[1746]: disks: disks passed Jan 23 18:39:46.253570 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:39:46.237676 ignition[1746]: Ignition finished successfully Jan 23 18:39:46.256957 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:39:46.263460 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:39:46.270099 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:39:46.348235 systemd-fsck[1754]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 23 18:39:46.351671 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:39:46.361209 kernel: audit: type=1130 audit(1769193586.354:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:46.356931 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:39:46.624810 kernel: EXT4-fs (nvme0n1p9): mounted filesystem eebf2bdd-2461-4b18-9f37-721daf86511d r/w with ordered data mode. Quota mode: none. Jan 23 18:39:46.625069 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:39:46.627476 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:39:46.669020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:39:46.673148 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:39:46.687760 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 18:39:46.689687 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:39:46.689726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
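The fsck summary above is the usual e2fsck "clean" report: inodes in use out of total (15/6361680) and blocks in use out of total (408771/6359552). A hedged way to repeat the check read-only from a running system, using the label shown in the log (flags assumed):

    e2fsck -n -f /dev/disk/by-label/ROOT     # -n: no changes, -f: check even if marked clean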
Jan 23 18:39:46.703601 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1763) Jan 23 18:39:46.703628 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:39:46.696967 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:39:46.709250 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:39:46.709205 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:39:46.718232 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 18:39:46.718255 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 18:39:46.718266 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 18:39:46.719455 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:39:47.152408 coreos-metadata[1765]: Jan 23 18:39:47.152 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 18:39:47.156079 coreos-metadata[1765]: Jan 23 18:39:47.156 INFO Fetch successful Jan 23 18:39:47.157342 coreos-metadata[1765]: Jan 23 18:39:47.157 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 18:39:47.166715 coreos-metadata[1765]: Jan 23 18:39:47.166 INFO Fetch successful Jan 23 18:39:47.180908 coreos-metadata[1765]: Jan 23 18:39:47.180 INFO wrote hostname ci-4547.1.0-a-1dec04cda0 to /sysroot/etc/hostname Jan 23 18:39:47.182692 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 18:39:47.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:48.095434 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:39:48.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:48.098692 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:39:48.113094 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:39:48.136871 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:39:48.141900 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:39:48.153418 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:39:48.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:48.161431 ignition[1866]: INFO : Ignition 2.24.0 Jan 23 18:39:48.161431 ignition[1866]: INFO : Stage: mount Jan 23 18:39:48.163976 ignition[1866]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:39:48.163976 ignition[1866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:39:48.163976 ignition[1866]: INFO : mount: mount passed Jan 23 18:39:48.163976 ignition[1866]: INFO : Ignition finished successfully Jan 23 18:39:48.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:48.165198 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:39:48.168691 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:39:48.182891 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:39:48.206837 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1878) Jan 23 18:39:48.208860 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:39:48.208939 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:39:48.214851 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 18:39:48.214887 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 18:39:48.215849 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 18:39:48.217566 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:39:48.240173 ignition[1895]: INFO : Ignition 2.24.0 Jan 23 18:39:48.240173 ignition[1895]: INFO : Stage: files Jan 23 18:39:48.243664 ignition[1895]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:39:48.243664 ignition[1895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:39:48.243664 ignition[1895]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:39:48.269139 ignition[1895]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:39:48.269139 ignition[1895]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:39:48.316899 ignition[1895]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:39:48.319853 ignition[1895]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:39:48.319853 ignition[1895]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:39:48.317191 unknown[1895]: wrote ssh authorized keys file for user: core Jan 23 18:39:48.338775 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 18:39:48.341152 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 18:39:48.396633 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:39:48.469337 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 18:39:48.473854 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:39:48.473854 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:39:48.473854 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:39:48.473854 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:39:48.473854 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:39:48.473854 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 23 18:39:48.473854 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:39:48.473854 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:39:48.497817 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:39:48.497817 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:39:48.497817 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:39:48.497817 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:39:48.497817 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:39:48.497817 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 18:39:49.147273 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:39:50.666638 ignition[1895]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:39:50.666638 ignition[1895]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:39:50.705202 ignition[1895]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:39:50.713003 ignition[1895]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:39:50.713003 ignition[1895]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:39:50.713003 ignition[1895]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:39:50.737061 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 23 18:39:50.737131 kernel: audit: type=1130 audit(1769193590.719:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:50.737173 ignition[1895]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:39:50.737173 ignition[1895]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:39:50.737173 ignition[1895]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:39:50.737173 ignition[1895]: INFO : files: files passed Jan 23 18:39:50.737173 ignition[1895]: INFO : Ignition finished successfully Jan 23 18:39:50.716823 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:39:50.720817 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:39:50.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.732479 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:39:50.766887 kernel: audit: type=1130 audit(1769193590.755:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.766913 kernel: audit: type=1131 audit(1769193590.755:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.752774 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:39:50.752873 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:39:50.771736 initrd-setup-root-after-ignition[1927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:39:50.771736 initrd-setup-root-after-ignition[1927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:39:50.783060 kernel: audit: type=1130 audit(1769193590.774:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.783125 initrd-setup-root-after-ignition[1931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:39:50.774265 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:39:50.776145 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:39:50.786944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:39:50.822039 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:39:50.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:50.822124 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:39:50.833865 kernel: audit: type=1130 audit(1769193590.821:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.833889 kernel: audit: type=1131 audit(1769193590.821:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.822609 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:39:50.832839 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:39:50.836818 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:39:50.837375 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:39:50.863144 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:39:50.871928 kernel: audit: type=1130 audit(1769193590.862:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.865899 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:39:50.890272 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:39:50.890603 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:39:50.896548 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:39:50.901926 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:39:50.905683 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:39:50.913858 kernel: audit: type=1131 audit(1769193590.907:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.905845 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:39:50.913977 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:39:50.917943 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:39:50.922969 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:39:50.925915 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:39:50.927980 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:39:50.930407 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jan 23 18:39:50.934012 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:39:50.937927 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:39:50.940133 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:39:50.942915 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:39:50.946915 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:39:50.953877 kernel: audit: type=1131 audit(1769193590.949:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.948942 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:39:50.949062 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:39:50.954886 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:39:50.958137 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:39:50.969509 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:39:50.971677 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:39:50.977777 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:39:50.979151 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:39:50.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.988274 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:39:50.991505 kernel: audit: type=1131 audit(1769193590.982:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.988391 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:39:50.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:50.991883 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:39:50.991976 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:39:50.993988 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 18:39:50.994086 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 18:39:51.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.003740 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 23 18:39:51.005554 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:39:51.005696 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:39:51.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.019947 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:39:51.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.021427 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:39:51.021547 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:39:51.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.024428 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:39:51.024532 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:39:51.025175 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:39:51.025257 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:39:51.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.047935 ignition[1951]: INFO : Ignition 2.24.0 Jan 23 18:39:51.047935 ignition[1951]: INFO : Stage: umount Jan 23 18:39:51.047935 ignition[1951]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:39:51.047935 ignition[1951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:39:51.047935 ignition[1951]: INFO : umount: umount passed Jan 23 18:39:51.047935 ignition[1951]: INFO : Ignition finished successfully Jan 23 18:39:51.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:51.033237 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:39:51.033497 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:39:51.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.045694 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:39:51.045820 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:39:51.048051 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:39:51.048096 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:39:51.053163 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:39:51.053212 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:39:51.059031 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 18:39:51.059076 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 18:39:51.062625 systemd[1]: Stopped target network.target - Network. Jan 23 18:39:51.064393 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:39:51.064447 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:39:51.068870 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:39:51.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.072825 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:39:51.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.076831 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:39:51.078390 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:39:51.083633 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:39:51.088298 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:39:51.088334 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:39:51.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.090597 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:39:51.125000 audit: BPF prog-id=6 op=UNLOAD Jan 23 18:39:51.090626 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:39:51.093853 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 18:39:51.129000 audit: BPF prog-id=9 op=UNLOAD Jan 23 18:39:51.093873 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:39:51.097141 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:39:51.097217 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 23 18:39:51.100747 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:39:51.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.100804 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:39:51.104328 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:39:51.113900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:39:51.119691 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:39:51.120240 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:39:51.120334 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:39:51.122020 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:39:51.122104 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:39:51.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.124333 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:39:51.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.124751 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:39:51.125172 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:39:51.127157 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:39:51.134454 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:39:51.135271 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:39:51.141842 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:39:51.141887 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:39:51.142307 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:39:51.142337 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:39:51.143696 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:39:51.166848 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:39:51.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.167028 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:39:51.173680 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:39:51.173714 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:39:51.179872 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:39:51.179902 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:39:51.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:51.185755 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:39:51.185814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:39:51.192051 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:39:51.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.192119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:39:51.197883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:39:51.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.198647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:39:51.206248 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:39:51.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.209315 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:39:51.209372 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:39:51.209580 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:39:51.209615 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:39:51.210471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:39:51.234908 kernel: hv_netvsc f8615163-0000-1000-2000-6045bde16520 eth0: Data path switched from VF: enP30832s1 Jan 23 18:39:51.236326 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:39:51.210507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:39:51.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.229934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:39:51.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.230007 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jan 23 18:39:51.239145 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:39:51.239227 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:39:51.528060 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:39:51.528163 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:39:51.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.532146 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:39:51.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:51.535863 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:39:51.535926 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:39:51.540918 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:39:51.554817 systemd[1]: Switching root. Jan 23 18:39:51.641510 systemd-journald[1106]: Journal stopped Jan 23 18:39:55.329467 systemd-journald[1106]: Received SIGTERM from PID 1 (systemd). Jan 23 18:39:55.329494 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:39:55.329508 kernel: SELinux: policy capability open_perms=1 Jan 23 18:39:55.329518 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:39:55.329527 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:39:55.329536 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:39:55.329546 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:39:55.329555 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:39:55.329565 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:39:55.329574 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:39:55.329583 systemd[1]: Successfully loaded SELinux policy in 124.133ms. Jan 23 18:39:55.329593 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.069ms. Jan 23 18:39:55.329604 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:39:55.329617 systemd[1]: Detected virtualization microsoft. Jan 23 18:39:55.329627 systemd[1]: Detected architecture x86-64. Jan 23 18:39:55.329637 systemd[1]: Detected first boot. Jan 23 18:39:55.329647 systemd[1]: Hostname set to . Jan 23 18:39:55.329658 systemd[1]: Initializing machine ID from random generator. Jan 23 18:39:55.329668 zram_generator::config[1994]: No configuration found. Jan 23 18:39:55.329678 kernel: Guest personality initialized and is inactive Jan 23 18:39:55.329688 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 23 18:39:55.329697 kernel: Initialized host personality Jan 23 18:39:55.329706 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:39:55.329715 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:39:55.329727 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jan 23 18:39:55.329736 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:39:55.329746 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:39:55.329759 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:39:55.329769 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:39:55.329779 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:39:55.329802 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:39:55.329812 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:39:55.329820 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:39:55.329830 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:39:55.329840 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:39:55.329849 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:39:55.329860 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:39:55.329870 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:39:55.329880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:39:55.329891 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:39:55.329905 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:39:55.329915 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:39:55.329927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:39:55.329937 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:39:55.329946 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:39:55.329954 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:39:55.329964 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:39:55.329975 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:39:55.329998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:39:55.330009 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:39:55.330018 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 23 18:39:55.330027 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:39:55.330037 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:39:55.330046 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:39:55.330055 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:39:55.330067 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:39:55.330077 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:39:55.330087 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 23 18:39:55.330097 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 18:39:55.330302 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 18:39:55.330313 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 18:39:55.330369 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:39:55.330379 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:39:55.330435 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:39:55.330445 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:39:55.330518 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:39:55.330576 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:39:55.330586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:39:55.330643 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:39:55.330652 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:39:55.330711 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:39:55.330766 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:39:55.330778 systemd[1]: Reached target machines.target - Containers. Jan 23 18:39:55.330859 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:39:55.330872 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:39:55.330882 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:39:55.330894 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:39:55.330906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:39:55.330917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:39:55.330937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:39:55.330952 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:39:55.330964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:39:55.330978 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:39:55.330989 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:39:55.331001 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:39:55.331016 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:39:55.331032 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:39:55.331045 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:39:55.331058 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:39:55.331068 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 18:39:55.331080 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:39:55.331092 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:39:55.331106 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:39:55.331119 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:39:55.331129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:39:55.331140 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:39:55.331150 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:39:55.331162 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:39:55.331174 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:39:55.331187 kernel: fuse: init (API version 7.41) Jan 23 18:39:55.331224 systemd-journald[2077]: Collecting audit messages is enabled. Jan 23 18:39:55.331249 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:39:55.331265 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:39:55.331279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:39:55.331296 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:39:55.331310 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:39:55.331320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:39:55.331334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:39:55.331346 systemd-journald[2077]: Journal started Jan 23 18:39:55.331372 systemd-journald[2077]: Runtime Journal (/run/log/journal/5559e4e8f1fd461c8214e7c7a4dee2c5) is 8M, max 158.5M, 150.5M free. Jan 23 18:39:55.040000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 18:39:55.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.219000 audit: BPF prog-id=14 op=UNLOAD Jan 23 18:39:55.219000 audit: BPF prog-id=13 op=UNLOAD Jan 23 18:39:55.220000 audit: BPF prog-id=15 op=LOAD Jan 23 18:39:55.220000 audit: BPF prog-id=16 op=LOAD Jan 23 18:39:55.220000 audit: BPF prog-id=17 op=LOAD Jan 23 18:39:55.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:55.323000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 18:39:55.323000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc098a5400 a2=4000 a3=0 items=0 ppid=1 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:39:55.323000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 18:39:55.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:54.906903 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:39:55.333972 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:39:55.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:54.915238 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 18:39:54.915581 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:39:55.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.338181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:39:55.338601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:39:55.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.343199 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:39:55.343364 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:39:55.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:55.345378 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:39:55.345514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:39:55.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.349137 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:39:55.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.351366 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:39:55.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.354109 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:39:55.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.356721 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:39:55.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.368403 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:39:55.373930 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 23 18:39:55.380399 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:39:55.384507 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:39:55.386376 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:39:55.386407 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:39:55.390658 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:39:55.394437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:39:55.395902 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:39:55.400906 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:39:55.406609 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 23 18:39:55.408328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:39:55.410374 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:39:55.412880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:39:55.418890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:39:55.421914 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:39:55.425109 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:39:55.428045 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:39:55.441389 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:39:55.446982 kernel: ACPI: bus type drm_connector registered Jan 23 18:39:55.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.445029 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:39:55.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.448329 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:39:55.451959 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:39:55.452103 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:39:55.456626 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:39:55.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.460741 systemd-journald[2077]: Time spent on flushing to /var/log/journal/5559e4e8f1fd461c8214e7c7a4dee2c5 is 22.078ms for 1122 entries. Jan 23 18:39:55.460741 systemd-journald[2077]: System Journal (/var/log/journal/5559e4e8f1fd461c8214e7c7a4dee2c5) is 8M, max 2.2G, 2.2G free. Jan 23 18:39:55.588711 systemd-journald[2077]: Received client request to flush runtime journal. Jan 23 18:39:55.588763 kernel: loop1: detected capacity change from 0 to 50784 Jan 23 18:39:55.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.464008 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:39:55.466621 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:39:55.589582 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 23 18:39:55.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.595240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:39:55.635994 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:39:55.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.688662 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:39:55.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.689000 audit: BPF prog-id=18 op=LOAD Jan 23 18:39:55.690000 audit: BPF prog-id=19 op=LOAD Jan 23 18:39:55.690000 audit: BPF prog-id=20 op=LOAD Jan 23 18:39:55.691340 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 18:39:55.693000 audit: BPF prog-id=21 op=LOAD Jan 23 18:39:55.695909 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:39:55.698959 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:39:55.705000 audit: BPF prog-id=22 op=LOAD Jan 23 18:39:55.705000 audit: BPF prog-id=23 op=LOAD Jan 23 18:39:55.705000 audit: BPF prog-id=24 op=LOAD Jan 23 18:39:55.709000 audit: BPF prog-id=25 op=LOAD Jan 23 18:39:55.709000 audit: BPF prog-id=26 op=LOAD Jan 23 18:39:55.709000 audit: BPF prog-id=27 op=LOAD Jan 23 18:39:55.707894 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 18:39:55.711510 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:39:55.759664 systemd-nsresourced[2156]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 23 18:39:55.762128 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 18:39:55.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.767454 kernel: kauditd_printk_skb: 97 callbacks suppressed Jan 23 18:39:55.767704 kernel: audit: type=1130 audit(1769193595.764:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.773360 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 23 18:39:55.778813 kernel: audit: type=1130 audit(1769193595.774:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.778559 systemd-tmpfiles[2155]: ACLs are not supported, ignoring. Jan 23 18:39:55.778574 systemd-tmpfiles[2155]: ACLs are not supported, ignoring. Jan 23 18:39:55.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.787876 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:39:55.795821 kernel: audit: type=1130 audit(1769193595.789:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.857965 systemd-oomd[2153]: No swap; memory pressure usage will be degraded Jan 23 18:39:55.860137 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 18:39:55.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.866804 kernel: audit: type=1130 audit(1769193595.861:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:55.906350 systemd-resolved[2154]: Positive Trust Anchors: Jan 23 18:39:55.906365 systemd-resolved[2154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:39:55.906368 systemd-resolved[2154]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:39:55.906396 systemd-resolved[2154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:39:55.916729 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:39:55.926805 kernel: loop2: detected capacity change from 0 to 111560 Jan 23 18:39:56.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:56.016205 systemd-resolved[2154]: Using system hostname 'ci-4547.1.0-a-1dec04cda0'. Jan 23 18:39:56.017404 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:39:56.019860 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 18:39:56.024819 kernel: audit: type=1130 audit(1769193596.018:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:56.093804 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:39:56.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:56.095000 audit: BPF prog-id=8 op=UNLOAD Jan 23 18:39:56.101677 kernel: audit: type=1130 audit(1769193596.095:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:56.101850 kernel: audit: type=1334 audit(1769193596.095:153): prog-id=8 op=UNLOAD Jan 23 18:39:56.101874 kernel: audit: type=1334 audit(1769193596.095:154): prog-id=7 op=UNLOAD Jan 23 18:39:56.095000 audit: BPF prog-id=7 op=UNLOAD Jan 23 18:39:56.101663 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:39:56.104336 kernel: audit: type=1334 audit(1769193596.096:155): prog-id=28 op=LOAD Jan 23 18:39:56.104430 kernel: audit: type=1334 audit(1769193596.096:156): prog-id=29 op=LOAD Jan 23 18:39:56.096000 audit: BPF prog-id=28 op=LOAD Jan 23 18:39:56.096000 audit: BPF prog-id=29 op=LOAD Jan 23 18:39:56.128941 systemd-udevd[2176]: Using default interface naming scheme 'v257'. Jan 23 18:39:56.271364 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:39:56.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:56.274000 audit: BPF prog-id=30 op=LOAD Jan 23 18:39:56.277523 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:39:56.300990 kernel: loop3: detected capacity change from 0 to 224512 Jan 23 18:39:56.307887 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:39:56.363810 kernel: loop4: detected capacity change from 0 to 27728 Jan 23 18:39:56.376105 systemd-networkd[2182]: lo: Link UP Jan 23 18:39:56.376115 systemd-networkd[2182]: lo: Gained carrier Jan 23 18:39:56.378122 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:39:56.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:56.380485 systemd-networkd[2182]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:39:56.380602 systemd[1]: Reached target network.target - Network. Jan 23 18:39:56.384708 systemd-networkd[2182]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:39:56.391806 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 18:39:56.386809 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jan 23 18:39:56.389417 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:39:56.395804 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:39:56.398836 kernel: hv_netvsc f8615163-0000-1000-2000-6045bde16520 eth0: Data path switched to VF: enP30832s1 Jan 23 18:39:56.402641 systemd-networkd[2182]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:39:56.402702 systemd-networkd[2182]: enP30832s1: Link UP Jan 23 18:39:56.403150 systemd-networkd[2182]: eth0: Link UP Jan 23 18:39:56.403153 systemd-networkd[2182]: eth0: Gained carrier Jan 23 18:39:56.403165 systemd-networkd[2182]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:39:56.404800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#100 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 18:39:56.407300 systemd-networkd[2182]: enP30832s1: Gained carrier Jan 23 18:39:56.407796 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:39:56.412884 systemd-networkd[2182]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 18:39:56.420834 kernel: hv_vmbus: registering driver hv_balloon Jan 23 18:39:56.422809 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 18:39:56.428012 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 18:39:56.435839 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 18:39:56.435893 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 18:39:56.437247 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:39:56.438993 kernel: Console: switching to colour dummy device 80x25 Jan 23 18:39:56.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:56.442816 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 18:39:56.532293 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:39:56.606601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:39:56.607939 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:39:56.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:56.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:56.612883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:39:56.659842 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 23 18:39:56.694803 kernel: loop5: detected capacity change from 0 to 50784 Jan 23 18:39:56.706860 kernel: loop6: detected capacity change from 0 to 111560 Jan 23 18:39:56.708134 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. 
Jan 23 18:39:56.710907 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:39:56.724813 kernel: loop7: detected capacity change from 0 to 224512 Jan 23 18:39:56.735865 kernel: loop1: detected capacity change from 0 to 27728 Jan 23 18:39:56.745310 (sd-merge)[2262]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 23 18:39:56.749919 (sd-merge)[2262]: Merged extensions into '/usr'. Jan 23 18:39:56.752559 systemd[1]: Reload requested from client PID 2134 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:39:56.752570 systemd[1]: Reloading... Jan 23 18:39:56.809850 zram_generator::config[2300]: No configuration found. Jan 23 18:39:56.989336 systemd[1]: Reloading finished in 236 ms. Jan 23 18:39:57.016409 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:39:57.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.019218 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:39:57.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.023053 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:39:57.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.031543 systemd[1]: Starting ensure-sysext.service... Jan 23 18:39:57.034764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 23 18:39:57.038000 audit: BPF prog-id=31 op=LOAD Jan 23 18:39:57.047000 audit: BPF prog-id=21 op=UNLOAD Jan 23 18:39:57.047000 audit: BPF prog-id=32 op=LOAD Jan 23 18:39:57.047000 audit: BPF prog-id=33 op=LOAD Jan 23 18:39:57.047000 audit: BPF prog-id=28 op=UNLOAD Jan 23 18:39:57.047000 audit: BPF prog-id=29 op=UNLOAD Jan 23 18:39:57.048000 audit: BPF prog-id=34 op=LOAD Jan 23 18:39:57.048000 audit: BPF prog-id=25 op=UNLOAD Jan 23 18:39:57.049000 audit: BPF prog-id=35 op=LOAD Jan 23 18:39:57.049000 audit: BPF prog-id=36 op=LOAD Jan 23 18:39:57.049000 audit: BPF prog-id=26 op=UNLOAD Jan 23 18:39:57.049000 audit: BPF prog-id=27 op=UNLOAD Jan 23 18:39:57.049000 audit: BPF prog-id=37 op=LOAD Jan 23 18:39:57.049000 audit: BPF prog-id=30 op=UNLOAD Jan 23 18:39:57.050000 audit: BPF prog-id=38 op=LOAD Jan 23 18:39:57.050000 audit: BPF prog-id=15 op=UNLOAD Jan 23 18:39:57.050000 audit: BPF prog-id=39 op=LOAD Jan 23 18:39:57.050000 audit: BPF prog-id=40 op=LOAD Jan 23 18:39:57.050000 audit: BPF prog-id=16 op=UNLOAD Jan 23 18:39:57.050000 audit: BPF prog-id=17 op=UNLOAD Jan 23 18:39:57.050000 audit: BPF prog-id=41 op=LOAD Jan 23 18:39:57.050000 audit: BPF prog-id=18 op=UNLOAD Jan 23 18:39:57.050000 audit: BPF prog-id=42 op=LOAD Jan 23 18:39:57.050000 audit: BPF prog-id=43 op=LOAD Jan 23 18:39:57.050000 audit: BPF prog-id=19 op=UNLOAD Jan 23 18:39:57.050000 audit: BPF prog-id=20 op=UNLOAD Jan 23 18:39:57.050000 audit: BPF prog-id=44 op=LOAD Jan 23 18:39:57.050000 audit: BPF prog-id=22 op=UNLOAD Jan 23 18:39:57.050000 audit: BPF prog-id=45 op=LOAD Jan 23 18:39:57.050000 audit: BPF prog-id=46 op=LOAD Jan 23 18:39:57.050000 audit: BPF prog-id=23 op=UNLOAD Jan 23 18:39:57.050000 audit: BPF prog-id=24 op=UNLOAD Jan 23 18:39:57.057188 systemd[1]: Reload requested from client PID 2361 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:39:57.057259 systemd[1]: Reloading... Jan 23 18:39:57.060599 systemd-tmpfiles[2362]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:39:57.060622 systemd-tmpfiles[2362]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:39:57.060848 systemd-tmpfiles[2362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:39:57.061749 systemd-tmpfiles[2362]: ACLs are not supported, ignoring. Jan 23 18:39:57.061819 systemd-tmpfiles[2362]: ACLs are not supported, ignoring. Jan 23 18:39:57.112071 systemd-tmpfiles[2362]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:39:57.112079 systemd-tmpfiles[2362]: Skipping /boot Jan 23 18:39:57.120449 systemd-tmpfiles[2362]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:39:57.120462 systemd-tmpfiles[2362]: Skipping /boot Jan 23 18:39:57.121810 zram_generator::config[2399]: No configuration found. Jan 23 18:39:57.282487 systemd[1]: Reloading finished in 224 ms. 
Jan 23 18:39:57.294000 audit: BPF prog-id=47 op=LOAD Jan 23 18:39:57.294000 audit: BPF prog-id=38 op=UNLOAD Jan 23 18:39:57.294000 audit: BPF prog-id=48 op=LOAD Jan 23 18:39:57.294000 audit: BPF prog-id=49 op=LOAD Jan 23 18:39:57.294000 audit: BPF prog-id=39 op=UNLOAD Jan 23 18:39:57.294000 audit: BPF prog-id=40 op=UNLOAD Jan 23 18:39:57.295000 audit: BPF prog-id=50 op=LOAD Jan 23 18:39:57.295000 audit: BPF prog-id=37 op=UNLOAD Jan 23 18:39:57.296000 audit: BPF prog-id=51 op=LOAD Jan 23 18:39:57.296000 audit: BPF prog-id=44 op=UNLOAD Jan 23 18:39:57.296000 audit: BPF prog-id=52 op=LOAD Jan 23 18:39:57.296000 audit: BPF prog-id=53 op=LOAD Jan 23 18:39:57.296000 audit: BPF prog-id=45 op=UNLOAD Jan 23 18:39:57.296000 audit: BPF prog-id=46 op=UNLOAD Jan 23 18:39:57.296000 audit: BPF prog-id=54 op=LOAD Jan 23 18:39:57.296000 audit: BPF prog-id=55 op=LOAD Jan 23 18:39:57.296000 audit: BPF prog-id=32 op=UNLOAD Jan 23 18:39:57.296000 audit: BPF prog-id=33 op=UNLOAD Jan 23 18:39:57.297000 audit: BPF prog-id=56 op=LOAD Jan 23 18:39:57.297000 audit: BPF prog-id=34 op=UNLOAD Jan 23 18:39:57.297000 audit: BPF prog-id=57 op=LOAD Jan 23 18:39:57.297000 audit: BPF prog-id=58 op=LOAD Jan 23 18:39:57.297000 audit: BPF prog-id=35 op=UNLOAD Jan 23 18:39:57.297000 audit: BPF prog-id=36 op=UNLOAD Jan 23 18:39:57.298000 audit: BPF prog-id=59 op=LOAD Jan 23 18:39:57.298000 audit: BPF prog-id=41 op=UNLOAD Jan 23 18:39:57.298000 audit: BPF prog-id=60 op=LOAD Jan 23 18:39:57.298000 audit: BPF prog-id=61 op=LOAD Jan 23 18:39:57.298000 audit: BPF prog-id=42 op=UNLOAD Jan 23 18:39:57.298000 audit: BPF prog-id=43 op=UNLOAD Jan 23 18:39:57.303000 audit: BPF prog-id=62 op=LOAD Jan 23 18:39:57.303000 audit: BPF prog-id=31 op=UNLOAD Jan 23 18:39:57.306161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:39:57.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.313775 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:39:57.317998 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:39:57.322975 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:39:57.326977 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:39:57.338738 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:39:57.344760 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:39:57.345196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:39:57.348773 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:39:57.352231 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:39:57.355039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:39:57.356919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:39:57.357079 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 23 18:39:57.357163 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:39:57.357242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:39:57.360808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:39:57.360969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:39:57.361133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:39:57.361269 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:39:57.361363 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:39:57.361454 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:39:57.362000 audit[2460]: SYSTEM_BOOT pid=2460 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.367579 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:39:57.367750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:39:57.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.374061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:39:57.375764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:39:57.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.379557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:39:57.379843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:39:57.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:57.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.385599 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:39:57.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.390744 systemd[1]: Finished ensure-sysext.service. Jan 23 18:39:57.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.393622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:39:57.393781 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:39:57.394545 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:39:57.398846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:39:57.402027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:39:57.402337 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:39:57.402369 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:39:57.402404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:39:57.402437 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:39:57.403898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:39:57.409274 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:39:57.409442 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:39:57.412050 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:39:57.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.412224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:39:57.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:39:57.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.413852 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:39:57.507878 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:39:57.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:39:57.609000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 18:39:57.609000 audit[2493]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe60fefc60 a2=420 a3=0 items=0 ppid=2456 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:39:57.609000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:39:57.610307 augenrules[2493]: No rules Jan 23 18:39:57.611154 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:39:57.611382 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:39:57.696916 systemd-networkd[2182]: eth0: Gained IPv6LL Jan 23 18:39:57.698775 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:39:57.700578 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:39:57.939702 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:39:57.941399 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:40:01.758377 ldconfig[2458]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:40:01.767758 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:40:01.771981 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:40:01.792763 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:40:01.796333 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:40:01.797562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:40:01.800889 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:40:01.803830 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:40:01.806938 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:40:01.809894 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:40:01.812851 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 18:40:01.815875 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Jan 23 18:40:01.817023 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:40:01.818316 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:40:01.818334 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:40:01.819189 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:40:01.837896 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:40:01.842863 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:40:01.848275 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:40:01.849759 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:40:01.851327 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:40:01.855104 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:40:01.870036 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:40:01.875334 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:40:01.878553 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:40:01.879587 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:40:01.881864 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:40:01.881888 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:40:01.883317 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 18:40:01.888256 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:40:01.893593 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 18:40:01.900235 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:40:01.906633 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:40:01.912943 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:40:01.916427 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:40:01.919879 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:40:01.920962 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:40:01.923876 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 23 18:40:01.925902 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 18:40:01.927431 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 18:40:01.934165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:40:01.938280 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:40:01.943135 KVP[2517]: KVP starting; pid is:2517 Jan 23 18:40:01.943415 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 23 18:40:01.946030 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:40:01.951979 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:40:01.954012 kernel: hv_utils: KVP IC version 4.0 Jan 23 18:40:01.953877 KVP[2517]: KVP LIC Version: 3.1 Jan 23 18:40:01.955661 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:40:01.961381 jq[2511]: false Jan 23 18:40:01.965666 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:40:01.968913 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:40:01.973193 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:40:01.975082 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:40:01.980935 chronyd[2506]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 18:40:01.981670 extend-filesystems[2515]: Found /dev/nvme0n1p6 Jan 23 18:40:01.982431 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:40:01.993220 google_oslogin_nss_cache[2516]: oslogin_cache_refresh[2516]: Refreshing passwd entry cache Jan 23 18:40:01.989353 oslogin_cache_refresh[2516]: Refreshing passwd entry cache Jan 23 18:40:01.994156 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:40:01.996249 chronyd[2506]: Timezone right/UTC failed leap second check, ignoring Jan 23 18:40:01.996516 chronyd[2506]: Loaded seccomp filter (level 2) Jan 23 18:40:01.998037 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 18:40:02.002002 extend-filesystems[2515]: Found /dev/nvme0n1p9 Jan 23 18:40:02.001242 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:40:02.007597 extend-filesystems[2515]: Checking size of /dev/nvme0n1p9 Jan 23 18:40:02.001430 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:40:02.004296 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:40:02.008127 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:40:02.012633 google_oslogin_nss_cache[2516]: oslogin_cache_refresh[2516]: Failure getting users, quitting Jan 23 18:40:02.012633 google_oslogin_nss_cache[2516]: oslogin_cache_refresh[2516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:40:02.012633 google_oslogin_nss_cache[2516]: oslogin_cache_refresh[2516]: Refreshing group entry cache Jan 23 18:40:02.012725 jq[2531]: true Jan 23 18:40:02.011950 oslogin_cache_refresh[2516]: Failure getting users, quitting Jan 23 18:40:02.011965 oslogin_cache_refresh[2516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:40:02.012007 oslogin_cache_refresh[2516]: Refreshing group entry cache Jan 23 18:40:02.038076 jq[2552]: true Jan 23 18:40:02.037398 oslogin_cache_refresh[2516]: Failure getting groups, quitting Jan 23 18:40:02.038284 google_oslogin_nss_cache[2516]: oslogin_cache_refresh[2516]: Failure getting groups, quitting Jan 23 18:40:02.038284 google_oslogin_nss_cache[2516]: oslogin_cache_refresh[2516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 23 18:40:02.038335 extend-filesystems[2515]: Resized partition /dev/nvme0n1p9 Jan 23 18:40:02.037406 oslogin_cache_refresh[2516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:40:02.043262 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:40:02.044076 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:40:02.046231 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:40:02.047026 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:40:02.064059 extend-filesystems[2568]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:40:02.070962 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 23 18:40:02.071009 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 23 18:40:02.078863 update_engine[2530]: I20260123 18:40:02.076442 2530 main.cc:92] Flatcar Update Engine starting Jan 23 18:40:02.096424 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:40:02.109338 extend-filesystems[2568]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 18:40:02.109338 extend-filesystems[2568]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 23 18:40:02.109338 extend-filesystems[2568]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 23 18:40:02.126613 extend-filesystems[2515]: Resized filesystem in /dev/nvme0n1p9 Jan 23 18:40:02.112131 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:40:02.112610 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:40:02.129817 tar[2542]: linux-amd64/LICENSE Jan 23 18:40:02.129817 tar[2542]: linux-amd64/helm Jan 23 18:40:02.181472 systemd-logind[2527]: New seat seat0. Jan 23 18:40:02.188038 systemd-logind[2527]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 23 18:40:02.188221 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:40:02.191956 bash[2594]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:40:02.198208 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:40:02.212216 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 18:40:02.247872 dbus-daemon[2509]: [system] SELinux support is enabled Jan 23 18:40:02.248103 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:40:02.254510 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:40:02.254616 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:40:02.258916 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:40:02.258936 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:40:02.264960 dbus-daemon[2509]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 18:40:02.266810 update_engine[2530]: I20260123 18:40:02.266641 2530 update_check_scheduler.cc:74] Next update check in 8m1s Jan 23 18:40:02.267427 systemd[1]: Started update-engine.service - Update Engine. 
Jan 23 18:40:02.279566 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:40:02.384031 coreos-metadata[2508]: Jan 23 18:40:02.383 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 18:40:02.390739 coreos-metadata[2508]: Jan 23 18:40:02.390 INFO Fetch successful Jan 23 18:40:02.392022 coreos-metadata[2508]: Jan 23 18:40:02.391 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 18:40:02.396454 coreos-metadata[2508]: Jan 23 18:40:02.396 INFO Fetch successful Jan 23 18:40:02.396969 coreos-metadata[2508]: Jan 23 18:40:02.396 INFO Fetching http://168.63.129.16/machine/2e1fb2e0-bf70-43d6-92a7-0f12fcecbd76/5b96c5b9%2D0f5c%2D4f1e%2D8c9c%2D4d6c133ac514.%5Fci%2D4547.1.0%2Da%2D1dec04cda0?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 18:40:02.398986 coreos-metadata[2508]: Jan 23 18:40:02.398 INFO Fetch successful Jan 23 18:40:02.398986 coreos-metadata[2508]: Jan 23 18:40:02.398 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 18:40:02.414566 coreos-metadata[2508]: Jan 23 18:40:02.412 INFO Fetch successful Jan 23 18:40:02.500820 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 18:40:02.508863 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:40:02.571122 sshd_keygen[2533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:40:02.594860 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:40:02.600987 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:40:02.603841 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 18:40:02.629941 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:40:02.630184 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:40:02.634011 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:40:02.653909 locksmithd[2620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:40:02.660265 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 18:40:02.663609 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:40:02.673045 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:40:02.678590 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:40:02.681169 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:40:02.814612 tar[2542]: linux-amd64/README.md Jan 23 18:40:02.829242 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 23 18:40:02.979839 containerd[2553]: time="2026-01-23T18:40:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:40:02.980391 containerd[2553]: time="2026-01-23T18:40:02.980366173Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990259839Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.215µs" Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990288107Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990322554Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990334800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990442828Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990454697Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990495467Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990504560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990682767Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990692824Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990701948Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:40:02.990800 containerd[2553]: time="2026-01-23T18:40:02.990709966Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:40:02.991097 containerd[2553]: time="2026-01-23T18:40:02.991081064Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:40:02.991144 containerd[2553]: time="2026-01-23T18:40:02.991097518Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:40:02.991166 containerd[2553]: time="2026-01-23T18:40:02.991157360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 
23 18:40:02.992422 containerd[2553]: time="2026-01-23T18:40:02.991295544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:40:02.992422 containerd[2553]: time="2026-01-23T18:40:02.991320300Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:40:02.992422 containerd[2553]: time="2026-01-23T18:40:02.991330063Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:40:02.992422 containerd[2553]: time="2026-01-23T18:40:02.991364097Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:40:02.992422 containerd[2553]: time="2026-01-23T18:40:02.991600752Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:40:02.992422 containerd[2553]: time="2026-01-23T18:40:02.991638542Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:40:03.003954 containerd[2553]: time="2026-01-23T18:40:03.003676494Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:40:03.003954 containerd[2553]: time="2026-01-23T18:40:03.003737497Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:40:03.004122 containerd[2553]: time="2026-01-23T18:40:03.004099764Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:40:03.004122 containerd[2553]: time="2026-01-23T18:40:03.004115964Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:40:03.004169 containerd[2553]: time="2026-01-23T18:40:03.004130072Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:40:03.004169 containerd[2553]: time="2026-01-23T18:40:03.004140791Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:40:03.004169 containerd[2553]: time="2026-01-23T18:40:03.004151134Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:40:03.004169 containerd[2553]: time="2026-01-23T18:40:03.004160224Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:40:03.004238 containerd[2553]: time="2026-01-23T18:40:03.004171955Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:40:03.004238 containerd[2553]: time="2026-01-23T18:40:03.004182946Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:40:03.004238 containerd[2553]: time="2026-01-23T18:40:03.004192969Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:40:03.004238 containerd[2553]: time="2026-01-23T18:40:03.004203179Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:40:03.004238 containerd[2553]: time="2026-01-23T18:40:03.004212674Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:40:03.004238 containerd[2553]: time="2026-01-23T18:40:03.004229077Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:40:03.004347 containerd[2553]: time="2026-01-23T18:40:03.004321585Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:40:03.004347 containerd[2553]: time="2026-01-23T18:40:03.004337443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:40:03.004383 containerd[2553]: time="2026-01-23T18:40:03.004350595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:40:03.004383 containerd[2553]: time="2026-01-23T18:40:03.004360398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:40:03.004383 containerd[2553]: time="2026-01-23T18:40:03.004370244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:40:03.004383 containerd[2553]: time="2026-01-23T18:40:03.004378804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:40:03.004461 containerd[2553]: time="2026-01-23T18:40:03.004394249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:40:03.004461 containerd[2553]: time="2026-01-23T18:40:03.004407820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:40:03.004461 containerd[2553]: time="2026-01-23T18:40:03.004419189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:40:03.004461 containerd[2553]: time="2026-01-23T18:40:03.004427992Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:40:03.004461 containerd[2553]: time="2026-01-23T18:40:03.004436280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:40:03.004461 containerd[2553]: time="2026-01-23T18:40:03.004456165Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:40:03.004559 containerd[2553]: time="2026-01-23T18:40:03.004495093Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:40:03.004559 containerd[2553]: time="2026-01-23T18:40:03.004505620Z" level=info msg="Start snapshots syncer" Jan 23 18:40:03.004559 containerd[2553]: time="2026-01-23T18:40:03.004522885Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:40:03.005482 containerd[2553]: time="2026-01-23T18:40:03.004732930Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:40:03.005482 containerd[2553]: time="2026-01-23T18:40:03.004778080Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004831870Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004898378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004927051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004937815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004948571Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004959316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004968824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004978598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004987124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 
18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.004995800Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.005013258Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.005023356Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:40:03.005656 containerd[2553]: time="2026-01-23T18:40:03.005030823Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005038398Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005045517Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005054890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005064655Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005074400Z" level=info msg="runtime interface created" Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005079588Z" level=info msg="created NRI interface" Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005090832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005100179Z" level=info msg="Connect containerd service" Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005117804Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:40:03.005901 containerd[2553]: time="2026-01-23T18:40:03.005889804Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:40:03.323431 containerd[2553]: time="2026-01-23T18:40:03.322909678Z" level=info msg="Start subscribing containerd event" Jan 23 18:40:03.323431 containerd[2553]: time="2026-01-23T18:40:03.322987674Z" level=info msg="Start recovering state" Jan 23 18:40:03.323431 containerd[2553]: time="2026-01-23T18:40:03.323089567Z" level=info msg="Start event monitor" Jan 23 18:40:03.323431 containerd[2553]: time="2026-01-23T18:40:03.323101302Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:40:03.323431 containerd[2553]: time="2026-01-23T18:40:03.323111558Z" level=info msg="Start streaming server" Jan 23 18:40:03.323431 containerd[2553]: time="2026-01-23T18:40:03.323129271Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:40:03.323431 containerd[2553]: time="2026-01-23T18:40:03.323136088Z" level=info msg="runtime interface starting up..." 
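
The CRI plugin error just above ("failed to load cni during init ... no network config found in /etc/cni/net.d") is the normal first-boot state: containerd's cni network conf syncer keeps retrying until a later bootstrap step drops a network config into that directory. As a hedged illustration only, the sketch below writes a minimal .conflist of the kind the loader accepts; the network name, bridge device, subnet, and the choice of the standard bridge/host-local/portmap plugins are assumptions, not something this log confirms.

    # Hedged sketch: write a minimal CNI .conflist so containerd's
    # "cni network conf syncer" finds a config in /etc/cni/net.d.
    # Every name and address below is an illustrative assumption.
    import json, pathlib

    conf = {
        "cniVersion": "1.0.0",
        "name": "mynet",                      # hypothetical network name
        "plugins": [
            {
                "type": "bridge",             # standard CNI bridge plugin
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local",
                         "subnet": "10.244.0.0/24"},   # illustrative pod subnet
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-mynet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))
    print("wrote", path)

On this node the directory is presumably left empty on purpose; the cluster's CNI add-on is expected to install the real config once the kubelet is bootstrapped.
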
Jan 23 18:40:03.323431 containerd[2553]: time="2026-01-23T18:40:03.323141625Z" level=info msg="starting plugins..." Jan 23 18:40:03.323431 containerd[2553]: time="2026-01-23T18:40:03.323152789Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:40:03.324439 containerd[2553]: time="2026-01-23T18:40:03.323906652Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:40:03.324439 containerd[2553]: time="2026-01-23T18:40:03.323947649Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:40:03.324194 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:40:03.329806 containerd[2553]: time="2026-01-23T18:40:03.328863533Z" level=info msg="containerd successfully booted in 0.350073s" Jan 23 18:40:03.385616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:40:03.388916 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:40:03.393849 systemd[1]: Startup finished in 3.946s (kernel) + 10.627s (initrd) + 10.760s (userspace) = 25.334s. Jan 23 18:40:03.398029 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:40:03.919566 kubelet[2684]: E0123 18:40:03.919531 2684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:40:03.922997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:40:03.923131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:40:03.923556 systemd[1]: kubelet.service: Consumed 845ms CPU time, 264.5M memory peak. 
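
The kubelet exit above is likewise expected at this stage: the unit starts before anything has written /var/lib/kubelet/config.yaml (normally created by kubeadm during init/join), so it fails with status 1 and systemd keeps rescheduling it, which produces the "Scheduled restart job, restart counter is at N" entries later in this log. A small, hedged way to watch that crash loop from the node, assuming only that systemd's systemctl is available:

    # Hedged sketch: report why kubelet.service keeps restarting.
    # Assumes systemd and that the unit name is exactly "kubelet.service".
    import os, subprocess

    cfg = "/var/lib/kubelet/config.yaml"
    print("kubelet config present:", os.path.exists(cfg))   # stays False until kubeadm writes it

    out = subprocess.run(
        ["systemctl", "show", "kubelet.service",
         "-p", "NRestarts", "-p", "Result", "-p", "ExecMainStatus"],
        capture_output=True, text=True, check=False,
    )
    print(out.stdout.strip())   # e.g. NRestarts=3 / Result=exit-code / ExecMainStatus=1
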
Jan 23 18:40:03.992911 waagent[2658]: 2026-01-23T18:40:03.992850Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.993496Z INFO Daemon Daemon OS: flatcar 4547.1.0 Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.993830Z INFO Daemon Daemon Python: 3.11.13 Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.994291Z INFO Daemon Daemon Run daemon Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.994459Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.1.0' Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.994619Z INFO Daemon Daemon Using waagent for provisioning Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.994758Z INFO Daemon Daemon Activate resource disk Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.994875Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.996304Z INFO Daemon Daemon Found device: None Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.996711Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.996772Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.997399Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 18:40:03.999117 waagent[2658]: 2026-01-23T18:40:03.997882Z INFO Daemon Daemon Running default provisioning handler Jan 23 18:40:04.013679 waagent[2658]: 2026-01-23T18:40:04.013632Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 18:40:04.014127 waagent[2658]: 2026-01-23T18:40:04.014090Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 18:40:04.014210 waagent[2658]: 2026-01-23T18:40:04.014189Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 18:40:04.014260 waagent[2658]: 2026-01-23T18:40:04.014243Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 18:40:04.016381 login[2660]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:40:04.017274 login[2661]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:40:04.033000 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:40:04.033764 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:40:04.038704 systemd-logind[2527]: New session 2 of user core. Jan 23 18:40:04.042406 systemd-logind[2527]: New session 1 of user core. Jan 23 18:40:04.063020 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:40:04.065261 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:40:04.080260 (systemd)[2708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:40:04.081901 systemd-logind[2527]: New session 3 of user core. Jan 23 18:40:04.100808 waagent[2658]: 2026-01-23T18:40:04.098217Z INFO Daemon Daemon Successfully mounted dvd Jan 23 18:40:04.121263 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 23 18:40:04.124829 waagent[2658]: 2026-01-23T18:40:04.124377Z INFO Daemon Daemon Detect protocol endpoint Jan 23 18:40:04.124935 waagent[2658]: 2026-01-23T18:40:04.124903Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 18:40:04.125170 waagent[2658]: 2026-01-23T18:40:04.125149Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 18:40:04.125371 waagent[2658]: 2026-01-23T18:40:04.125356Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 18:40:04.125695 waagent[2658]: 2026-01-23T18:40:04.125678Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 18:40:04.125915 waagent[2658]: 2026-01-23T18:40:04.125894Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 18:40:04.135569 waagent[2658]: 2026-01-23T18:40:04.135393Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 18:40:04.137594 waagent[2658]: 2026-01-23T18:40:04.137571Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 18:40:04.139559 waagent[2658]: 2026-01-23T18:40:04.139475Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 18:40:04.220953 systemd[2708]: Queued start job for default target default.target. Jan 23 18:40:04.227971 systemd[2708]: Created slice app.slice - User Application Slice. Jan 23 18:40:04.228167 systemd[2708]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 18:40:04.228183 systemd[2708]: Reached target paths.target - Paths. Jan 23 18:40:04.228221 systemd[2708]: Reached target timers.target - Timers. Jan 23 18:40:04.229905 systemd[2708]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:40:04.232919 systemd[2708]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 18:40:04.243746 systemd[2708]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 18:40:04.244708 waagent[2658]: 2026-01-23T18:40:04.243773Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 18:40:04.244708 waagent[2658]: 2026-01-23T18:40:04.244272Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 18:40:04.247763 waagent[2658]: 2026-01-23T18:40:04.247736Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 18:40:04.250417 systemd[2708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:40:04.250506 systemd[2708]: Reached target sockets.target - Sockets. Jan 23 18:40:04.250541 systemd[2708]: Reached target basic.target - Basic System. Jan 23 18:40:04.250568 systemd[2708]: Reached target default.target - Main User Target. Jan 23 18:40:04.250589 systemd[2708]: Startup finished in 165ms. Jan 23 18:40:04.250725 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:40:04.253987 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:40:04.255021 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 18:40:04.260978 waagent[2658]: 2026-01-23T18:40:04.260948Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 18:40:04.263279 waagent[2658]: 2026-01-23T18:40:04.263242Z INFO Daemon Jan 23 18:40:04.264655 waagent[2658]: 2026-01-23T18:40:04.264378Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: c2f2c147-5479-43a6-ae11-541cb7110507 eTag: 16240273927685262936 source: Fabric] Jan 23 18:40:04.267887 waagent[2658]: 2026-01-23T18:40:04.267851Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
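
The protocol negotiation above ("Fabric preferred wire protocol version:2015-04-05" ... "Wire protocol version:2012-11-30") is the daemon asking the Azure WireServer at 168.63.129.16 which versions it supports and then settling on 2012-11-30. A hedged sketch of the same probe; ?comp=versions is the WireServer's version-listing query, but treat the exact response shape as an assumption, and note this only works from inside an Azure VM with the route that was just verified:

    # Hedged sketch: ask the Azure WireServer which wire-protocol versions
    # it supports (what the daemon calls "Detect protocol endpoint").
    import urllib.request

    WIRESERVER = "168.63.129.16"
    with urllib.request.urlopen(f"http://{WIRESERVER}/?comp=versions", timeout=5) as resp:
        print(resp.read().decode())   # XML listing of supported wire protocol versions
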
Jan 23 18:40:04.270012 waagent[2658]: 2026-01-23T18:40:04.269972Z INFO Daemon Jan 23 18:40:04.270660 waagent[2658]: 2026-01-23T18:40:04.270602Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 18:40:04.286003 waagent[2658]: 2026-01-23T18:40:04.285981Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 18:40:04.383148 waagent[2658]: 2026-01-23T18:40:04.383106Z INFO Daemon Downloaded certificate {'thumbprint': '6923508F4B702833457652E1EF7A801F6CA91566', 'hasPrivateKey': True} Jan 23 18:40:04.385199 waagent[2658]: 2026-01-23T18:40:04.385166Z INFO Daemon Fetch goal state completed Jan 23 18:40:04.398762 waagent[2658]: 2026-01-23T18:40:04.398732Z INFO Daemon Daemon Starting provisioning Jan 23 18:40:04.399449 waagent[2658]: 2026-01-23T18:40:04.399236Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 18:40:04.400135 waagent[2658]: 2026-01-23T18:40:04.399962Z INFO Daemon Daemon Set hostname [ci-4547.1.0-a-1dec04cda0] Jan 23 18:40:04.417414 waagent[2658]: 2026-01-23T18:40:04.417381Z INFO Daemon Daemon Publish hostname [ci-4547.1.0-a-1dec04cda0] Jan 23 18:40:04.418544 waagent[2658]: 2026-01-23T18:40:04.418477Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 18:40:04.419699 waagent[2658]: 2026-01-23T18:40:04.419673Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 18:40:04.425614 systemd-networkd[2182]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:40:04.425621 systemd-networkd[2182]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:40:04.425669 systemd-networkd[2182]: eth0: DHCP lease lost Jan 23 18:40:04.441880 waagent[2658]: 2026-01-23T18:40:04.441839Z INFO Daemon Daemon Create user account if not exists Jan 23 18:40:04.442937 waagent[2658]: 2026-01-23T18:40:04.442906Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 18:40:04.443294 waagent[2658]: 2026-01-23T18:40:04.443121Z INFO Daemon Daemon Configure sudoer Jan 23 18:40:04.448420 waagent[2658]: 2026-01-23T18:40:04.448384Z INFO Daemon Daemon Configure sshd Jan 23 18:40:04.450841 systemd-networkd[2182]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 18:40:04.451842 waagent[2658]: 2026-01-23T18:40:04.451804Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 18:40:04.452260 waagent[2658]: 2026-01-23T18:40:04.452093Z INFO Daemon Daemon Deploy ssh public key. Jan 23 18:40:05.566470 waagent[2658]: 2026-01-23T18:40:05.566420Z INFO Daemon Daemon Provisioning complete Jan 23 18:40:05.581094 waagent[2658]: 2026-01-23T18:40:05.581056Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 18:40:05.582355 waagent[2658]: 2026-01-23T18:40:05.582326Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
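
"Configure sshd" above refers to waagent dropping in a snippet that disables password-based SSH authentication and turns on client keepalive probing. The log does not show the snippet itself, so everything below (the drop-in path and every directive value) is an illustrative guess at what such a snippet looks like, not the agent's actual output:

    # Hedged sketch: an sshd drop-in of the kind described above.
    # Path and directive values are assumptions for illustration only.
    import pathlib

    snippet = (
        "PasswordAuthentication no\n"
        "KbdInteractiveAuthentication no\n"
        "ClientAliveInterval 180\n"
        "ClientAliveCountMax 2\n"
    )
    dropin = pathlib.Path("/etc/ssh/sshd_config.d/90-waagent-provisioning.conf")  # hypothetical name
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(snippet)
    print("wrote", dropin)
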
Jan 23 18:40:05.584267 waagent[2658]: 2026-01-23T18:40:05.584195Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 18:40:05.677064 waagent[2754]: 2026-01-23T18:40:05.676994Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 18:40:05.677313 waagent[2754]: 2026-01-23T18:40:05.677083Z INFO ExtHandler ExtHandler OS: flatcar 4547.1.0 Jan 23 18:40:05.677313 waagent[2754]: 2026-01-23T18:40:05.677119Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 18:40:05.677313 waagent[2754]: 2026-01-23T18:40:05.677153Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 23 18:40:05.708463 waagent[2754]: 2026-01-23T18:40:05.708412Z INFO ExtHandler ExtHandler Distro: flatcar-4547.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 18:40:05.708595 waagent[2754]: 2026-01-23T18:40:05.708567Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 18:40:05.708643 waagent[2754]: 2026-01-23T18:40:05.708623Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 18:40:05.713093 waagent[2754]: 2026-01-23T18:40:05.713048Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 18:40:05.720449 waagent[2754]: 2026-01-23T18:40:05.720418Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 18:40:05.720727 waagent[2754]: 2026-01-23T18:40:05.720703Z INFO ExtHandler Jan 23 18:40:05.720762 waagent[2754]: 2026-01-23T18:40:05.720749Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 19c65fee-1492-4099-aaf9-c983519c862f eTag: 16240273927685262936 source: Fabric] Jan 23 18:40:05.720969 waagent[2754]: 2026-01-23T18:40:05.720942Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 18:40:05.721288 waagent[2754]: 2026-01-23T18:40:05.721261Z INFO ExtHandler Jan 23 18:40:05.721337 waagent[2754]: 2026-01-23T18:40:05.721299Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 18:40:05.728726 waagent[2754]: 2026-01-23T18:40:05.728698Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 18:40:05.810189 waagent[2754]: 2026-01-23T18:40:05.810144Z INFO ExtHandler Downloaded certificate {'thumbprint': '6923508F4B702833457652E1EF7A801F6CA91566', 'hasPrivateKey': True} Jan 23 18:40:05.810468 waagent[2754]: 2026-01-23T18:40:05.810443Z INFO ExtHandler Fetch goal state completed Jan 23 18:40:05.826605 waagent[2754]: 2026-01-23T18:40:05.826531Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.5-dev (Library: OpenSSL 3.5.5-dev ) Jan 23 18:40:05.830469 waagent[2754]: 2026-01-23T18:40:05.830421Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2754 Jan 23 18:40:05.830566 waagent[2754]: 2026-01-23T18:40:05.830540Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 18:40:05.830771 waagent[2754]: 2026-01-23T18:40:05.830752Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 18:40:05.831707 waagent[2754]: 2026-01-23T18:40:05.831669Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 18:40:05.832005 waagent[2754]: 2026-01-23T18:40:05.831977Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 18:40:05.832105 waagent[2754]: 2026-01-23T18:40:05.832083Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 23 18:40:05.832450 waagent[2754]: 2026-01-23T18:40:05.832425Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 18:40:05.870128 waagent[2754]: 2026-01-23T18:40:05.870101Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 18:40:05.870259 waagent[2754]: 2026-01-23T18:40:05.870234Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 18:40:05.875717 waagent[2754]: 2026-01-23T18:40:05.875412Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 18:40:05.880097 systemd[1]: Reload requested from client PID 2769 ('systemctl') (unit waagent.service)... Jan 23 18:40:05.880109 systemd[1]: Reloading... Jan 23 18:40:05.958809 zram_generator::config[2811]: No configuration found. Jan 23 18:40:06.116539 systemd[1]: Reloading finished in 236 ms. Jan 23 18:40:06.146885 waagent[2754]: 2026-01-23T18:40:06.143979Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 18:40:06.146885 waagent[2754]: 2026-01-23T18:40:06.144112Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 18:40:06.159810 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#97 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 18:40:06.852682 waagent[2754]: 2026-01-23T18:40:06.852614Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
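
The "Persistent firewall rules" setup above ends with the DROP rule not yet in place; the EnvHandler thread then programs the three OUTPUT rules that show up in the listings a little further below. Those listings come from iptables' security table (the same table the failing -L OUTPUT check queries), so a hedged reconstruction of equivalent iptables invocations looks like this; waagent's real code path may differ:

    # Hedged sketch: iptables commands equivalent to the three OUTPUT rules
    # listed later in this log (security table, destination 168.63.129.16).
    import subprocess

    WIRESERVER = "168.63.129.16"
    rules = [
        # allow DNS to the wireserver
        ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        # allow root-owned traffic (the agent runs as UID 0)
        ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # drop anything else that tries to open a new connection
        ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(
            ["iptables", "-w", "-t", "security", "-A", "OUTPUT", "-d", WIRESERVER, *rule],
            check=True,
        )
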
Jan 23 18:40:06.852977 waagent[2754]: 2026-01-23T18:40:06.852953Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 23 18:40:06.853640 waagent[2754]: 2026-01-23T18:40:06.853608Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 18:40:06.853708 waagent[2754]: 2026-01-23T18:40:06.853641Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 18:40:06.853997 waagent[2754]: 2026-01-23T18:40:06.853970Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 18:40:06.854058 waagent[2754]: 2026-01-23T18:40:06.854035Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 18:40:06.854115 waagent[2754]: 2026-01-23T18:40:06.854099Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 18:40:06.854218 waagent[2754]: 2026-01-23T18:40:06.854201Z INFO EnvHandler ExtHandler Configure routes Jan 23 18:40:06.854266 waagent[2754]: 2026-01-23T18:40:06.854245Z INFO EnvHandler ExtHandler Gateway:None Jan 23 18:40:06.854298 waagent[2754]: 2026-01-23T18:40:06.854284Z INFO EnvHandler ExtHandler Routes:None Jan 23 18:40:06.854436 waagent[2754]: 2026-01-23T18:40:06.854419Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 18:40:06.854651 waagent[2754]: 2026-01-23T18:40:06.854626Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 18:40:06.854898 waagent[2754]: 2026-01-23T18:40:06.854840Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 18:40:06.854930 waagent[2754]: 2026-01-23T18:40:06.854888Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 18:40:06.855185 waagent[2754]: 2026-01-23T18:40:06.855159Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 18:40:06.855285 waagent[2754]: 2026-01-23T18:40:06.855268Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 18:40:06.855332 waagent[2754]: 2026-01-23T18:40:06.855311Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
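
"Goal State Period: 6 sec" above describes the agent's main loop: every few seconds it re-reads the goal state from the WireServer and compares the incarnation number against the one it last processed (incarnation 1 in this boot). A hedged sketch of that poll; the /machine/?comp=goalstate path and the x-ms-version header match the wire protocol version negotiated earlier, and the XML handling is deliberately minimal:

    # Hedged sketch: poll the WireServer goal state and watch the incarnation
    # number, roughly what the 6-second goal-state loop does.
    import re, time, urllib.request

    WIRESERVER = "168.63.129.16"

    def fetch_incarnation() -> str:
        req = urllib.request.Request(
            f"http://{WIRESERVER}/machine/?comp=goalstate",
            headers={"x-ms-version": "2012-11-30"},   # version negotiated above
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            body = resp.read().decode()
        m = re.search(r"<Incarnation>(\d+)</Incarnation>", body)
        return m.group(1) if m else "?"

    last = None
    for _ in range(3):            # a few iterations, for illustration
        cur = fetch_incarnation()
        if cur != last:
            print("goal state incarnation:", cur)
            last = cur
        time.sleep(6)             # the period logged above
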
Jan 23 18:40:06.855968 waagent[2754]: 2026-01-23T18:40:06.855937Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 18:40:06.855968 waagent[2754]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 18:40:06.855968 waagent[2754]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 18:40:06.855968 waagent[2754]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 18:40:06.855968 waagent[2754]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 18:40:06.855968 waagent[2754]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 18:40:06.855968 waagent[2754]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 18:40:06.873319 waagent[2754]: 2026-01-23T18:40:06.873285Z INFO ExtHandler ExtHandler Jan 23 18:40:06.873377 waagent[2754]: 2026-01-23T18:40:06.873339Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b0e4b5e7-bc30-4d9a-a1b1-8c5828171b75 correlation d180f5c2-399f-440b-8d17-12edc5c073ed created: 2026-01-23T18:39:18.724289Z] Jan 23 18:40:06.873598 waagent[2754]: 2026-01-23T18:40:06.873571Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 18:40:06.873954 waagent[2754]: 2026-01-23T18:40:06.873933Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 23 18:40:06.903034 waagent[2754]: 2026-01-23T18:40:06.902995Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 18:40:06.903034 waagent[2754]: Try `iptables -h' or 'iptables --help' for more information.) Jan 23 18:40:06.903294 waagent[2754]: 2026-01-23T18:40:06.903270Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CAFEBEEB-C8A9-4706-99A2-B33922076E1B;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 23 18:40:06.916476 waagent[2754]: 2026-01-23T18:40:06.916436Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 18:40:06.916476 waagent[2754]: Executing ['ip', '-a', '-o', 'link']: Jan 23 18:40:06.916476 waagent[2754]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 18:40:06.916476 waagent[2754]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:65:20 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx6045bde16520 Jan 23 18:40:06.916476 waagent[2754]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:65:20 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 23 18:40:06.916476 waagent[2754]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 18:40:06.916476 waagent[2754]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 18:40:06.916476 waagent[2754]: 2: eth0 inet 10.200.8.22/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 18:40:06.916476 waagent[2754]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 18:40:06.916476 waagent[2754]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 18:40:06.916476 waagent[2754]: 2: eth0 inet6 fe80::6245:bdff:fee1:6520/64 scope link proto kernel_ll \ valid_lft 
forever preferred_lft forever Jan 23 18:40:06.944382 waagent[2754]: 2026-01-23T18:40:06.944341Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 23 18:40:06.944382 waagent[2754]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:40:06.944382 waagent[2754]: pkts bytes target prot opt in out source destination Jan 23 18:40:06.944382 waagent[2754]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:40:06.944382 waagent[2754]: pkts bytes target prot opt in out source destination Jan 23 18:40:06.944382 waagent[2754]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:40:06.944382 waagent[2754]: pkts bytes target prot opt in out source destination Jan 23 18:40:06.944382 waagent[2754]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 18:40:06.944382 waagent[2754]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 18:40:06.944382 waagent[2754]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 18:40:06.947031 waagent[2754]: 2026-01-23T18:40:06.946995Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 18:40:06.947031 waagent[2754]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:40:06.947031 waagent[2754]: pkts bytes target prot opt in out source destination Jan 23 18:40:06.947031 waagent[2754]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:40:06.947031 waagent[2754]: pkts bytes target prot opt in out source destination Jan 23 18:40:06.947031 waagent[2754]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:40:06.947031 waagent[2754]: pkts bytes target prot opt in out source destination Jan 23 18:40:06.947031 waagent[2754]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 18:40:06.947031 waagent[2754]: 7 762 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 18:40:06.947031 waagent[2754]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 18:40:14.173958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:40:14.175379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:40:14.561764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:40:14.565070 (kubelet)[2915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:40:14.598992 kubelet[2915]: E0123 18:40:14.598872 2915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:40:14.601637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:40:14.601754 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:40:14.602066 systemd[1]: kubelet.service: Consumed 125ms CPU time, 110.7M memory peak. Jan 23 18:40:24.748024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:40:24.749354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:40:25.212849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
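
The /proc/net/route dump from the MonitorHandler a little above prints destinations, gateways and masks as little-endian hex, so 0108C80A is 10.200.8.1 (the default gateway acquired earlier) and 10813FA8 is 168.63.129.16. A short decoding helper, nothing Azure- or waagent-specific:

    # Decode the little-endian hex IPv4 fields from /proc/net/route,
    # as seen in the MonitorHandler routing-table dump above.
    import socket, struct

    def hex_to_ip(h: str) -> str:
        # /proc/net/route stores IPv4 addresses as host-order (little-endian) hex
        return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

    print(hex_to_ip("0108C80A"))   # 10.200.8.1      default gateway
    print(hex_to_ip("0008C80A"))   # 10.200.8.0      local subnet
    print(hex_to_ip("10813FA8"))   # 168.63.129.16   wireserver host route
    print(hex_to_ip("FEA9FEA9"))   # 169.254.169.254 instance-metadata host route
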
Jan 23 18:40:25.218053 (kubelet)[2930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:40:25.249323 kubelet[2930]: E0123 18:40:25.249300 2930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:40:25.250808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:40:25.250923 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:40:25.251243 systemd[1]: kubelet.service: Consumed 115ms CPU time, 109.9M memory peak. Jan 23 18:40:25.780919 chronyd[2506]: Selected source PHC0 Jan 23 18:40:35.123098 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:40:35.124141 systemd[1]: Started sshd@0-10.200.8.22:22-10.200.16.10:44674.service - OpenSSH per-connection server daemon (10.200.16.10:44674). Jan 23 18:40:35.497869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:40:35.499133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:40:35.806585 sshd[2938]: Accepted publickey for core from 10.200.16.10 port 44674 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js Jan 23 18:40:35.807650 sshd-session[2938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:40:35.811964 systemd-logind[2527]: New session 4 of user core. Jan 23 18:40:35.819953 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:40:35.954877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:40:35.960948 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:40:35.993081 kubelet[2951]: E0123 18:40:35.993048 2951 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:40:35.994404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:40:35.994530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:40:35.994856 systemd[1]: kubelet.service: Consumed 117ms CPU time, 110.6M memory peak. Jan 23 18:40:36.233295 systemd[1]: Started sshd@1-10.200.8.22:22-10.200.16.10:44676.service - OpenSSH per-connection server daemon (10.200.16.10:44676). Jan 23 18:40:36.785522 sshd[2960]: Accepted publickey for core from 10.200.16.10 port 44676 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js Jan 23 18:40:36.786659 sshd-session[2960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:40:36.791147 systemd-logind[2527]: New session 5 of user core. Jan 23 18:40:36.795953 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 18:40:37.099688 sshd[2964]: Connection closed by 10.200.16.10 port 44676 Jan 23 18:40:37.100955 sshd-session[2960]: pam_unix(sshd:session): session closed for user core Jan 23 18:40:37.103666 systemd[1]: sshd@1-10.200.8.22:22-10.200.16.10:44676.service: Deactivated successfully. Jan 23 18:40:37.105942 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:40:37.106772 systemd-logind[2527]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:40:37.107486 systemd-logind[2527]: Removed session 5. Jan 23 18:40:37.220090 systemd[1]: Started sshd@2-10.200.8.22:22-10.200.16.10:44692.service - OpenSSH per-connection server daemon (10.200.16.10:44692). Jan 23 18:40:37.771458 sshd[2970]: Accepted publickey for core from 10.200.16.10 port 44692 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js Jan 23 18:40:37.772393 sshd-session[2970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:40:37.776974 systemd-logind[2527]: New session 6 of user core. Jan 23 18:40:37.782944 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:40:38.081521 sshd[2974]: Connection closed by 10.200.16.10 port 44692 Jan 23 18:40:38.081960 sshd-session[2970]: pam_unix(sshd:session): session closed for user core Jan 23 18:40:38.084960 systemd[1]: sshd@2-10.200.8.22:22-10.200.16.10:44692.service: Deactivated successfully. Jan 23 18:40:38.086382 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:40:38.087896 systemd-logind[2527]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:40:38.088475 systemd-logind[2527]: Removed session 6. Jan 23 18:40:38.194878 systemd[1]: Started sshd@3-10.200.8.22:22-10.200.16.10:44704.service - OpenSSH per-connection server daemon (10.200.16.10:44704). Jan 23 18:40:38.747632 sshd[2980]: Accepted publickey for core from 10.200.16.10 port 44704 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js Jan 23 18:40:38.748716 sshd-session[2980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:40:38.752230 systemd-logind[2527]: New session 7 of user core. Jan 23 18:40:38.760942 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:40:39.061378 sshd[2984]: Connection closed by 10.200.16.10 port 44704 Jan 23 18:40:39.061931 sshd-session[2980]: pam_unix(sshd:session): session closed for user core Jan 23 18:40:39.064892 systemd-logind[2527]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:40:39.065531 systemd[1]: sshd@3-10.200.8.22:22-10.200.16.10:44704.service: Deactivated successfully. Jan 23 18:40:39.066968 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:40:39.068312 systemd-logind[2527]: Removed session 7. Jan 23 18:40:39.174927 systemd[1]: Started sshd@4-10.200.8.22:22-10.200.16.10:44718.service - OpenSSH per-connection server daemon (10.200.16.10:44718). Jan 23 18:40:39.728179 sshd[2990]: Accepted publickey for core from 10.200.16.10 port 44718 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js Jan 23 18:40:39.728641 sshd-session[2990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:40:39.732586 systemd-logind[2527]: New session 8 of user core. Jan 23 18:40:39.742944 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 18:40:40.053939 sudo[2995]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:40:40.054153 sudo[2995]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:40:41.481697 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 18:40:41.495037 (dockerd)[3015]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:40:42.627657 dockerd[3015]: time="2026-01-23T18:40:42.627597091Z" level=info msg="Starting up" Jan 23 18:40:42.628327 dockerd[3015]: time="2026-01-23T18:40:42.628302252Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:40:42.638358 dockerd[3015]: time="2026-01-23T18:40:42.638301719Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:40:42.717407 dockerd[3015]: time="2026-01-23T18:40:42.717365760Z" level=info msg="Loading containers: start." Jan 23 18:40:42.742846 kernel: Initializing XFRM netlink socket Jan 23 18:40:43.019500 systemd-networkd[2182]: docker0: Link UP Jan 23 18:40:43.032899 dockerd[3015]: time="2026-01-23T18:40:43.032867842Z" level=info msg="Loading containers: done." Jan 23 18:40:43.044256 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1077673032-merged.mount: Deactivated successfully. Jan 23 18:40:43.058544 dockerd[3015]: time="2026-01-23T18:40:43.058512519Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:40:43.058651 dockerd[3015]: time="2026-01-23T18:40:43.058579060Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:40:43.058651 dockerd[3015]: time="2026-01-23T18:40:43.058639189Z" level=info msg="Initializing buildkit" Jan 23 18:40:43.103381 dockerd[3015]: time="2026-01-23T18:40:43.103348775Z" level=info msg="Completed buildkit initialization" Jan 23 18:40:43.109001 dockerd[3015]: time="2026-01-23T18:40:43.108967100Z" level=info msg="Daemon has completed initialization" Jan 23 18:40:43.109538 dockerd[3015]: time="2026-01-23T18:40:43.109106084Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:40:43.109243 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 18:40:44.068891 containerd[2553]: time="2026-01-23T18:40:44.068772194Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 18:40:44.531861 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 23 18:40:45.122097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054324234.mount: Deactivated successfully. 
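
dockerd's warning above ("Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") means it falls back to the slower generic layer differ: roughly, with redirect_dir support a renamed directory can be represented by a redirect xattr that the native overlay differ would misread. A hedged way to inspect the relevant overlayfs module parameters on the running kernel (the parameter files exist once the overlay module is loaded, as it is here):

    # Hedged sketch: inspect the overlayfs features behind dockerd's
    # "Not using native diff for overlay2" warning.
    from pathlib import Path

    params = Path("/sys/module/overlay/parameters")
    for name in ("redirect_dir", "metacopy", "index"):
        p = params / name
        value = p.read_text().strip() if p.exists() else "parameter not present"
        print(f"overlay {name}: {value}")
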
Jan 23 18:40:45.977704 containerd[2553]: time="2026-01-23T18:40:45.977655734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:45.982294 containerd[2553]: time="2026-01-23T18:40:45.982178127Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27980639" Jan 23 18:40:45.984706 containerd[2553]: time="2026-01-23T18:40:45.984681099Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:45.987966 containerd[2553]: time="2026-01-23T18:40:45.987936635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:45.989090 containerd[2553]: time="2026-01-23T18:40:45.988659055Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.919818519s" Jan 23 18:40:45.989090 containerd[2553]: time="2026-01-23T18:40:45.988691007Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 18:40:45.989453 containerd[2553]: time="2026-01-23T18:40:45.989433164Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 18:40:45.997820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 18:40:45.999150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:40:46.493600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:40:46.498969 (kubelet)[3286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:40:46.529377 kubelet[3286]: E0123 18:40:46.529342 3286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:40:46.530759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:40:46.530891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:40:46.531227 systemd[1]: kubelet.service: Consumed 115ms CPU time, 107.8M memory peak. Jan 23 18:40:47.064197 update_engine[2530]: I20260123 18:40:47.064134 2530 update_attempter.cc:509] Updating boot flags... 
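
The completed pull above carries enough numbers for a quick throughput estimate: 27,980,639 bytes read in 1.919818519s is roughly 14.6 MB/s (using the bytes-read figure rather than the slightly larger reported image size):

    # Arithmetic check on the kube-apiserver pull logged above.
    bytes_read = 27_980_639        # "bytes read" from the containerd log
    duration_s = 1.919818519       # "in 1.919818519s"

    rate = bytes_read / duration_s
    print(f"{rate / 1e6:.1f} MB/s")     # ~14.6 MB/s
    print(f"{rate / 2**20:.1f} MiB/s")  # ~13.9 MiB/s
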
Jan 23 18:40:47.759870 containerd[2553]: time="2026-01-23T18:40:47.759829982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:47.762145 containerd[2553]: time="2026-01-23T18:40:47.762036398Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 23 18:40:47.764408 containerd[2553]: time="2026-01-23T18:40:47.764387677Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:47.768360 containerd[2553]: time="2026-01-23T18:40:47.768331394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:47.769494 containerd[2553]: time="2026-01-23T18:40:47.769049936Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.779590757s" Jan 23 18:40:47.769494 containerd[2553]: time="2026-01-23T18:40:47.769094814Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 18:40:47.769597 containerd[2553]: time="2026-01-23T18:40:47.769580636Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 18:40:49.112334 containerd[2553]: time="2026-01-23T18:40:49.112289827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:49.114594 containerd[2553]: time="2026-01-23T18:40:49.114560667Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 23 18:40:49.117347 containerd[2553]: time="2026-01-23T18:40:49.117316301Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:49.120779 containerd[2553]: time="2026-01-23T18:40:49.120738071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:49.121624 containerd[2553]: time="2026-01-23T18:40:49.121403660Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.351802489s" Jan 23 18:40:49.121624 containerd[2553]: time="2026-01-23T18:40:49.121433805Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 18:40:49.122054 
containerd[2553]: time="2026-01-23T18:40:49.121951494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 18:40:49.889559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1813545226.mount: Deactivated successfully. Jan 23 18:40:50.212424 containerd[2553]: time="2026-01-23T18:40:50.212327027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:50.214735 containerd[2553]: time="2026-01-23T18:40:50.214701587Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=0" Jan 23 18:40:50.217338 containerd[2553]: time="2026-01-23T18:40:50.217303187Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:50.220657 containerd[2553]: time="2026-01-23T18:40:50.220624083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:50.221074 containerd[2553]: time="2026-01-23T18:40:50.220877422Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.098754445s" Jan 23 18:40:50.221074 containerd[2553]: time="2026-01-23T18:40:50.220905847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 18:40:50.221344 containerd[2553]: time="2026-01-23T18:40:50.221326293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 18:40:50.896096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087760481.mount: Deactivated successfully. 
Jan 23 18:40:51.573944 containerd[2553]: time="2026-01-23T18:40:51.573899418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:51.576104 containerd[2553]: time="2026-01-23T18:40:51.575990938Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 23 18:40:51.578447 containerd[2553]: time="2026-01-23T18:40:51.578425946Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:51.581923 containerd[2553]: time="2026-01-23T18:40:51.581895840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:51.582666 containerd[2553]: time="2026-01-23T18:40:51.582480808Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.36113048s" Jan 23 18:40:51.582666 containerd[2553]: time="2026-01-23T18:40:51.582509149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 18:40:51.583175 containerd[2553]: time="2026-01-23T18:40:51.583156067Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 18:40:52.033932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347259286.mount: Deactivated successfully. 
Jan 23 18:40:52.068249 containerd[2553]: time="2026-01-23T18:40:52.068211756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:40:52.070483 containerd[2553]: time="2026-01-23T18:40:52.070331237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:40:52.072899 containerd[2553]: time="2026-01-23T18:40:52.072874571Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:40:52.076487 containerd[2553]: time="2026-01-23T18:40:52.076463037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:40:52.077026 containerd[2553]: time="2026-01-23T18:40:52.076817231Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 493.637674ms" Jan 23 18:40:52.077026 containerd[2553]: time="2026-01-23T18:40:52.076843410Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 18:40:52.077257 containerd[2553]: time="2026-01-23T18:40:52.077241207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 18:40:52.733623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761040926.mount: Deactivated successfully. 
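The containerd entries above, and the etcd pull that follows, each record the pulled image reference, its on-disk size in bytes, and the elapsed pull time. A stdlib-only Python sketch along the following lines can pull those figures out of a saved journal for a quick per-image summary; the regex is written against the backslash-escaped quoting shown in this excerpt and assumes one journal entry per input line, so treat it as an illustration rather than a robust parser.

#!/usr/bin/env python3
"""Summarize containerd "Pulled image" journal entries (image, size, pull time)."""
import re
import sys

# Matches entries like:
#   msg="Pulled image \"registry.k8s.io/pause:3.10\" ... size \"320368\" in 493.637674ms"
PULLED = re.compile(
    r'Pulled image \\?"(?P<ref>[^"\\]+)\\?"'   # image reference
    r'.*?size \\?"(?P<size>\d+)\\?"'           # unpacked size in bytes
    r' in (?P<duration>[\d.]+(?:ms|s))'        # elapsed pull time
)

def summarize(lines):
    for line in lines:
        m = PULLED.search(line)
        if m:
            mib = int(m.group("size")) / (1024 * 1024)
            print(f'{m.group("ref"):55} {mib:8.1f} MiB  {m.group("duration")}')

if __name__ == "__main__":
    # e.g.  journalctl -u containerd --no-pager | python3 pull_summary.py
    summarize(sys.stdin)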
Jan 23 18:40:54.405521 containerd[2553]: time="2026-01-23T18:40:54.405476498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:54.408571 containerd[2553]: time="2026-01-23T18:40:54.408439046Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55859474" Jan 23 18:40:54.410991 containerd[2553]: time="2026-01-23T18:40:54.410968720Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:54.415175 containerd[2553]: time="2026-01-23T18:40:54.415140755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:40:54.415929 containerd[2553]: time="2026-01-23T18:40:54.415782151Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.338516069s" Jan 23 18:40:54.415929 containerd[2553]: time="2026-01-23T18:40:54.415818408Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 18:40:56.748036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 18:40:56.751995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:40:56.872561 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:40:56.872631 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:40:56.872971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:40:56.877007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:40:56.897248 systemd[1]: Reload requested from client PID 3474 ('systemctl') (unit session-8.scope)... Jan 23 18:40:56.897356 systemd[1]: Reloading... Jan 23 18:40:56.990806 zram_generator::config[3533]: No configuration found. Jan 23 18:40:57.158210 systemd[1]: Reloading finished in 260 ms. Jan 23 18:40:57.268104 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:40:57.268178 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:40:57.268464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:40:57.268510 systemd[1]: kubelet.service: Consumed 59ms CPU time, 64.7M memory peak. Jan 23 18:40:57.270311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:40:57.804685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:40:57.811113 (kubelet)[3591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:40:57.842340 kubelet[3591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 18:40:57.842520 kubelet[3591]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:40:57.842545 kubelet[3591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:40:57.842623 kubelet[3591]: I0123 18:40:57.842610 3591 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:40:58.010224 kubelet[3591]: I0123 18:40:58.010192 3591 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 18:40:58.010224 kubelet[3591]: I0123 18:40:58.010211 3591 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:40:58.010432 kubelet[3591]: I0123 18:40:58.010416 3591 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 18:40:58.035409 kubelet[3591]: E0123 18:40:58.035382 3591 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:40:58.036547 kubelet[3591]: I0123 18:40:58.036406 3591 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:40:58.043065 kubelet[3591]: I0123 18:40:58.043049 3591 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:40:58.045958 kubelet[3591]: I0123 18:40:58.045937 3591 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 18:40:58.046161 kubelet[3591]: I0123 18:40:58.046129 3591 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:40:58.046390 kubelet[3591]: I0123 18:40:58.046162 3591 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-a-1dec04cda0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:40:58.047312 kubelet[3591]: I0123 18:40:58.047285 3591 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:40:58.047397 kubelet[3591]: I0123 18:40:58.047382 3591 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 18:40:58.047486 kubelet[3591]: I0123 18:40:58.047476 3591 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:40:58.050744 kubelet[3591]: I0123 18:40:58.050726 3591 kubelet.go:446] "Attempting to sync node with API server" Jan 23 18:40:58.050918 kubelet[3591]: I0123 18:40:58.050756 3591 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:40:58.050918 kubelet[3591]: I0123 18:40:58.050777 3591 kubelet.go:352] "Adding apiserver pod source" Jan 23 18:40:58.050918 kubelet[3591]: I0123 18:40:58.050797 3591 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:40:58.053993 kubelet[3591]: W0123 18:40:58.053473 3591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-a-1dec04cda0&limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 23 18:40:58.053993 kubelet[3591]: E0123 18:40:58.053524 3591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-a-1dec04cda0&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:40:58.054219 
kubelet[3591]: W0123 18:40:58.054191 3591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 23 18:40:58.054279 kubelet[3591]: E0123 18:40:58.054267 3591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:40:58.054377 kubelet[3591]: I0123 18:40:58.054369 3591 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:40:58.054799 kubelet[3591]: I0123 18:40:58.054777 3591 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 18:40:58.054887 kubelet[3591]: W0123 18:40:58.054881 3591 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:40:58.057542 kubelet[3591]: I0123 18:40:58.057473 3591 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:40:58.057542 kubelet[3591]: I0123 18:40:58.057505 3591 server.go:1287] "Started kubelet" Jan 23 18:40:58.063507 kubelet[3591]: I0123 18:40:58.063244 3591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:40:58.063507 kubelet[3591]: E0123 18:40:58.062294 3591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.1.0-a-1dec04cda0.188d70422763f6c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.1.0-a-1dec04cda0,UID:ci-4547.1.0-a-1dec04cda0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.1.0-a-1dec04cda0,},FirstTimestamp:2026-01-23 18:40:58.057488065 +0000 UTC m=+0.243431747,LastTimestamp:2026-01-23 18:40:58.057488065 +0000 UTC m=+0.243431747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.1.0-a-1dec04cda0,}" Jan 23 18:40:58.066372 kubelet[3591]: I0123 18:40:58.066344 3591 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:40:58.067203 kubelet[3591]: I0123 18:40:58.067189 3591 server.go:479] "Adding debug handlers to kubelet server" Jan 23 18:40:58.068028 kubelet[3591]: I0123 18:40:58.068006 3591 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:40:58.068263 kubelet[3591]: E0123 18:40:58.068245 3591 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" Jan 23 18:40:58.070055 kubelet[3591]: I0123 18:40:58.070034 3591 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:40:58.070119 kubelet[3591]: I0123 18:40:58.070077 3591 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:40:58.079238 kubelet[3591]: I0123 18:40:58.079192 3591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 
18:40:58.079622 kubelet[3591]: I0123 18:40:58.079596 3591 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:40:58.079765 kubelet[3591]: I0123 18:40:58.079749 3591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:40:58.080131 kubelet[3591]: E0123 18:40:58.080044 3591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-1dec04cda0?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="200ms" Jan 23 18:40:58.080413 kubelet[3591]: W0123 18:40:58.080378 3591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 23 18:40:58.080459 kubelet[3591]: E0123 18:40:58.080423 3591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:40:58.081067 kubelet[3591]: I0123 18:40:58.081050 3591 factory.go:221] Registration of the systemd container factory successfully Jan 23 18:40:58.081128 kubelet[3591]: I0123 18:40:58.081113 3591 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:40:58.081835 kubelet[3591]: E0123 18:40:58.081815 3591 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:40:58.081917 kubelet[3591]: I0123 18:40:58.081906 3591 factory.go:221] Registration of the containerd container factory successfully Jan 23 18:40:58.092446 kubelet[3591]: I0123 18:40:58.092433 3591 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:40:58.092669 kubelet[3591]: I0123 18:40:58.092536 3591 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:40:58.092669 kubelet[3591]: I0123 18:40:58.092550 3591 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:40:58.098301 kubelet[3591]: I0123 18:40:58.097587 3591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 18:40:58.099299 kubelet[3591]: I0123 18:40:58.099285 3591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 18:40:58.099372 kubelet[3591]: I0123 18:40:58.099367 3591 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 18:40:58.099418 kubelet[3591]: I0123 18:40:58.099413 3591 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 18:40:58.099453 kubelet[3591]: I0123 18:40:58.099449 3591 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 18:40:58.099518 kubelet[3591]: E0123 18:40:58.099508 3591 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:40:58.102332 kubelet[3591]: I0123 18:40:58.102308 3591 policy_none.go:49] "None policy: Start" Jan 23 18:40:58.102332 kubelet[3591]: I0123 18:40:58.102335 3591 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:40:58.102411 kubelet[3591]: I0123 18:40:58.102345 3591 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:40:58.102672 kubelet[3591]: W0123 18:40:58.102659 3591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 23 18:40:58.102838 kubelet[3591]: E0123 18:40:58.102827 3591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:40:58.108474 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:40:58.118564 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:40:58.121077 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:40:58.127728 kubelet[3591]: I0123 18:40:58.127405 3591 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 18:40:58.127836 kubelet[3591]: I0123 18:40:58.127828 3591 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:40:58.127956 kubelet[3591]: I0123 18:40:58.127927 3591 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:40:58.128401 kubelet[3591]: I0123 18:40:58.128391 3591 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:40:58.129546 kubelet[3591]: E0123 18:40:58.129527 3591 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:40:58.129673 kubelet[3591]: E0123 18:40:58.129665 3591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.1.0-a-1dec04cda0\" not found" Jan 23 18:40:58.206467 systemd[1]: Created slice kubepods-burstable-pod53b37f54abc158a20b179719c7b9e3b4.slice - libcontainer container kubepods-burstable-pod53b37f54abc158a20b179719c7b9e3b4.slice. Jan 23 18:40:58.221916 kubelet[3591]: E0123 18:40:58.221893 3591 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.224283 systemd[1]: Created slice kubepods-burstable-pod93c29a744376efbbb89f8d763230cdff.slice - libcontainer container kubepods-burstable-pod93c29a744376efbbb89f8d763230cdff.slice. 
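All of the "connection refused" errors in this stretch point at https://10.200.8.22:6443, the API server this same kubelet is about to launch as a static pod, so they clear up on their own once that pod is running. For an outside script that needs to wait for that point, a stdlib-only polling loop like the one below is one option; the endpoint is taken from the log, and disabling certificate verification here is purely an illustration shortcut, not a recommendation.

import ssl
import time
import urllib.error
import urllib.request

def wait_for_apiserver(url="https://10.200.8.22:6443/healthz", timeout=180.0):
    """Poll the kube-apiserver health endpoint until it answers 200 or the timeout expires."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # illustration only: skip TLS verification
    ctx.verify_mode = ssl.CERT_NONE
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, context=ctx, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass                    # connection refused etc.: keep waiting
        time.sleep(2)
    return False

if __name__ == "__main__":
    print("kube-apiserver healthy:", wait_for_apiserver())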
Jan 23 18:40:58.229131 kubelet[3591]: I0123 18:40:58.229115 3591 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.229361 kubelet[3591]: E0123 18:40:58.229346 3591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.235535 kubelet[3591]: E0123 18:40:58.235520 3591 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.237655 systemd[1]: Created slice kubepods-burstable-pod9e0d0b64355a282b26c1f32a7f0bb5d5.slice - libcontainer container kubepods-burstable-pod9e0d0b64355a282b26c1f32a7f0bb5d5.slice. Jan 23 18:40:58.239188 kubelet[3591]: E0123 18:40:58.239173 3591 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.271475 kubelet[3591]: I0123 18:40:58.271442 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53b37f54abc158a20b179719c7b9e3b4-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-a-1dec04cda0\" (UID: \"53b37f54abc158a20b179719c7b9e3b4\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.271610 kubelet[3591]: I0123 18:40:58.271484 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.271610 kubelet[3591]: I0123 18:40:58.271503 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-kubeconfig\") pod \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.271610 kubelet[3591]: I0123 18:40:58.271519 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e0d0b64355a282b26c1f32a7f0bb5d5-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-a-1dec04cda0\" (UID: \"9e0d0b64355a282b26c1f32a7f0bb5d5\") " pod="kube-system/kube-scheduler-ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.271610 kubelet[3591]: I0123 18:40:58.271536 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53b37f54abc158a20b179719c7b9e3b4-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-a-1dec04cda0\" (UID: \"53b37f54abc158a20b179719c7b9e3b4\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.271610 kubelet[3591]: I0123 18:40:58.271551 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53b37f54abc158a20b179719c7b9e3b4-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4547.1.0-a-1dec04cda0\" (UID: \"53b37f54abc158a20b179719c7b9e3b4\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.271754 kubelet[3591]: I0123 18:40:58.271566 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.271754 kubelet[3591]: I0123 18:40:58.271596 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.271754 kubelet[3591]: I0123 18:40:58.271613 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.280755 kubelet[3591]: E0123 18:40:58.280728 3591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-1dec04cda0?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="400ms" Jan 23 18:40:58.431035 kubelet[3591]: I0123 18:40:58.430911 3591 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.431805 kubelet[3591]: E0123 18:40:58.431217 3591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.523461 containerd[2553]: time="2026-01-23T18:40:58.523429239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-a-1dec04cda0,Uid:53b37f54abc158a20b179719c7b9e3b4,Namespace:kube-system,Attempt:0,}" Jan 23 18:40:58.536871 kubelet[3591]: E0123 18:40:58.536699 3591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.1.0-a-1dec04cda0.188d70422763f6c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.1.0-a-1dec04cda0,UID:ci-4547.1.0-a-1dec04cda0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.1.0-a-1dec04cda0,},FirstTimestamp:2026-01-23 18:40:58.057488065 +0000 UTC m=+0.243431747,LastTimestamp:2026-01-23 18:40:58.057488065 +0000 UTC m=+0.243431747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.1.0-a-1dec04cda0,}" Jan 23 18:40:58.537018 containerd[2553]: time="2026-01-23T18:40:58.536818537Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-a-1dec04cda0,Uid:93c29a744376efbbb89f8d763230cdff,Namespace:kube-system,Attempt:0,}" Jan 23 18:40:58.540541 containerd[2553]: time="2026-01-23T18:40:58.540516897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-a-1dec04cda0,Uid:9e0d0b64355a282b26c1f32a7f0bb5d5,Namespace:kube-system,Attempt:0,}" Jan 23 18:40:58.570358 containerd[2553]: time="2026-01-23T18:40:58.570308486Z" level=info msg="connecting to shim 4ad1621824a8c6da64591e81016feff6a64eb891ba31d35139364897d284959e" address="unix:///run/containerd/s/2ebaa4f06b14bd6f13facea8fdd14c5c175e573d2047dedcf28e218a9b7214f0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:40:58.595587 containerd[2553]: time="2026-01-23T18:40:58.595506794Z" level=info msg="connecting to shim 4e0fb703540d105c605622f114b27fad4d119a8743b2cf354c2d86dfa1df1110" address="unix:///run/containerd/s/7dac9f55ce5498f072fb185754bf5cff724c3abecf4ff1be587a0c0f6aa47d06" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:40:58.596989 systemd[1]: Started cri-containerd-4ad1621824a8c6da64591e81016feff6a64eb891ba31d35139364897d284959e.scope - libcontainer container 4ad1621824a8c6da64591e81016feff6a64eb891ba31d35139364897d284959e. Jan 23 18:40:58.619509 containerd[2553]: time="2026-01-23T18:40:58.619147746Z" level=info msg="connecting to shim b6d0ef84c70ca832c8a072508cd1b5d49e6645d9a1c3fc01adf0af3cb8be8249" address="unix:///run/containerd/s/4e756a51c1fbc708a3480ac0b56d174a7bec03f87ddce6cbb249613c3f8e4a8c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:40:58.632027 systemd[1]: Started cri-containerd-4e0fb703540d105c605622f114b27fad4d119a8743b2cf354c2d86dfa1df1110.scope - libcontainer container 4e0fb703540d105c605622f114b27fad4d119a8743b2cf354c2d86dfa1df1110. Jan 23 18:40:58.656080 systemd[1]: Started cri-containerd-b6d0ef84c70ca832c8a072508cd1b5d49e6645d9a1c3fc01adf0af3cb8be8249.scope - libcontainer container b6d0ef84c70ca832c8a072508cd1b5d49e6645d9a1c3fc01adf0af3cb8be8249. 
Jan 23 18:40:58.662345 containerd[2553]: time="2026-01-23T18:40:58.662304164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-a-1dec04cda0,Uid:53b37f54abc158a20b179719c7b9e3b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ad1621824a8c6da64591e81016feff6a64eb891ba31d35139364897d284959e\"" Jan 23 18:40:58.667988 containerd[2553]: time="2026-01-23T18:40:58.667966226Z" level=info msg="CreateContainer within sandbox \"4ad1621824a8c6da64591e81016feff6a64eb891ba31d35139364897d284959e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:40:58.682836 kubelet[3591]: E0123 18:40:58.681606 3591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-1dec04cda0?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="800ms" Jan 23 18:40:58.685169 containerd[2553]: time="2026-01-23T18:40:58.685149903Z" level=info msg="Container 533b61f52d8453f4b69120da51da7a7cadebca2f7e3d1afb2be86640eec947fa: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:40:58.698752 containerd[2553]: time="2026-01-23T18:40:58.698726829Z" level=info msg="CreateContainer within sandbox \"4ad1621824a8c6da64591e81016feff6a64eb891ba31d35139364897d284959e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"533b61f52d8453f4b69120da51da7a7cadebca2f7e3d1afb2be86640eec947fa\"" Jan 23 18:40:58.699381 containerd[2553]: time="2026-01-23T18:40:58.699244519Z" level=info msg="StartContainer for \"533b61f52d8453f4b69120da51da7a7cadebca2f7e3d1afb2be86640eec947fa\"" Jan 23 18:40:58.700306 containerd[2553]: time="2026-01-23T18:40:58.700279135Z" level=info msg="connecting to shim 533b61f52d8453f4b69120da51da7a7cadebca2f7e3d1afb2be86640eec947fa" address="unix:///run/containerd/s/2ebaa4f06b14bd6f13facea8fdd14c5c175e573d2047dedcf28e218a9b7214f0" protocol=ttrpc version=3 Jan 23 18:40:58.702722 containerd[2553]: time="2026-01-23T18:40:58.702635507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-a-1dec04cda0,Uid:93c29a744376efbbb89f8d763230cdff,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e0fb703540d105c605622f114b27fad4d119a8743b2cf354c2d86dfa1df1110\"" Jan 23 18:40:58.705255 containerd[2553]: time="2026-01-23T18:40:58.705164556Z" level=info msg="CreateContainer within sandbox \"4e0fb703540d105c605622f114b27fad4d119a8743b2cf354c2d86dfa1df1110\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:40:58.721084 systemd[1]: Started cri-containerd-533b61f52d8453f4b69120da51da7a7cadebca2f7e3d1afb2be86640eec947fa.scope - libcontainer container 533b61f52d8453f4b69120da51da7a7cadebca2f7e3d1afb2be86640eec947fa. 
Jan 23 18:40:58.721496 containerd[2553]: time="2026-01-23T18:40:58.721468486Z" level=info msg="Container c6daabe5171cb08a6e9b490af290c88ad87b2743958f31268b79a7a38e47c114: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:40:58.724289 containerd[2553]: time="2026-01-23T18:40:58.724265701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-a-1dec04cda0,Uid:9e0d0b64355a282b26c1f32a7f0bb5d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6d0ef84c70ca832c8a072508cd1b5d49e6645d9a1c3fc01adf0af3cb8be8249\"" Jan 23 18:40:58.727543 containerd[2553]: time="2026-01-23T18:40:58.727469382Z" level=info msg="CreateContainer within sandbox \"b6d0ef84c70ca832c8a072508cd1b5d49e6645d9a1c3fc01adf0af3cb8be8249\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:40:58.736911 containerd[2553]: time="2026-01-23T18:40:58.736877770Z" level=info msg="CreateContainer within sandbox \"4e0fb703540d105c605622f114b27fad4d119a8743b2cf354c2d86dfa1df1110\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c6daabe5171cb08a6e9b490af290c88ad87b2743958f31268b79a7a38e47c114\"" Jan 23 18:40:58.737847 containerd[2553]: time="2026-01-23T18:40:58.737393867Z" level=info msg="StartContainer for \"c6daabe5171cb08a6e9b490af290c88ad87b2743958f31268b79a7a38e47c114\"" Jan 23 18:40:58.738484 containerd[2553]: time="2026-01-23T18:40:58.738444941Z" level=info msg="connecting to shim c6daabe5171cb08a6e9b490af290c88ad87b2743958f31268b79a7a38e47c114" address="unix:///run/containerd/s/7dac9f55ce5498f072fb185754bf5cff724c3abecf4ff1be587a0c0f6aa47d06" protocol=ttrpc version=3 Jan 23 18:40:58.747566 containerd[2553]: time="2026-01-23T18:40:58.747543050Z" level=info msg="Container e8076f2926ab51625cfac429ebcafc6e252652137a76e0ee89736b5106735a88: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:40:58.758062 systemd[1]: Started cri-containerd-c6daabe5171cb08a6e9b490af290c88ad87b2743958f31268b79a7a38e47c114.scope - libcontainer container c6daabe5171cb08a6e9b490af290c88ad87b2743958f31268b79a7a38e47c114. Jan 23 18:40:58.763643 containerd[2553]: time="2026-01-23T18:40:58.763255293Z" level=info msg="CreateContainer within sandbox \"b6d0ef84c70ca832c8a072508cd1b5d49e6645d9a1c3fc01adf0af3cb8be8249\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e8076f2926ab51625cfac429ebcafc6e252652137a76e0ee89736b5106735a88\"" Jan 23 18:40:58.765197 containerd[2553]: time="2026-01-23T18:40:58.765179740Z" level=info msg="StartContainer for \"e8076f2926ab51625cfac429ebcafc6e252652137a76e0ee89736b5106735a88\"" Jan 23 18:40:58.766216 containerd[2553]: time="2026-01-23T18:40:58.766176290Z" level=info msg="connecting to shim e8076f2926ab51625cfac429ebcafc6e252652137a76e0ee89736b5106735a88" address="unix:///run/containerd/s/4e756a51c1fbc708a3480ac0b56d174a7bec03f87ddce6cbb249613c3f8e4a8c" protocol=ttrpc version=3 Jan 23 18:40:58.777886 containerd[2553]: time="2026-01-23T18:40:58.777857262Z" level=info msg="StartContainer for \"533b61f52d8453f4b69120da51da7a7cadebca2f7e3d1afb2be86640eec947fa\" returns successfully" Jan 23 18:40:58.788097 systemd[1]: Started cri-containerd-e8076f2926ab51625cfac429ebcafc6e252652137a76e0ee89736b5106735a88.scope - libcontainer container e8076f2926ab51625cfac429ebcafc6e252652137a76e0ee89736b5106735a88. 
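The sandbox and container activity above follows the usual CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer then reports success for it. A small stdlib-only sketch, with regexes written against the escaped quoting in this excerpt, can stitch those three entry types back together per static pod (again assuming one journal entry per input line):

import re
import sys

SANDBOX = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),.*?'
    r'returns sandbox id \\?"(?P<sid>[0-9a-f]+)\\?"')
CREATE = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sid>[0-9a-f]+)\\?" for '
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),.*?'
    r'returns container id \\?"(?P<cid>[0-9a-f]+)\\?"')
START = re.compile(r'StartContainer for \\?"(?P<cid>[0-9a-f]+)\\?" returns successfully')

def correlate(lines):
    pods, containers, started = {}, {}, set()
    for line in lines:
        for m in SANDBOX.finditer(line):
            pods[m.group("sid")] = m.group("pod")
        for m in CREATE.finditer(line):
            containers[m.group("cid")] = (m.group("sid"), m.group("name"))
        for m in START.finditer(line):
            started.add(m.group("cid"))
    for cid, (sid, name) in containers.items():
        pod = pods.get(sid, "<unknown sandbox>")
        state = "started" if cid in started else "created"
        print(f"{pod}: {name} ({cid[:12]}) {state}")

if __name__ == "__main__":
    correlate(sys.stdin)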
Jan 23 18:40:58.840143 kubelet[3591]: I0123 18:40:58.840120 3591 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:58.867278 containerd[2553]: time="2026-01-23T18:40:58.867214232Z" level=info msg="StartContainer for \"c6daabe5171cb08a6e9b490af290c88ad87b2743958f31268b79a7a38e47c114\" returns successfully" Jan 23 18:40:58.868883 containerd[2553]: time="2026-01-23T18:40:58.868839728Z" level=info msg="StartContainer for \"e8076f2926ab51625cfac429ebcafc6e252652137a76e0ee89736b5106735a88\" returns successfully" Jan 23 18:40:59.111803 kubelet[3591]: E0123 18:40:59.111062 3591 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:59.122696 kubelet[3591]: E0123 18:40:59.122680 3591 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:40:59.129878 kubelet[3591]: E0123 18:40:59.129866 3591 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:00.132525 kubelet[3591]: E0123 18:41:00.132038 3591 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:00.133470 kubelet[3591]: E0123 18:41:00.133284 3591 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:01.022186 kubelet[3591]: E0123 18:41:01.022142 3591 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.1.0-a-1dec04cda0\" not found" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:01.028061 kubelet[3591]: I0123 18:41:01.028015 3591 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:01.054973 kubelet[3591]: I0123 18:41:01.054953 3591 apiserver.go:52] "Watching apiserver" Jan 23 18:41:01.069140 kubelet[3591]: I0123 18:41:01.069118 3591 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:01.070304 kubelet[3591]: I0123 18:41:01.070291 3591 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:41:01.083108 kubelet[3591]: E0123 18:41:01.083054 3591 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:01.083108 kubelet[3591]: I0123 18:41:01.083075 3591 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:01.085468 kubelet[3591]: E0123 18:41:01.085433 3591 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-a-1dec04cda0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:01.085630 kubelet[3591]: I0123 18:41:01.085451 3591 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:01.087088 kubelet[3591]: E0123 18:41:01.087065 3591 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-a-1dec04cda0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:02.307099 kubelet[3591]: I0123 18:41:02.307058 3591 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:02.316045 kubelet[3591]: W0123 18:41:02.316007 3591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 18:41:03.248594 systemd[1]: Reload requested from client PID 3864 ('systemctl') (unit session-8.scope)... Jan 23 18:41:03.248607 systemd[1]: Reloading... Jan 23 18:41:03.347811 zram_generator::config[3917]: No configuration found. Jan 23 18:41:03.547985 systemd[1]: Reloading finished in 299 ms. Jan 23 18:41:03.576732 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:41:03.594544 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:41:03.594822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:41:03.594875 systemd[1]: kubelet.service: Consumed 512ms CPU time, 128.7M memory peak. Jan 23 18:41:03.596272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:41:04.003805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:41:04.013045 (kubelet)[3981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:41:04.047927 kubelet[3981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:41:04.047927 kubelet[3981]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:41:04.047927 kubelet[3981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:41:04.048192 kubelet[3981]: I0123 18:41:04.047972 3981 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:41:04.053548 kubelet[3981]: I0123 18:41:04.053529 3981 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 18:41:04.053548 kubelet[3981]: I0123 18:41:04.053545 3981 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:41:04.053993 kubelet[3981]: I0123 18:41:04.053979 3981 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 18:41:04.056606 kubelet[3981]: I0123 18:41:04.056401 3981 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 23 18:41:04.058231 kubelet[3981]: I0123 18:41:04.058213 3981 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:41:04.061900 kubelet[3981]: I0123 18:41:04.061879 3981 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:41:04.063768 kubelet[3981]: I0123 18:41:04.063747 3981 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 18:41:04.063934 kubelet[3981]: I0123 18:41:04.063911 3981 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:41:04.064064 kubelet[3981]: I0123 18:41:04.063934 3981 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-a-1dec04cda0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:41:04.064152 kubelet[3981]: I0123 18:41:04.064069 3981 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:41:04.064152 kubelet[3981]: I0123 18:41:04.064078 3981 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 18:41:04.064196 kubelet[3981]: I0123 18:41:04.064158 3981 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:41:04.064640 kubelet[3981]: I0123 18:41:04.064625 3981 kubelet.go:446] "Attempting to sync node with API server" Jan 23 18:41:04.064703 kubelet[3981]: I0123 18:41:04.064646 3981 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:41:04.064703 kubelet[3981]: I0123 18:41:04.064668 3981 kubelet.go:352] "Adding apiserver pod source" Jan 23 18:41:04.064703 kubelet[3981]: I0123 18:41:04.064677 3981 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:41:04.069803 kubelet[3981]: I0123 18:41:04.069040 3981 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:41:04.069803 kubelet[3981]: I0123 18:41:04.069427 3981 kubelet.go:890] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 18:41:04.069803 kubelet[3981]: I0123 18:41:04.069768 3981 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:41:04.071336 kubelet[3981]: I0123 18:41:04.071314 3981 server.go:1287] "Started kubelet" Jan 23 18:41:04.074260 kubelet[3981]: I0123 18:41:04.074044 3981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:41:04.081221 kubelet[3981]: I0123 18:41:04.081191 3981 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:41:04.082484 kubelet[3981]: I0123 18:41:04.082463 3981 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:41:04.083969 kubelet[3981]: I0123 18:41:04.083956 3981 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:41:04.084120 kubelet[3981]: E0123 18:41:04.084109 3981 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-1dec04cda0\" not found" Jan 23 18:41:04.086808 kubelet[3981]: I0123 18:41:04.085468 3981 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:41:04.086808 kubelet[3981]: I0123 18:41:04.085557 3981 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:41:04.087024 kubelet[3981]: I0123 18:41:04.087003 3981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 18:41:04.088802 kubelet[3981]: I0123 18:41:04.087879 3981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 18:41:04.088802 kubelet[3981]: I0123 18:41:04.087901 3981 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 18:41:04.088802 kubelet[3981]: I0123 18:41:04.087917 3981 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:41:04.088802 kubelet[3981]: I0123 18:41:04.087922 3981 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 18:41:04.088802 kubelet[3981]: E0123 18:41:04.087954 3981 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:41:04.092717 kubelet[3981]: I0123 18:41:04.091844 3981 server.go:479] "Adding debug handlers to kubelet server" Jan 23 18:41:04.094493 kubelet[3981]: I0123 18:41:04.093594 3981 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:41:04.094707 kubelet[3981]: I0123 18:41:04.094697 3981 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:41:04.097537 kubelet[3981]: I0123 18:41:04.096210 3981 factory.go:221] Registration of the systemd container factory successfully Jan 23 18:41:04.097688 kubelet[3981]: I0123 18:41:04.097670 3981 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:41:04.100089 kubelet[3981]: I0123 18:41:04.100065 3981 factory.go:221] Registration of the containerd container factory successfully Jan 23 18:41:04.101397 kubelet[3981]: E0123 18:41:04.100884 3981 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:41:04.145208 kubelet[3981]: I0123 18:41:04.145182 3981 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:41:04.145208 kubelet[3981]: I0123 18:41:04.145204 3981 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:41:04.145306 kubelet[3981]: I0123 18:41:04.145218 3981 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:41:04.145460 kubelet[3981]: I0123 18:41:04.145367 3981 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:41:04.145460 kubelet[3981]: I0123 18:41:04.145377 3981 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:41:04.145460 kubelet[3981]: I0123 18:41:04.145392 3981 policy_none.go:49] "None policy: Start" Jan 23 18:41:04.145460 kubelet[3981]: I0123 18:41:04.145402 3981 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:41:04.145460 kubelet[3981]: I0123 18:41:04.145410 3981 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:41:04.145585 kubelet[3981]: I0123 18:41:04.145558 3981 state_mem.go:75] "Updated machine memory state" Jan 23 18:41:04.148441 kubelet[3981]: I0123 18:41:04.148425 3981 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 18:41:04.148588 kubelet[3981]: I0123 18:41:04.148578 3981 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:41:04.148631 kubelet[3981]: I0123 18:41:04.148592 3981 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:41:04.148904 kubelet[3981]: I0123 18:41:04.148892 3981 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:41:04.151133 kubelet[3981]: E0123 18:41:04.150356 3981 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:41:04.188436 kubelet[3981]: I0123 18:41:04.188402 3981 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.188507 kubelet[3981]: I0123 18:41:04.188420 3981 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.188667 kubelet[3981]: I0123 18:41:04.188659 3981 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.195407 kubelet[3981]: W0123 18:41:04.195369 3981 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 18:41:04.198841 kubelet[3981]: W0123 18:41:04.198826 3981 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 18:41:04.199522 kubelet[3981]: W0123 18:41:04.199510 3981 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 18:41:04.199668 kubelet[3981]: E0123 18:41:04.199595 3981 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-a-1dec04cda0\" already exists" pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.251385 kubelet[3981]: I0123 18:41:04.251365 3981 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.261729 kubelet[3981]: I0123 18:41:04.261187 3981 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.261729 kubelet[3981]: I0123 18:41:04.261243 3981 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.387086 kubelet[3981]: I0123 18:41:04.387055 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53b37f54abc158a20b179719c7b9e3b4-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-a-1dec04cda0\" (UID: \"53b37f54abc158a20b179719c7b9e3b4\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.387157 kubelet[3981]: I0123 18:41:04.387095 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53b37f54abc158a20b179719c7b9e3b4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.1.0-a-1dec04cda0\" (UID: \"53b37f54abc158a20b179719c7b9e3b4\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.387157 kubelet[3981]: I0123 18:41:04.387124 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.387157 kubelet[3981]: I0123 18:41:04.387141 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-kubeconfig\") pod 
\"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.387157 kubelet[3981]: I0123 18:41:04.387154 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53b37f54abc158a20b179719c7b9e3b4-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-a-1dec04cda0\" (UID: \"53b37f54abc158a20b179719c7b9e3b4\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.387249 kubelet[3981]: I0123 18:41:04.387171 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.387249 kubelet[3981]: I0123 18:41:04.387189 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.387249 kubelet[3981]: I0123 18:41:04.387214 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e0d0b64355a282b26c1f32a7f0bb5d5-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-a-1dec04cda0\" (UID: \"9e0d0b64355a282b26c1f32a7f0bb5d5\") " pod="kube-system/kube-scheduler-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:04.387249 kubelet[3981]: I0123 18:41:04.387230 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93c29a744376efbbb89f8d763230cdff-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" (UID: \"93c29a744376efbbb89f8d763230cdff\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:05.049839 sudo[2995]: pam_unix(sudo:session): session closed for user root Jan 23 18:41:05.066190 kubelet[3981]: I0123 18:41:05.066167 3981 apiserver.go:52] "Watching apiserver" Jan 23 18:41:05.085996 kubelet[3981]: I0123 18:41:05.085966 3981 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:41:05.125957 kubelet[3981]: I0123 18:41:05.125888 3981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" podStartSLOduration=1.125861708 podStartE2EDuration="1.125861708s" podCreationTimestamp="2026-01-23 18:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:41:05.12459944 +0000 UTC m=+1.108477634" watchObservedRunningTime="2026-01-23 18:41:05.125861708 +0000 UTC m=+1.109739901" Jan 23 18:41:05.126345 kubelet[3981]: I0123 18:41:05.126243 3981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.1.0-a-1dec04cda0" podStartSLOduration=1.126234168 podStartE2EDuration="1.126234168s" podCreationTimestamp="2026-01-23 18:41:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:41:05.116517583 +0000 UTC m=+1.100395777" watchObservedRunningTime="2026-01-23 18:41:05.126234168 +0000 UTC m=+1.110112365" Jan 23 18:41:05.126738 kubelet[3981]: I0123 18:41:05.126692 3981 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:05.136234 kubelet[3981]: I0123 18:41:05.136197 3981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.1.0-a-1dec04cda0" podStartSLOduration=3.136186572 podStartE2EDuration="3.136186572s" podCreationTimestamp="2026-01-23 18:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:41:05.135997031 +0000 UTC m=+1.119875236" watchObservedRunningTime="2026-01-23 18:41:05.136186572 +0000 UTC m=+1.120064763" Jan 23 18:41:05.136460 kubelet[3981]: W0123 18:41:05.136370 3981 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 18:41:05.136502 kubelet[3981]: E0123 18:41:05.136466 3981 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.1.0-a-1dec04cda0\" already exists" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-1dec04cda0" Jan 23 18:41:05.154948 sshd[2994]: Connection closed by 10.200.16.10 port 44718 Jan 23 18:41:05.155939 sshd-session[2990]: pam_unix(sshd:session): session closed for user core Jan 23 18:41:05.160961 systemd-logind[2527]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:41:05.161985 systemd[1]: sshd@4-10.200.8.22:22-10.200.16.10:44718.service: Deactivated successfully. Jan 23 18:41:05.166524 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:41:05.166828 systemd[1]: session-8.scope: Consumed 2.830s CPU time, 233M memory peak. Jan 23 18:41:05.169760 systemd-logind[2527]: Removed session 8. Jan 23 18:41:08.049921 kubelet[3981]: I0123 18:41:08.049878 3981 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:41:08.050328 containerd[2553]: time="2026-01-23T18:41:08.050213493Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 18:41:08.050512 kubelet[3981]: I0123 18:41:08.050360 3981 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:41:08.728407 systemd[1]: Created slice kubepods-burstable-pod56b33da6_e902_4a5d_85e7_43a7badeaef7.slice - libcontainer container kubepods-burstable-pod56b33da6_e902_4a5d_85e7_43a7badeaef7.slice. Jan 23 18:41:08.736905 systemd[1]: Created slice kubepods-besteffort-podf97099ca_6c4a_4289_8602_de330ca5a01d.slice - libcontainer container kubepods-besteffort-podf97099ca_6c4a_4289_8602_de330ca5a01d.slice. 
Jan 23 18:41:08.817845 kubelet[3981]: I0123 18:41:08.817818 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f97099ca-6c4a-4289-8602-de330ca5a01d-xtables-lock\") pod \"kube-proxy-w9fnp\" (UID: \"f97099ca-6c4a-4289-8602-de330ca5a01d\") " pod="kube-system/kube-proxy-w9fnp" Jan 23 18:41:08.817845 kubelet[3981]: I0123 18:41:08.817848 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/56b33da6-e902-4a5d-85e7-43a7badeaef7-flannel-cfg\") pod \"kube-flannel-ds-q8vwh\" (UID: \"56b33da6-e902-4a5d-85e7-43a7badeaef7\") " pod="kube-flannel/kube-flannel-ds-q8vwh" Jan 23 18:41:08.817845 kubelet[3981]: I0123 18:41:08.817864 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/56b33da6-e902-4a5d-85e7-43a7badeaef7-run\") pod \"kube-flannel-ds-q8vwh\" (UID: \"56b33da6-e902-4a5d-85e7-43a7badeaef7\") " pod="kube-flannel/kube-flannel-ds-q8vwh" Jan 23 18:41:08.817845 kubelet[3981]: I0123 18:41:08.817879 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/56b33da6-e902-4a5d-85e7-43a7badeaef7-cni\") pod \"kube-flannel-ds-q8vwh\" (UID: \"56b33da6-e902-4a5d-85e7-43a7badeaef7\") " pod="kube-flannel/kube-flannel-ds-q8vwh" Jan 23 18:41:08.818063 kubelet[3981]: I0123 18:41:08.817894 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56b33da6-e902-4a5d-85e7-43a7badeaef7-xtables-lock\") pod \"kube-flannel-ds-q8vwh\" (UID: \"56b33da6-e902-4a5d-85e7-43a7badeaef7\") " pod="kube-flannel/kube-flannel-ds-q8vwh" Jan 23 18:41:08.818063 kubelet[3981]: I0123 18:41:08.817910 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f97099ca-6c4a-4289-8602-de330ca5a01d-kube-proxy\") pod \"kube-proxy-w9fnp\" (UID: \"f97099ca-6c4a-4289-8602-de330ca5a01d\") " pod="kube-system/kube-proxy-w9fnp" Jan 23 18:41:08.818063 kubelet[3981]: I0123 18:41:08.817925 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/56b33da6-e902-4a5d-85e7-43a7badeaef7-cni-plugin\") pod \"kube-flannel-ds-q8vwh\" (UID: \"56b33da6-e902-4a5d-85e7-43a7badeaef7\") " pod="kube-flannel/kube-flannel-ds-q8vwh" Jan 23 18:41:08.818063 kubelet[3981]: I0123 18:41:08.817963 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqv2s\" (UniqueName: \"kubernetes.io/projected/56b33da6-e902-4a5d-85e7-43a7badeaef7-kube-api-access-hqv2s\") pod \"kube-flannel-ds-q8vwh\" (UID: \"56b33da6-e902-4a5d-85e7-43a7badeaef7\") " pod="kube-flannel/kube-flannel-ds-q8vwh" Jan 23 18:41:08.818063 kubelet[3981]: I0123 18:41:08.817979 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f97099ca-6c4a-4289-8602-de330ca5a01d-lib-modules\") pod \"kube-proxy-w9fnp\" (UID: \"f97099ca-6c4a-4289-8602-de330ca5a01d\") " pod="kube-system/kube-proxy-w9fnp" Jan 23 18:41:08.818142 kubelet[3981]: I0123 18:41:08.817994 3981 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9lhk\" (UniqueName: \"kubernetes.io/projected/f97099ca-6c4a-4289-8602-de330ca5a01d-kube-api-access-n9lhk\") pod \"kube-proxy-w9fnp\" (UID: \"f97099ca-6c4a-4289-8602-de330ca5a01d\") " pod="kube-system/kube-proxy-w9fnp" Jan 23 18:41:09.033451 containerd[2553]: time="2026-01-23T18:41:09.033417494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q8vwh,Uid:56b33da6-e902-4a5d-85e7-43a7badeaef7,Namespace:kube-flannel,Attempt:0,}" Jan 23 18:41:09.049296 containerd[2553]: time="2026-01-23T18:41:09.049263825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9fnp,Uid:f97099ca-6c4a-4289-8602-de330ca5a01d,Namespace:kube-system,Attempt:0,}" Jan 23 18:41:09.069523 containerd[2553]: time="2026-01-23T18:41:09.069491394Z" level=info msg="connecting to shim d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a" address="unix:///run/containerd/s/e71414ee1ccc0b9da1ab59f127ed6e8bb0398bfb55204d4130532d30cd65209a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:41:09.094019 systemd[1]: Started cri-containerd-d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a.scope - libcontainer container d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a. Jan 23 18:41:09.109179 containerd[2553]: time="2026-01-23T18:41:09.109122029Z" level=info msg="connecting to shim 38014288726081b888639390defbc960da9b1f2b21d41342e2ed4c3c53cd40c1" address="unix:///run/containerd/s/1bd46f5f9ae1a063b35c6c484a8c71f74d5887ff30b4b438af22d4ca42fb2b8a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:41:09.129095 systemd[1]: Started cri-containerd-38014288726081b888639390defbc960da9b1f2b21d41342e2ed4c3c53cd40c1.scope - libcontainer container 38014288726081b888639390defbc960da9b1f2b21d41342e2ed4c3c53cd40c1. 
Jan 23 18:41:09.158454 containerd[2553]: time="2026-01-23T18:41:09.158426048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q8vwh,Uid:56b33da6-e902-4a5d-85e7-43a7badeaef7,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a\"" Jan 23 18:41:09.162619 containerd[2553]: time="2026-01-23T18:41:09.161931013Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 23 18:41:09.174437 containerd[2553]: time="2026-01-23T18:41:09.174388390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9fnp,Uid:f97099ca-6c4a-4289-8602-de330ca5a01d,Namespace:kube-system,Attempt:0,} returns sandbox id \"38014288726081b888639390defbc960da9b1f2b21d41342e2ed4c3c53cd40c1\"" Jan 23 18:41:09.176068 containerd[2553]: time="2026-01-23T18:41:09.176043430Z" level=info msg="CreateContainer within sandbox \"38014288726081b888639390defbc960da9b1f2b21d41342e2ed4c3c53cd40c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:41:09.195067 containerd[2553]: time="2026-01-23T18:41:09.195039790Z" level=info msg="Container 9bb10b70bc09c638239d8474fbdda4fb939eb0650abc566013b9064e4800e443: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:41:09.209804 containerd[2553]: time="2026-01-23T18:41:09.209530478Z" level=info msg="CreateContainer within sandbox \"38014288726081b888639390defbc960da9b1f2b21d41342e2ed4c3c53cd40c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9bb10b70bc09c638239d8474fbdda4fb939eb0650abc566013b9064e4800e443\"" Jan 23 18:41:09.210187 containerd[2553]: time="2026-01-23T18:41:09.210166819Z" level=info msg="StartContainer for \"9bb10b70bc09c638239d8474fbdda4fb939eb0650abc566013b9064e4800e443\"" Jan 23 18:41:09.211548 containerd[2553]: time="2026-01-23T18:41:09.211518908Z" level=info msg="connecting to shim 9bb10b70bc09c638239d8474fbdda4fb939eb0650abc566013b9064e4800e443" address="unix:///run/containerd/s/1bd46f5f9ae1a063b35c6c484a8c71f74d5887ff30b4b438af22d4ca42fb2b8a" protocol=ttrpc version=3 Jan 23 18:41:09.227932 systemd[1]: Started cri-containerd-9bb10b70bc09c638239d8474fbdda4fb939eb0650abc566013b9064e4800e443.scope - libcontainer container 9bb10b70bc09c638239d8474fbdda4fb939eb0650abc566013b9064e4800e443. Jan 23 18:41:09.278930 containerd[2553]: time="2026-01-23T18:41:09.278904661Z" level=info msg="StartContainer for \"9bb10b70bc09c638239d8474fbdda4fb939eb0650abc566013b9064e4800e443\" returns successfully" Jan 23 18:41:10.145601 kubelet[3981]: I0123 18:41:10.145551 3981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w9fnp" podStartSLOduration=2.145536847 podStartE2EDuration="2.145536847s" podCreationTimestamp="2026-01-23 18:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:41:10.145311532 +0000 UTC m=+6.129189727" watchObservedRunningTime="2026-01-23 18:41:10.145536847 +0000 UTC m=+6.129415041" Jan 23 18:41:11.128749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401365412.mount: Deactivated successfully. 
Jan 23 18:41:11.193254 containerd[2553]: time="2026-01-23T18:41:11.193220897Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:41:11.195543 containerd[2553]: time="2026-01-23T18:41:11.195509951Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=1011362" Jan 23 18:41:11.198017 containerd[2553]: time="2026-01-23T18:41:11.197979024Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:41:11.201033 containerd[2553]: time="2026-01-23T18:41:11.200993728Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:41:11.201643 containerd[2553]: time="2026-01-23T18:41:11.201363508Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.039351357s" Jan 23 18:41:11.201643 containerd[2553]: time="2026-01-23T18:41:11.201389317Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 23 18:41:11.203238 containerd[2553]: time="2026-01-23T18:41:11.203205398Z" level=info msg="CreateContainer within sandbox \"d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 23 18:41:11.223448 containerd[2553]: time="2026-01-23T18:41:11.223401727Z" level=info msg="Container 01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:41:11.234174 containerd[2553]: time="2026-01-23T18:41:11.234147688Z" level=info msg="CreateContainer within sandbox \"d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a\"" Jan 23 18:41:11.234691 containerd[2553]: time="2026-01-23T18:41:11.234674844Z" level=info msg="StartContainer for \"01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a\"" Jan 23 18:41:11.235532 containerd[2553]: time="2026-01-23T18:41:11.235488393Z" level=info msg="connecting to shim 01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a" address="unix:///run/containerd/s/e71414ee1ccc0b9da1ab59f127ed6e8bb0398bfb55204d4130532d30cd65209a" protocol=ttrpc version=3 Jan 23 18:41:11.254944 systemd[1]: Started cri-containerd-01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a.scope - libcontainer container 01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a. Jan 23 18:41:11.277296 systemd[1]: cri-containerd-01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a.scope: Deactivated successfully. 
Jan 23 18:41:11.279239 containerd[2553]: time="2026-01-23T18:41:11.279171102Z" level=info msg="received container exit event container_id:\"01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a\" id:\"01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a\" pid:4312 exited_at:{seconds:1769193671 nanos:278823770}" Jan 23 18:41:11.280323 containerd[2553]: time="2026-01-23T18:41:11.280267367Z" level=info msg="StartContainer for \"01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a\" returns successfully" Jan 23 18:41:12.072743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01863ffabd36d8516598269ddd65b032161289130fa6cfac582a67a03de7a22a-rootfs.mount: Deactivated successfully. Jan 23 18:41:13.147294 containerd[2553]: time="2026-01-23T18:41:13.146548587Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 23 18:41:15.170782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558802204.mount: Deactivated successfully. Jan 23 18:41:15.815070 containerd[2553]: time="2026-01-23T18:41:15.815025551Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:41:15.817287 containerd[2553]: time="2026-01-23T18:41:15.817168018Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=0" Jan 23 18:41:15.820176 containerd[2553]: time="2026-01-23T18:41:15.819778714Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:41:15.823962 containerd[2553]: time="2026-01-23T18:41:15.823922712Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:41:15.824825 containerd[2553]: time="2026-01-23T18:41:15.824800984Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.67818405s" Jan 23 18:41:15.824955 containerd[2553]: time="2026-01-23T18:41:15.824900126Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 23 18:41:15.826956 containerd[2553]: time="2026-01-23T18:41:15.826933083Z" level=info msg="CreateContainer within sandbox \"d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:41:15.844433 containerd[2553]: time="2026-01-23T18:41:15.843613856Z" level=info msg="Container a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:41:15.854807 containerd[2553]: time="2026-01-23T18:41:15.854770552Z" level=info msg="CreateContainer within sandbox \"d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a\"" Jan 23 18:41:15.855213 containerd[2553]: time="2026-01-23T18:41:15.855189288Z" level=info msg="StartContainer for 
\"a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a\"" Jan 23 18:41:15.856209 containerd[2553]: time="2026-01-23T18:41:15.856101722Z" level=info msg="connecting to shim a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a" address="unix:///run/containerd/s/e71414ee1ccc0b9da1ab59f127ed6e8bb0398bfb55204d4130532d30cd65209a" protocol=ttrpc version=3 Jan 23 18:41:15.874001 systemd[1]: Started cri-containerd-a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a.scope - libcontainer container a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a. Jan 23 18:41:15.893649 systemd[1]: cri-containerd-a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a.scope: Deactivated successfully. Jan 23 18:41:15.898095 containerd[2553]: time="2026-01-23T18:41:15.898065163Z" level=info msg="received container exit event container_id:\"a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a\" id:\"a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a\" pid:4384 exited_at:{seconds:1769193675 nanos:894802271}" Jan 23 18:41:15.899392 containerd[2553]: time="2026-01-23T18:41:15.899373638Z" level=info msg="StartContainer for \"a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a\" returns successfully" Jan 23 18:41:15.921776 kubelet[3981]: I0123 18:41:15.921752 3981 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 18:41:15.958824 systemd[1]: Created slice kubepods-burstable-pod8194249b_63ff_420a_952b_03c19de2176f.slice - libcontainer container kubepods-burstable-pod8194249b_63ff_420a_952b_03c19de2176f.slice. Jan 23 18:41:15.963689 kubelet[3981]: I0123 18:41:15.963667 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8194249b-63ff-420a-952b-03c19de2176f-config-volume\") pod \"coredns-668d6bf9bc-kbqq5\" (UID: \"8194249b-63ff-420a-952b-03c19de2176f\") " pod="kube-system/coredns-668d6bf9bc-kbqq5" Jan 23 18:41:15.963877 kubelet[3981]: I0123 18:41:15.963779 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7h8k\" (UniqueName: \"kubernetes.io/projected/8194249b-63ff-420a-952b-03c19de2176f-kube-api-access-b7h8k\") pod \"coredns-668d6bf9bc-kbqq5\" (UID: \"8194249b-63ff-420a-952b-03c19de2176f\") " pod="kube-system/coredns-668d6bf9bc-kbqq5" Jan 23 18:41:15.964176 systemd[1]: Created slice kubepods-burstable-pod1db3ec6b_3ab7_4795_9b57_33a023d94abc.slice - libcontainer container kubepods-burstable-pod1db3ec6b_3ab7_4795_9b57_33a023d94abc.slice. 
Jan 23 18:41:16.064669 kubelet[3981]: I0123 18:41:16.064571 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1db3ec6b-3ab7-4795-9b57-33a023d94abc-config-volume\") pod \"coredns-668d6bf9bc-zqdj7\" (UID: \"1db3ec6b-3ab7-4795-9b57-33a023d94abc\") " pod="kube-system/coredns-668d6bf9bc-zqdj7" Jan 23 18:41:16.064799 kubelet[3981]: I0123 18:41:16.064773 3981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5td\" (UniqueName: \"kubernetes.io/projected/1db3ec6b-3ab7-4795-9b57-33a023d94abc-kube-api-access-gq5td\") pod \"coredns-668d6bf9bc-zqdj7\" (UID: \"1db3ec6b-3ab7-4795-9b57-33a023d94abc\") " pod="kube-system/coredns-668d6bf9bc-zqdj7" Jan 23 18:41:16.098240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a534babecdf28f0d917801e21815ca90f5e6959ea79564b2b37765e2cb4a667a-rootfs.mount: Deactivated successfully. Jan 23 18:41:16.263693 containerd[2553]: time="2026-01-23T18:41:16.263517140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kbqq5,Uid:8194249b-63ff-420a-952b-03c19de2176f,Namespace:kube-system,Attempt:0,}" Jan 23 18:41:16.267181 containerd[2553]: time="2026-01-23T18:41:16.267154902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zqdj7,Uid:1db3ec6b-3ab7-4795-9b57-33a023d94abc,Namespace:kube-system,Attempt:0,}" Jan 23 18:41:16.308004 systemd[1]: run-netns-cni\x2d2cb56c11\x2d3397\x2de2c2\x2d89f4\x2d816268aa8448.mount: Deactivated successfully. Jan 23 18:41:16.313849 containerd[2553]: time="2026-01-23T18:41:16.313801834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kbqq5,Uid:8194249b-63ff-420a-952b-03c19de2176f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e78a89437bcb00c36284356647a44365c6096ecb7411cf56895d47cfc6475b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 18:41:16.314033 kubelet[3981]: E0123 18:41:16.314006 3981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e78a89437bcb00c36284356647a44365c6096ecb7411cf56895d47cfc6475b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 18:41:16.314077 kubelet[3981]: E0123 18:41:16.314058 3981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e78a89437bcb00c36284356647a44365c6096ecb7411cf56895d47cfc6475b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-kbqq5" Jan 23 18:41:16.314100 kubelet[3981]: E0123 18:41:16.314087 3981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92e78a89437bcb00c36284356647a44365c6096ecb7411cf56895d47cfc6475b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-kbqq5" Jan 23 18:41:16.314187 kubelet[3981]: E0123 18:41:16.314126 3981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"coredns-668d6bf9bc-kbqq5_kube-system(8194249b-63ff-420a-952b-03c19de2176f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kbqq5_kube-system(8194249b-63ff-420a-952b-03c19de2176f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92e78a89437bcb00c36284356647a44365c6096ecb7411cf56895d47cfc6475b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-kbqq5" podUID="8194249b-63ff-420a-952b-03c19de2176f" Jan 23 18:41:16.318431 containerd[2553]: time="2026-01-23T18:41:16.318396891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zqdj7,Uid:1db3ec6b-3ab7-4795-9b57-33a023d94abc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df390ee95b9f5046c11d44b274c762fd365d8b17cd5a61a6df3e1aea93f09f29\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 18:41:16.318561 kubelet[3981]: E0123 18:41:16.318533 3981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df390ee95b9f5046c11d44b274c762fd365d8b17cd5a61a6df3e1aea93f09f29\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 18:41:16.318600 kubelet[3981]: E0123 18:41:16.318574 3981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df390ee95b9f5046c11d44b274c762fd365d8b17cd5a61a6df3e1aea93f09f29\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-zqdj7" Jan 23 18:41:16.318600 kubelet[3981]: E0123 18:41:16.318593 3981 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df390ee95b9f5046c11d44b274c762fd365d8b17cd5a61a6df3e1aea93f09f29\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-zqdj7" Jan 23 18:41:16.318642 kubelet[3981]: E0123 18:41:16.318625 3981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zqdj7_kube-system(1db3ec6b-3ab7-4795-9b57-33a023d94abc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zqdj7_kube-system(1db3ec6b-3ab7-4795-9b57-33a023d94abc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df390ee95b9f5046c11d44b274c762fd365d8b17cd5a61a6df3e1aea93f09f29\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-zqdj7" podUID="1db3ec6b-3ab7-4795-9b57-33a023d94abc" Jan 23 18:41:17.097563 systemd[1]: run-netns-cni\x2db36e11d7\x2deda4\x2d35e7\x2d4be9\x2d41dd7214a4a6.mount: Deactivated successfully. 
Jan 23 18:41:17.157284 containerd[2553]: time="2026-01-23T18:41:17.156246810Z" level=info msg="CreateContainer within sandbox \"d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 23 18:41:17.179387 containerd[2553]: time="2026-01-23T18:41:17.179358439Z" level=info msg="Container 570e32c92f1611a0c8628e2e2691033d1cac40509750da55c21ad8db4217b649: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:41:17.190492 containerd[2553]: time="2026-01-23T18:41:17.190460987Z" level=info msg="CreateContainer within sandbox \"d8a983fde6ce1c2d23cdb0efa7e4c6bfe629e5743882eabe4446b832f4d0b65a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"570e32c92f1611a0c8628e2e2691033d1cac40509750da55c21ad8db4217b649\"" Jan 23 18:41:17.191097 containerd[2553]: time="2026-01-23T18:41:17.190833492Z" level=info msg="StartContainer for \"570e32c92f1611a0c8628e2e2691033d1cac40509750da55c21ad8db4217b649\"" Jan 23 18:41:17.191891 containerd[2553]: time="2026-01-23T18:41:17.191868648Z" level=info msg="connecting to shim 570e32c92f1611a0c8628e2e2691033d1cac40509750da55c21ad8db4217b649" address="unix:///run/containerd/s/e71414ee1ccc0b9da1ab59f127ed6e8bb0398bfb55204d4130532d30cd65209a" protocol=ttrpc version=3 Jan 23 18:41:17.213951 systemd[1]: Started cri-containerd-570e32c92f1611a0c8628e2e2691033d1cac40509750da55c21ad8db4217b649.scope - libcontainer container 570e32c92f1611a0c8628e2e2691033d1cac40509750da55c21ad8db4217b649. Jan 23 18:41:17.237619 containerd[2553]: time="2026-01-23T18:41:17.237600475Z" level=info msg="StartContainer for \"570e32c92f1611a0c8628e2e2691033d1cac40509750da55c21ad8db4217b649\" returns successfully" Jan 23 18:41:18.364218 systemd-networkd[2182]: flannel.1: Link UP Jan 23 18:41:18.364226 systemd-networkd[2182]: flannel.1: Gained carrier Jan 23 18:41:20.192918 systemd-networkd[2182]: flannel.1: Gained IPv6LL Jan 23 18:41:29.089595 containerd[2553]: time="2026-01-23T18:41:29.089505303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zqdj7,Uid:1db3ec6b-3ab7-4795-9b57-33a023d94abc,Namespace:kube-system,Attempt:0,}" Jan 23 18:41:29.118621 systemd-networkd[2182]: cni0: Link UP Jan 23 18:41:29.118627 systemd-networkd[2182]: cni0: Gained carrier Jan 23 18:41:29.120766 systemd-networkd[2182]: cni0: Lost carrier Jan 23 18:41:29.175603 systemd-networkd[2182]: veth2a139157: Link UP Jan 23 18:41:29.177825 kernel: cni0: port 1(veth2a139157) entered blocking state Jan 23 18:41:29.177907 kernel: cni0: port 1(veth2a139157) entered disabled state Jan 23 18:41:29.179115 kernel: veth2a139157: entered allmulticast mode Jan 23 18:41:29.179157 kernel: veth2a139157: entered promiscuous mode Jan 23 18:41:29.184139 kernel: cni0: port 1(veth2a139157) entered blocking state Jan 23 18:41:29.184188 kernel: cni0: port 1(veth2a139157) entered forwarding state Jan 23 18:41:29.184269 systemd-networkd[2182]: veth2a139157: Gained carrier Jan 23 18:41:29.184422 systemd-networkd[2182]: cni0: Gained carrier Jan 23 18:41:29.188781 containerd[2553]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, 
"mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Jan 23 18:41:29.188781 containerd[2553]: delegateAdd: netconf sent to delegate plugin: Jan 23 18:41:29.224608 containerd[2553]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-23T18:41:29.224571749Z" level=info msg="connecting to shim f14215a13262de49cc2d75181f943f19218676a6d41d9af789b12edc68d2d295" address="unix:///run/containerd/s/e838cb12a868f244f97c993cc1d9a6aec0e480db559e1f7f59bc936b8396564b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:41:29.250958 systemd[1]: Started cri-containerd-f14215a13262de49cc2d75181f943f19218676a6d41d9af789b12edc68d2d295.scope - libcontainer container f14215a13262de49cc2d75181f943f19218676a6d41d9af789b12edc68d2d295. Jan 23 18:41:29.285416 containerd[2553]: time="2026-01-23T18:41:29.285385686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zqdj7,Uid:1db3ec6b-3ab7-4795-9b57-33a023d94abc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f14215a13262de49cc2d75181f943f19218676a6d41d9af789b12edc68d2d295\"" Jan 23 18:41:29.287862 containerd[2553]: time="2026-01-23T18:41:29.287262032Z" level=info msg="CreateContainer within sandbox \"f14215a13262de49cc2d75181f943f19218676a6d41d9af789b12edc68d2d295\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:41:29.305929 containerd[2553]: time="2026-01-23T18:41:29.305905447Z" level=info msg="Container 8435ee84158d0fc80a6c028559c505d8a899db554e1e4361d97a5b4b97da01d6: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:41:29.318497 containerd[2553]: time="2026-01-23T18:41:29.318473952Z" level=info msg="CreateContainer within sandbox \"f14215a13262de49cc2d75181f943f19218676a6d41d9af789b12edc68d2d295\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8435ee84158d0fc80a6c028559c505d8a899db554e1e4361d97a5b4b97da01d6\"" Jan 23 18:41:29.318901 containerd[2553]: time="2026-01-23T18:41:29.318853185Z" level=info msg="StartContainer for \"8435ee84158d0fc80a6c028559c505d8a899db554e1e4361d97a5b4b97da01d6\"" Jan 23 18:41:29.319891 containerd[2553]: time="2026-01-23T18:41:29.319849660Z" level=info msg="connecting to shim 8435ee84158d0fc80a6c028559c505d8a899db554e1e4361d97a5b4b97da01d6" address="unix:///run/containerd/s/e838cb12a868f244f97c993cc1d9a6aec0e480db559e1f7f59bc936b8396564b" protocol=ttrpc version=3 Jan 23 18:41:29.331950 systemd[1]: Started cri-containerd-8435ee84158d0fc80a6c028559c505d8a899db554e1e4361d97a5b4b97da01d6.scope - libcontainer container 8435ee84158d0fc80a6c028559c505d8a899db554e1e4361d97a5b4b97da01d6. 
Jan 23 18:41:29.353348 containerd[2553]: time="2026-01-23T18:41:29.353288264Z" level=info msg="StartContainer for \"8435ee84158d0fc80a6c028559c505d8a899db554e1e4361d97a5b4b97da01d6\" returns successfully" Jan 23 18:41:30.088962 containerd[2553]: time="2026-01-23T18:41:30.088877878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kbqq5,Uid:8194249b-63ff-420a-952b-03c19de2176f,Namespace:kube-system,Attempt:0,}" Jan 23 18:41:30.119252 systemd-networkd[2182]: vethd4dff41a: Link UP Jan 23 18:41:30.122617 kernel: cni0: port 2(vethd4dff41a) entered blocking state Jan 23 18:41:30.122687 kernel: cni0: port 2(vethd4dff41a) entered disabled state Jan 23 18:41:30.123776 kernel: vethd4dff41a: entered allmulticast mode Jan 23 18:41:30.126150 kernel: vethd4dff41a: entered promiscuous mode Jan 23 18:41:30.131302 kernel: cni0: port 2(vethd4dff41a) entered blocking state Jan 23 18:41:30.131358 kernel: cni0: port 2(vethd4dff41a) entered forwarding state Jan 23 18:41:30.131473 systemd-networkd[2182]: vethd4dff41a: Gained carrier Jan 23 18:41:30.132845 containerd[2553]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jan 23 18:41:30.132845 containerd[2553]: delegateAdd: netconf sent to delegate plugin: Jan 23 18:41:30.171317 containerd[2553]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-23T18:41:30.170883881Z" level=info msg="connecting to shim abd6fd97a802b4e25a76b724f2c5779c88049723591e14af7f3c497227732db2" address="unix:///run/containerd/s/b2f5fa2fd6f11bbd1bf984e091c062116d42b569dbb3722e4231750bb354fa3a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:41:30.194568 kubelet[3981]: I0123 18:41:30.194512 3981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zqdj7" podStartSLOduration=21.194497638 podStartE2EDuration="21.194497638s" podCreationTimestamp="2026-01-23 18:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:41:30.193750467 +0000 UTC m=+26.177628657" watchObservedRunningTime="2026-01-23 18:41:30.194497638 +0000 UTC m=+26.178375833" Jan 23 18:41:30.194847 kubelet[3981]: I0123 18:41:30.194681 3981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-q8vwh" podStartSLOduration=15.529692258 podStartE2EDuration="22.194675237s" podCreationTimestamp="2026-01-23 18:41:08 +0000 UTC" firstStartedPulling="2026-01-23 18:41:09.160576396 +0000 UTC m=+5.144454581" lastFinishedPulling="2026-01-23 18:41:15.825559371 +0000 UTC m=+11.809437560" observedRunningTime="2026-01-23 18:41:18.169129441 +0000 UTC m=+14.153007634" watchObservedRunningTime="2026-01-23 18:41:30.194675237 +0000 UTC m=+26.178553434" Jan 23 18:41:30.198103 systemd[1]: Started cri-containerd-abd6fd97a802b4e25a76b724f2c5779c88049723591e14af7f3c497227732db2.scope - 
libcontainer container abd6fd97a802b4e25a76b724f2c5779c88049723591e14af7f3c497227732db2. Jan 23 18:41:30.245923 containerd[2553]: time="2026-01-23T18:41:30.245894714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kbqq5,Uid:8194249b-63ff-420a-952b-03c19de2176f,Namespace:kube-system,Attempt:0,} returns sandbox id \"abd6fd97a802b4e25a76b724f2c5779c88049723591e14af7f3c497227732db2\"" Jan 23 18:41:30.248693 containerd[2553]: time="2026-01-23T18:41:30.247861562Z" level=info msg="CreateContainer within sandbox \"abd6fd97a802b4e25a76b724f2c5779c88049723591e14af7f3c497227732db2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:41:30.269352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206191401.mount: Deactivated successfully. Jan 23 18:41:30.269571 containerd[2553]: time="2026-01-23T18:41:30.269335604Z" level=info msg="Container 070ad3e94784a9e8ffc33086044640bff19160c14661e5b36a9c335bbe8cd8b8: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:41:30.283347 containerd[2553]: time="2026-01-23T18:41:30.283326087Z" level=info msg="CreateContainer within sandbox \"abd6fd97a802b4e25a76b724f2c5779c88049723591e14af7f3c497227732db2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"070ad3e94784a9e8ffc33086044640bff19160c14661e5b36a9c335bbe8cd8b8\"" Jan 23 18:41:30.283772 containerd[2553]: time="2026-01-23T18:41:30.283704267Z" level=info msg="StartContainer for \"070ad3e94784a9e8ffc33086044640bff19160c14661e5b36a9c335bbe8cd8b8\"" Jan 23 18:41:30.284563 containerd[2553]: time="2026-01-23T18:41:30.284540045Z" level=info msg="connecting to shim 070ad3e94784a9e8ffc33086044640bff19160c14661e5b36a9c335bbe8cd8b8" address="unix:///run/containerd/s/b2f5fa2fd6f11bbd1bf984e091c062116d42b569dbb3722e4231750bb354fa3a" protocol=ttrpc version=3 Jan 23 18:41:30.302915 systemd[1]: Started cri-containerd-070ad3e94784a9e8ffc33086044640bff19160c14661e5b36a9c335bbe8cd8b8.scope - libcontainer container 070ad3e94784a9e8ffc33086044640bff19160c14661e5b36a9c335bbe8cd8b8. 
Jan 23 18:41:30.325574 containerd[2553]: time="2026-01-23T18:41:30.325535329Z" level=info msg="StartContainer for \"070ad3e94784a9e8ffc33086044640bff19160c14661e5b36a9c335bbe8cd8b8\" returns successfully" Jan 23 18:41:30.496892 systemd-networkd[2182]: veth2a139157: Gained IPv6LL Jan 23 18:41:30.944989 systemd-networkd[2182]: cni0: Gained IPv6LL Jan 23 18:41:31.210213 kubelet[3981]: I0123 18:41:31.209995 3981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kbqq5" podStartSLOduration=22.209978499 podStartE2EDuration="22.209978499s" podCreationTimestamp="2026-01-23 18:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:41:31.198622863 +0000 UTC m=+27.182501059" watchObservedRunningTime="2026-01-23 18:41:31.209978499 +0000 UTC m=+27.193856692" Jan 23 18:41:32.032944 systemd-networkd[2182]: vethd4dff41a: Gained IPv6LL Jan 23 18:41:39.629699 waagent[2754]: 2026-01-23T18:41:39.629654Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 18:41:39.645087 waagent[2754]: 2026-01-23T18:41:39.645054Z INFO ExtHandler Jan 23 18:41:39.645184 waagent[2754]: 2026-01-23T18:41:39.645128Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 89bb12e0-f858-41c5-971a-ead9b26167cd eTag: 5824419164944158901 source: Fabric] Jan 23 18:41:39.645368 waagent[2754]: 2026-01-23T18:41:39.645345Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 18:41:39.645720 waagent[2754]: 2026-01-23T18:41:39.645690Z INFO ExtHandler Jan 23 18:41:39.645772 waagent[2754]: 2026-01-23T18:41:39.645732Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 18:41:39.696137 waagent[2754]: 2026-01-23T18:41:39.696113Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 18:41:39.790725 waagent[2754]: 2026-01-23T18:41:39.790665Z INFO ExtHandler Downloaded certificate {'thumbprint': '6923508F4B702833457652E1EF7A801F6CA91566', 'hasPrivateKey': True} Jan 23 18:41:39.791075 waagent[2754]: 2026-01-23T18:41:39.791047Z INFO ExtHandler Fetch goal state completed Jan 23 18:41:39.791320 waagent[2754]: 2026-01-23T18:41:39.791294Z INFO ExtHandler ExtHandler Jan 23 18:41:39.791361 waagent[2754]: 2026-01-23T18:41:39.791346Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: c87c2375-3cb3-42ea-9a13-f083b8199bbf correlation d180f5c2-399f-440b-8d17-12edc5c073ed created: 2026-01-23T18:41:32.255707Z] Jan 23 18:41:39.791548 waagent[2754]: 2026-01-23T18:41:39.791527Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 18:41:39.791956 waagent[2754]: 2026-01-23T18:41:39.791933Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 18:42:32.849671 systemd[1]: Started sshd@5-10.200.8.22:22-10.200.16.10:34752.service - OpenSSH per-connection server daemon (10.200.16.10:34752). Jan 23 18:42:33.409152 sshd[5125]: Accepted publickey for core from 10.200.16.10 port 34752 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js Jan 23 18:42:33.410418 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:33.414309 systemd-logind[2527]: New session 9 of user core. Jan 23 18:42:33.421950 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 18:42:33.772025 sshd[5129]: Connection closed by 10.200.16.10 port 34752 Jan 23 18:42:33.772499 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:33.776016 systemd[1]: sshd@5-10.200.8.22:22-10.200.16.10:34752.service: Deactivated successfully. Jan 23 18:42:33.777658 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:42:33.778857 systemd-logind[2527]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:42:33.779503 systemd-logind[2527]: Removed session 9. Jan 23 18:42:38.889498 systemd[1]: Started sshd@6-10.200.8.22:22-10.200.16.10:34756.service - OpenSSH per-connection server daemon (10.200.16.10:34756). Jan 23 18:42:39.444336 sshd[5184]: Accepted publickey for core from 10.200.16.10 port 34756 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js Jan 23 18:42:39.445618 sshd-session[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:39.450967 systemd-logind[2527]: New session 10 of user core. Jan 23 18:42:39.457954 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:42:39.800376 sshd[5190]: Connection closed by 10.200.16.10 port 34756 Jan 23 18:42:39.800947 sshd-session[5184]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:39.803756 systemd[1]: sshd@6-10.200.8.22:22-10.200.16.10:34756.service: Deactivated successfully. Jan 23 18:42:39.805491 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:42:39.806148 systemd-logind[2527]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:42:39.808062 systemd-logind[2527]: Removed session 10. Jan 23 18:42:44.926708 systemd[1]: Started sshd@7-10.200.8.22:22-10.200.16.10:37288.service - OpenSSH per-connection server daemon (10.200.16.10:37288). Jan 23 18:42:45.482578 sshd[5224]: Accepted publickey for core from 10.200.16.10 port 37288 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js Jan 23 18:42:45.483801 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:45.487300 systemd-logind[2527]: New session 11 of user core. Jan 23 18:42:45.496937 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:42:45.838882 sshd[5228]: Connection closed by 10.200.16.10 port 37288 Jan 23 18:42:45.839944 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:45.842606 systemd[1]: sshd@7-10.200.8.22:22-10.200.16.10:37288.service: Deactivated successfully. Jan 23 18:42:45.844323 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:42:45.846151 systemd-logind[2527]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:42:45.847061 systemd-logind[2527]: Removed session 11. Jan 23 18:42:45.953468 systemd[1]: Started sshd@8-10.200.8.22:22-10.200.16.10:37304.service - OpenSSH per-connection server daemon (10.200.16.10:37304). Jan 23 18:42:46.519019 sshd[5241]: Accepted publickey for core from 10.200.16.10 port 37304 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js Jan 23 18:42:46.520300 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:46.524668 systemd-logind[2527]: New session 12 of user core. Jan 23 18:42:46.530929 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 18:42:46.905532 sshd[5245]: Connection closed by 10.200.16.10 port 37304
Jan 23 18:42:46.907014 sshd-session[5241]: pam_unix(sshd:session): session closed for user core
Jan 23 18:42:46.909490 systemd[1]: sshd@8-10.200.8.22:22-10.200.16.10:37304.service: Deactivated successfully.
Jan 23 18:42:46.911038 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 18:42:46.912824 systemd-logind[2527]: Session 12 logged out. Waiting for processes to exit.
Jan 23 18:42:46.913447 systemd-logind[2527]: Removed session 12.
Jan 23 18:42:47.021517 systemd[1]: Started sshd@9-10.200.8.22:22-10.200.16.10:37306.service - OpenSSH per-connection server daemon (10.200.16.10:37306).
Jan 23 18:42:47.572124 sshd[5255]: Accepted publickey for core from 10.200.16.10 port 37306 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js
Jan 23 18:42:47.573077 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:42:47.577238 systemd-logind[2527]: New session 13 of user core.
Jan 23 18:42:47.585951 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 18:42:47.927407 sshd[5259]: Connection closed by 10.200.16.10 port 37306
Jan 23 18:42:47.928123 sshd-session[5255]: pam_unix(sshd:session): session closed for user core
Jan 23 18:42:47.931367 systemd[1]: sshd@9-10.200.8.22:22-10.200.16.10:37306.service: Deactivated successfully.
Jan 23 18:42:47.933088 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 18:42:47.933763 systemd-logind[2527]: Session 13 logged out. Waiting for processes to exit.
Jan 23 18:42:47.934909 systemd-logind[2527]: Removed session 13.
Jan 23 18:42:53.052673 systemd[1]: Started sshd@10-10.200.8.22:22-10.200.16.10:50634.service - OpenSSH per-connection server daemon (10.200.16.10:50634).
Jan 23 18:42:53.609619 sshd[5292]: Accepted publickey for core from 10.200.16.10 port 50634 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js
Jan 23 18:42:53.610257 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:42:53.613845 systemd-logind[2527]: New session 14 of user core.
Jan 23 18:42:53.619930 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 18:42:53.962078 sshd[5302]: Connection closed by 10.200.16.10 port 50634
Jan 23 18:42:53.962764 sshd-session[5292]: pam_unix(sshd:session): session closed for user core
Jan 23 18:42:53.966556 systemd[1]: sshd@10-10.200.8.22:22-10.200.16.10:50634.service: Deactivated successfully.
Jan 23 18:42:53.966800 systemd-logind[2527]: Session 14 logged out. Waiting for processes to exit.
Jan 23 18:42:53.968361 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 18:42:53.969587 systemd-logind[2527]: Removed session 14.
Jan 23 18:42:54.076341 systemd[1]: Started sshd@11-10.200.8.22:22-10.200.16.10:50646.service - OpenSSH per-connection server daemon (10.200.16.10:50646).
Jan 23 18:42:54.636093 sshd[5329]: Accepted publickey for core from 10.200.16.10 port 50646 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js
Jan 23 18:42:54.637336 sshd-session[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:42:54.641452 systemd-logind[2527]: New session 15 of user core.
Jan 23 18:42:54.646947 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 18:42:55.049219 sshd[5333]: Connection closed by 10.200.16.10 port 50646
Jan 23 18:42:55.049957 sshd-session[5329]: pam_unix(sshd:session): session closed for user core
Jan 23 18:42:55.053501 systemd[1]: sshd@11-10.200.8.22:22-10.200.16.10:50646.service: Deactivated successfully.
Jan 23 18:42:55.053728 systemd-logind[2527]: Session 15 logged out. Waiting for processes to exit.
Jan 23 18:42:55.055337 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 18:42:55.056559 systemd-logind[2527]: Removed session 15.
Jan 23 18:42:55.164340 systemd[1]: Started sshd@12-10.200.8.22:22-10.200.16.10:50656.service - OpenSSH per-connection server daemon (10.200.16.10:50656).
Jan 23 18:42:55.717069 sshd[5342]: Accepted publickey for core from 10.200.16.10 port 50656 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js
Jan 23 18:42:55.718390 sshd-session[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:42:55.722615 systemd-logind[2527]: New session 16 of user core.
Jan 23 18:42:55.730933 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 18:42:56.515440 sshd[5346]: Connection closed by 10.200.16.10 port 50656
Jan 23 18:42:56.515923 sshd-session[5342]: pam_unix(sshd:session): session closed for user core
Jan 23 18:42:56.519280 systemd[1]: sshd@12-10.200.8.22:22-10.200.16.10:50656.service: Deactivated successfully.
Jan 23 18:42:56.520769 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 18:42:56.521647 systemd-logind[2527]: Session 16 logged out. Waiting for processes to exit.
Jan 23 18:42:56.522800 systemd-logind[2527]: Removed session 16.
Jan 23 18:42:56.629753 systemd[1]: Started sshd@13-10.200.8.22:22-10.200.16.10:50672.service - OpenSSH per-connection server daemon (10.200.16.10:50672).
Jan 23 18:42:57.181632 sshd[5363]: Accepted publickey for core from 10.200.16.10 port 50672 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js
Jan 23 18:42:57.182881 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:42:57.186350 systemd-logind[2527]: New session 17 of user core.
Jan 23 18:42:57.190938 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 18:42:57.608642 sshd[5367]: Connection closed by 10.200.16.10 port 50672
Jan 23 18:42:57.609944 sshd-session[5363]: pam_unix(sshd:session): session closed for user core
Jan 23 18:42:57.613203 systemd-logind[2527]: Session 17 logged out. Waiting for processes to exit.
Jan 23 18:42:57.613454 systemd[1]: sshd@13-10.200.8.22:22-10.200.16.10:50672.service: Deactivated successfully.
Jan 23 18:42:57.615577 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 18:42:57.617066 systemd-logind[2527]: Removed session 17.
Jan 23 18:42:57.722122 systemd[1]: Started sshd@14-10.200.8.22:22-10.200.16.10:50686.service - OpenSSH per-connection server daemon (10.200.16.10:50686).
Jan 23 18:42:58.275654 sshd[5377]: Accepted publickey for core from 10.200.16.10 port 50686 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js
Jan 23 18:42:58.276422 sshd-session[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:42:58.280560 systemd-logind[2527]: New session 18 of user core.
Jan 23 18:42:58.288945 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 18:42:58.629753 sshd[5381]: Connection closed by 10.200.16.10 port 50686
Jan 23 18:42:58.630348 sshd-session[5377]: pam_unix(sshd:session): session closed for user core
Jan 23 18:42:58.633600 systemd[1]: sshd@14-10.200.8.22:22-10.200.16.10:50686.service: Deactivated successfully.
Jan 23 18:42:58.635441 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 18:42:58.636330 systemd-logind[2527]: Session 18 logged out. Waiting for processes to exit.
Jan 23 18:42:58.637815 systemd-logind[2527]: Removed session 18.
Jan 23 18:43:03.746497 systemd[1]: Started sshd@15-10.200.8.22:22-10.200.16.10:47018.service - OpenSSH per-connection server daemon (10.200.16.10:47018).
Jan 23 18:43:04.317816 sshd[5437]: Accepted publickey for core from 10.200.16.10 port 47018 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js
Jan 23 18:43:04.319823 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:43:04.324646 systemd-logind[2527]: New session 19 of user core.
Jan 23 18:43:04.328968 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 18:43:04.678758 sshd[5443]: Connection closed by 10.200.16.10 port 47018
Jan 23 18:43:04.680316 sshd-session[5437]: pam_unix(sshd:session): session closed for user core
Jan 23 18:43:04.682759 systemd[1]: sshd@15-10.200.8.22:22-10.200.16.10:47018.service: Deactivated successfully.
Jan 23 18:43:04.684360 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 18:43:04.686117 systemd-logind[2527]: Session 19 logged out. Waiting for processes to exit.
Jan 23 18:43:04.686809 systemd-logind[2527]: Removed session 19.
Jan 23 18:43:09.797584 systemd[1]: Started sshd@16-10.200.8.22:22-10.200.16.10:35176.service - OpenSSH per-connection server daemon (10.200.16.10:35176).
Jan 23 18:43:10.348231 sshd[5478]: Accepted publickey for core from 10.200.16.10 port 35176 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js
Jan 23 18:43:10.349486 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:43:10.353674 systemd-logind[2527]: New session 20 of user core.
Jan 23 18:43:10.360966 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 18:43:10.702607 sshd[5482]: Connection closed by 10.200.16.10 port 35176
Jan 23 18:43:10.703308 sshd-session[5478]: pam_unix(sshd:session): session closed for user core
Jan 23 18:43:10.706816 systemd[1]: sshd@16-10.200.8.22:22-10.200.16.10:35176.service: Deactivated successfully.
Jan 23 18:43:10.708627 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 18:43:10.710338 systemd-logind[2527]: Session 20 logged out. Waiting for processes to exit.
Jan 23 18:43:10.711189 systemd-logind[2527]: Removed session 20.
Jan 23 18:43:15.821743 systemd[1]: Started sshd@17-10.200.8.22:22-10.200.16.10:35182.service - OpenSSH per-connection server daemon (10.200.16.10:35182).
Jan 23 18:43:16.372153 sshd[5515]: Accepted publickey for core from 10.200.16.10 port 35182 ssh2: RSA SHA256:ArjGDwGYyq5YJ6AXPFU/NVpphwBChF7ne4ULEwmf8js
Jan 23 18:43:16.373388 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:43:16.376903 systemd-logind[2527]: New session 21 of user core.
Jan 23 18:43:16.381938 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 18:43:16.727001 sshd[5519]: Connection closed by 10.200.16.10 port 35182
Jan 23 18:43:16.727940 sshd-session[5515]: pam_unix(sshd:session): session closed for user core
Jan 23 18:43:16.730632 systemd[1]: sshd@17-10.200.8.22:22-10.200.16.10:35182.service: Deactivated successfully.
Jan 23 18:43:16.732552 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 18:43:16.734161 systemd-logind[2527]: Session 21 logged out. Waiting for processes to exit.
Jan 23 18:43:16.734935 systemd-logind[2527]: Removed session 21.