Jan 23 19:03:57.983477 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 19:03:57.983505 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:03:57.983517 kernel: BIOS-provided physical RAM map:
Jan 23 19:03:57.983524 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 19:03:57.983530 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 23 19:03:57.983537 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jan 23 19:03:57.983544 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jan 23 19:03:57.983551 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jan 23 19:03:57.983558 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jan 23 19:03:57.983566 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 23 19:03:57.983573 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 23 19:03:57.983579 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 23 19:03:57.983586 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 23 19:03:57.983593 kernel: printk: legacy bootconsole [earlyser0] enabled
Jan 23 19:03:57.983601 kernel: NX (Execute Disable) protection: active
Jan 23 19:03:57.983610 kernel: APIC: Static calls initialized
Jan 23 19:03:57.983617 kernel: efi: EFI v2.7 by Microsoft
Jan 23 19:03:57.983624 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9ab698 RNG=0x3ffd2018
Jan 23 19:03:57.983631 kernel: random: crng init done
Jan 23 19:03:57.983638 kernel: secureboot: Secure boot disabled
Jan 23 19:03:57.983645 kernel: SMBIOS 3.1.0 present.
Jan 23 19:03:57.983653 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Jan 23 19:03:57.983660 kernel: DMI: Memory slots populated: 2/2
Jan 23 19:03:57.983667 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 23 19:03:57.983674 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jan 23 19:03:57.983681 kernel: Hyper-V: Nested features: 0x3e0101
Jan 23 19:03:57.983690 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 23 19:03:57.983697 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 23 19:03:57.983704 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 19:03:57.983711 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 19:03:57.983718 kernel: tsc: Detected 2299.999 MHz processor
Jan 23 19:03:57.983725 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 19:03:57.983733 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 19:03:57.983741 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jan 23 19:03:57.983749 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 19:03:57.985076 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 19:03:57.985090 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jan 23 19:03:57.985099 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jan 23 19:03:57.985106 kernel: Using GB pages for direct mapping
Jan 23 19:03:57.985115 kernel: ACPI: Early table checksum verification disabled
Jan 23 19:03:57.985127 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 23 19:03:57.985135 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 19:03:57.985146 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 19:03:57.985155 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 19:03:57.985163 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 23 19:03:57.985171 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 19:03:57.985180 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 19:03:57.985188 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 19:03:57.985196 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 19:03:57.985206 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 19:03:57.985214 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 19:03:57.985222 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 23 19:03:57.985231 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Jan 23 19:03:57.985239 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 23 19:03:57.985247 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 23 19:03:57.985255 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 23 19:03:57.985264 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 23 19:03:57.985272 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 23 19:03:57.985281 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jan 23 19:03:57.985290 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 23 19:03:57.985297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 19:03:57.985306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jan 23 19:03:57.985315 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jan 23 19:03:57.985324 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jan 23 19:03:57.985333 kernel: Zone ranges:
Jan 23 19:03:57.985341 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 19:03:57.985349 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 19:03:57.985358 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 19:03:57.985366 kernel: Device empty
Jan 23 19:03:57.985375 kernel: Movable zone start for each node
Jan 23 19:03:57.985383 kernel: Early memory node ranges
Jan 23 19:03:57.985392 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 19:03:57.985400 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jan 23 19:03:57.985408 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jan 23 19:03:57.985417 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 23 19:03:57.985425 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 19:03:57.985434 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 23 19:03:57.985442 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 19:03:57.985451 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 19:03:57.985459 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 23 19:03:57.985467 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jan 23 19:03:57.985475 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 23 19:03:57.985484 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 23 19:03:57.985491 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 19:03:57.985500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 19:03:57.985510 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 19:03:57.985518 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 23 19:03:57.985527 kernel: TSC deadline timer available
Jan 23 19:03:57.985535 kernel: CPU topo: Max. logical packages: 1
Jan 23 19:03:57.985542 kernel: CPU topo: Max. logical dies: 1
Jan 23 19:03:57.985550 kernel: CPU topo: Max. dies per package: 1
Jan 23 19:03:57.985558 kernel: CPU topo: Max. threads per core: 2
Jan 23 19:03:57.985566 kernel: CPU topo: Num. cores per package: 1
Jan 23 19:03:57.985574 kernel: CPU topo: Num. threads per package: 2
Jan 23 19:03:57.985582 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 19:03:57.985592 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 23 19:03:57.985600 kernel: Booting paravirtualized kernel on Hyper-V
Jan 23 19:03:57.985607 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 19:03:57.985615 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 19:03:57.985623 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 19:03:57.985632 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 19:03:57.985640 kernel: pcpu-alloc: [0] 0 1
Jan 23 19:03:57.985647 kernel: Hyper-V: PV spinlocks enabled
Jan 23 19:03:57.985655 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 19:03:57.985666 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:03:57.985674 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 19:03:57.985682 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 19:03:57.985690 kernel: Fallback order for Node 0: 0
Jan 23 19:03:57.985698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jan 23 19:03:57.985706 kernel: Policy zone: Normal
Jan 23 19:03:57.985714 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 19:03:57.985721 kernel: software IO TLB: area num 2.
Jan 23 19:03:57.985730 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 19:03:57.985738 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 19:03:57.985745 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 19:03:57.985775 kernel: Dynamic Preempt: voluntary
Jan 23 19:03:57.985783 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 19:03:57.985793 kernel: rcu: RCU event tracing is enabled.
Jan 23 19:03:57.985808 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 19:03:57.985818 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 19:03:57.985826 kernel: Rude variant of Tasks RCU enabled.
Jan 23 19:03:57.985834 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 19:03:57.985843 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 19:03:57.985852 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 19:03:57.985861 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 19:03:57.985870 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 19:03:57.985879 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 19:03:57.985887 kernel: Using NULL legacy PIC
Jan 23 19:03:57.985898 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 23 19:03:57.985907 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 19:03:57.985915 kernel: Console: colour dummy device 80x25
Jan 23 19:03:57.985924 kernel: printk: legacy console [tty1] enabled
Jan 23 19:03:57.985932 kernel: printk: legacy console [ttyS0] enabled
Jan 23 19:03:57.985941 kernel: printk: legacy bootconsole [earlyser0] disabled
Jan 23 19:03:57.985950 kernel: ACPI: Core revision 20240827
Jan 23 19:03:57.985958 kernel: Failed to register legacy timer interrupt
Jan 23 19:03:57.985967 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 19:03:57.985977 kernel: x2apic enabled
Jan 23 19:03:57.985985 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 19:03:57.985994 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 19:03:57.986002 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 19:03:57.986011 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jan 23 19:03:57.986020 kernel: Hyper-V: Using IPI hypercalls
Jan 23 19:03:57.986028 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 23 19:03:57.986037 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 23 19:03:57.986046 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 23 19:03:57.986057 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 23 19:03:57.986065 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 23 19:03:57.986074 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 23 19:03:57.986082 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Jan 23 19:03:57.986091 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Jan 23 19:03:57.986100 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 19:03:57.986108 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 19:03:57.986117 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 19:03:57.986125 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 19:03:57.986133 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 19:03:57.986143 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 19:03:57.986151 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 23 19:03:57.986160 kernel: RETBleed: Vulnerable
Jan 23 19:03:57.986168 kernel: Speculative Store Bypass: Vulnerable
Jan 23 19:03:57.986177 kernel: active return thunk: its_return_thunk
Jan 23 19:03:57.986185 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 19:03:57.986193 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 19:03:57.986201 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 19:03:57.986209 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 19:03:57.986218 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 19:03:57.986227 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 19:03:57.986235 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 19:03:57.986243 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jan 23 19:03:57.986251 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jan 23 19:03:57.986259 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jan 23 19:03:57.986267 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 19:03:57.986275 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 19:03:57.986283 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 19:03:57.986291 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 19:03:57.986299 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jan 23 19:03:57.986307 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jan 23 19:03:57.986315 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jan 23 19:03:57.986325 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jan 23 19:03:57.986333 kernel: Freeing SMP alternatives memory: 32K
Jan 23 19:03:57.986342 kernel: pid_max: default: 32768 minimum: 301
Jan 23 19:03:57.986350 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 19:03:57.986358 kernel: landlock: Up and running.
Jan 23 19:03:57.986366 kernel: SELinux: Initializing.
Jan 23 19:03:57.986374 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 19:03:57.986382 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 19:03:57.986390 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jan 23 19:03:57.986398 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jan 23 19:03:57.986407 kernel: signal: max sigframe size: 11952
Jan 23 19:03:57.986417 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 19:03:57.986425 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 19:03:57.986434 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 19:03:57.986442 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 19:03:57.986450 kernel: smp: Bringing up secondary CPUs ...
Jan 23 19:03:57.986458 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 19:03:57.986466 kernel: .... node #0, CPUs: #1
Jan 23 19:03:57.986475 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 19:03:57.986483 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 23 19:03:57.986494 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 308180K reserved, 0K cma-reserved)
Jan 23 19:03:57.986502 kernel: devtmpfs: initialized
Jan 23 19:03:57.986511 kernel: x86/mm: Memory block size: 128MB
Jan 23 19:03:57.986519 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 23 19:03:57.986527 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 19:03:57.986536 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 19:03:57.986544 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 19:03:57.986553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 19:03:57.986561 kernel: audit: initializing netlink subsys (disabled)
Jan 23 19:03:57.986571 kernel: audit: type=2000 audit(1769195034.102:1): state=initialized audit_enabled=0 res=1
Jan 23 19:03:57.986579 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 19:03:57.986588 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 19:03:57.986596 kernel: cpuidle: using governor menu
Jan 23 19:03:57.986604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 19:03:57.986612 kernel: dca service started, version 1.12.1
Jan 23 19:03:57.986621 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jan 23 19:03:57.986629 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jan 23 19:03:57.986639 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 19:03:57.986648 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 19:03:57.986656 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 19:03:57.986664 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 19:03:57.986672 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 19:03:57.986681 kernel: ACPI: Added _OSI(Module Device)
Jan 23 19:03:57.986689 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 19:03:57.986697 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 19:03:57.986706 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 19:03:57.986715 kernel: ACPI: Interpreter enabled
Jan 23 19:03:57.986724 kernel: ACPI: PM: (supports S0 S5)
Jan 23 19:03:57.986732 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 19:03:57.986741 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 19:03:57.986749 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 23 19:03:57.987947 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 23 19:03:57.988011 kernel: iommu: Default domain type: Translated
Jan 23 19:03:57.988073 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 19:03:57.988083 kernel: efivars: Registered efivars operations
Jan 23 19:03:57.988146 kernel: PCI: Using ACPI for IRQ routing
Jan 23 19:03:57.988158 kernel: PCI: System does not support PCI
Jan 23 19:03:57.988166 kernel: vgaarb: loaded
Jan 23 19:03:57.988175 kernel: clocksource: Switched to clocksource tsc-early
Jan 23 19:03:57.988185 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 19:03:57.988195 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 19:03:57.988205 kernel: pnp: PnP ACPI init
Jan 23 19:03:57.988214 kernel: pnp: PnP ACPI: found 3 devices
Jan 23 19:03:57.988224 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 19:03:57.988233 kernel: NET: Registered PF_INET protocol family
Jan 23 19:03:57.988245 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 19:03:57.988254 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 19:03:57.988263 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 19:03:57.988273 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 19:03:57.988282 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 23 19:03:57.988291 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 19:03:57.988299 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 19:03:57.988308 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 19:03:57.988318 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 19:03:57.988328 kernel: NET: Registered PF_XDP protocol family
Jan 23 19:03:57.988336 kernel: PCI: CLS 0 bytes, default 64
Jan 23 19:03:57.988345 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 19:03:57.988354 kernel: software IO TLB: mapped [mem 0x000000003a9ab000-0x000000003e9ab000] (64MB)
Jan 23 19:03:57.988364 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jan 23 19:03:57.988374 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jan 23 19:03:57.988385 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Jan 23 19:03:57.988394 kernel: clocksource: Switched to clocksource tsc
Jan 23 19:03:57.988406 kernel: Initialise system trusted keyrings
Jan 23 19:03:57.988415 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 23 19:03:57.988423 kernel: Key type asymmetric registered
Jan 23 19:03:57.988432 kernel: Asymmetric key parser 'x509' registered
Jan 23 19:03:57.988442 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 19:03:57.988451 kernel: io scheduler mq-deadline registered
Jan 23 19:03:57.988460 kernel: io scheduler kyber registered
Jan 23 19:03:57.988469 kernel: io scheduler bfq registered
Jan 23 19:03:57.988478 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 19:03:57.988490 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 19:03:57.988498 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 19:03:57.988508 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 23 19:03:57.988521 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 19:03:57.988532 kernel: i8042: PNP: No PS/2 controller found.
Jan 23 19:03:57.988687 kernel: rtc_cmos 00:02: registered as rtc0
Jan 23 19:03:57.988789 kernel: rtc_cmos 00:02: setting system clock to 2026-01-23T19:03:57 UTC (1769195037)
Jan 23 19:03:57.988866 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 23 19:03:57.988879 kernel: intel_pstate: Intel P-state driver initializing
Jan 23 19:03:57.988888 kernel: efifb: probing for efifb
Jan 23 19:03:57.988896 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 19:03:57.988905 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 19:03:57.988917 kernel: efifb: scrolling: redraw
Jan 23 19:03:57.988926 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 19:03:57.988934 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 19:03:57.988942 kernel: fb0: EFI VGA frame buffer device
Jan 23 19:03:57.988951 kernel: pstore: Using crash dump compression: deflate
Jan 23 19:03:57.988962 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 19:03:57.988970 kernel: NET: Registered PF_INET6 protocol family
Jan 23 19:03:57.988979 kernel: Segment Routing with IPv6
Jan 23 19:03:57.988988 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 19:03:57.988996 kernel: NET: Registered PF_PACKET protocol family
Jan 23 19:03:57.989005 kernel: Key type dns_resolver registered
Jan 23 19:03:57.989013 kernel: IPI shorthand broadcast: enabled
Jan 23 19:03:57.989022 kernel: sched_clock: Marking stable (2947005212, 85213774)->(3364207982, -331988996)
Jan 23 19:03:57.989030 kernel: registered taskstats version 1
Jan 23 19:03:57.989041 kernel: Loading compiled-in X.509 certificates
Jan 23 19:03:57.989050 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 19:03:57.989058 kernel: Demotion targets for Node 0: null
Jan 23 19:03:57.989067 kernel: Key type .fscrypt registered
Jan 23 19:03:57.989075 kernel: Key type fscrypt-provisioning registered
Jan 23 19:03:57.989084 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 19:03:57.989092 kernel: ima: Allocated hash algorithm: sha1
Jan 23 19:03:57.989101 kernel: ima: No architecture policies found
Jan 23 19:03:57.989109 kernel: clk: Disabling unused clocks
Jan 23 19:03:57.989119 kernel: Warning: unable to open an initial console.
Jan 23 19:03:57.989128 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 19:03:57.989136 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 19:03:57.989144 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 19:03:57.989153 kernel: Run /init as init process
Jan 23 19:03:57.989161 kernel: with arguments:
Jan 23 19:03:57.989169 kernel: /init
Jan 23 19:03:57.989177 kernel: with environment:
Jan 23 19:03:57.989185 kernel: HOME=/
Jan 23 19:03:57.989195 kernel: TERM=linux
Jan 23 19:03:57.989205 systemd[1]: Successfully made /usr/ read-only.
Jan 23 19:03:57.989217 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 19:03:57.989227 systemd[1]: Detected virtualization microsoft.
Jan 23 19:03:57.989235 systemd[1]: Detected architecture x86-64.
Jan 23 19:03:57.989244 systemd[1]: Running in initrd.
Jan 23 19:03:57.989252 systemd[1]: No hostname configured, using default hostname.
Jan 23 19:03:57.989263 systemd[1]: Hostname set to .
Jan 23 19:03:57.989271 systemd[1]: Initializing machine ID from random generator.
Jan 23 19:03:57.989280 systemd[1]: Queued start job for default target initrd.target.
Jan 23 19:03:57.989289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:03:57.989298 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:03:57.989308 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 19:03:57.989316 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 19:03:57.989325 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 19:03:57.989336 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 19:03:57.989346 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 19:03:57.989355 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 19:03:57.989364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:03:57.989373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:03:57.989382 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:03:57.989391 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 19:03:57.989402 systemd[1]: Reached target swap.target - Swaps.
Jan 23 19:03:57.989411 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:03:57.989420 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 19:03:57.989429 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 19:03:57.989437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 19:03:57.989446 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 19:03:57.989455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:03:57.989464 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:03:57.989473 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:03:57.989483 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 19:03:57.989492 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 19:03:57.989501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 19:03:57.989509 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 19:03:57.989518 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 19:03:57.989527 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 19:03:57.989536 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 19:03:57.989545 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 19:03:57.989563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:03:57.989573 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 19:03:57.989583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:03:57.989610 systemd-journald[186]: Collecting audit messages is disabled.
Jan 23 19:03:57.989633 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 19:03:57.989642 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 19:03:57.989654 systemd-journald[186]: Journal started
Jan 23 19:03:57.989677 systemd-journald[186]: Runtime Journal (/run/log/journal/8fd6f62723274d3ebf7986f2aedcf8a2) is 8M, max 158.6M, 150.6M free.
Jan 23 19:03:57.984713 systemd-modules-load[187]: Inserted module 'overlay'
Jan 23 19:03:57.999721 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 19:03:58.001066 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 19:03:58.018061 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:03:58.029022 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 19:03:58.021070 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 19:03:58.033912 kernel: Bridge firewalling registered
Jan 23 19:03:58.031498 systemd-modules-load[187]: Inserted module 'br_netfilter'
Jan 23 19:03:58.036122 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 19:03:58.036959 systemd-tmpfiles[200]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 19:03:58.043573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 19:03:58.049659 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 19:03:58.055101 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:03:58.060955 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 19:03:58.065244 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:03:58.072766 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 19:03:58.078299 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:03:58.089691 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 19:03:58.108513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:03:58.114853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 19:03:58.150450 systemd-resolved[265]: Positive Trust Anchors: Jan 23 19:03:58.152131 systemd-resolved[265]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 19:03:58.152170 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 19:03:58.172484 systemd-resolved[265]: Defaulting to hostname 'linux'. Jan 23 19:03:58.175224 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 19:03:58.179461 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:03:58.181692 kernel: SCSI subsystem initialized Jan 23 19:03:58.188769 kernel: Loading iSCSI transport class v2.0-870. Jan 23 19:03:58.197780 kernel: iscsi: registered transport (tcp) Jan 23 19:03:58.215034 kernel: iscsi: registered transport (qla4xxx) Jan 23 19:03:58.215075 kernel: QLogic iSCSI HBA Driver Jan 23 19:03:58.228148 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 19:03:58.237863 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:03:58.240408 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 19:03:58.272997 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 19:03:58.277624 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 19:03:58.322779 kernel: raid6: avx512x4 gen() 44330 MB/s Jan 23 19:03:58.340765 kernel: raid6: avx512x2 gen() 43989 MB/s Jan 23 19:03:58.357764 kernel: raid6: avx512x1 gen() 26595 MB/s Jan 23 19:03:58.375767 kernel: raid6: avx2x4 gen() 37749 MB/s Jan 23 19:03:58.392765 kernel: raid6: avx2x2 gen() 38824 MB/s Jan 23 19:03:58.410431 kernel: raid6: avx2x1 gen() 31174 MB/s Jan 23 19:03:58.410454 kernel: raid6: using algorithm avx512x4 gen() 44330 MB/s Jan 23 19:03:58.430071 kernel: raid6: .... xor() 7794 MB/s, rmw enabled Jan 23 19:03:58.430100 kernel: raid6: using avx512x2 recovery algorithm Jan 23 19:03:58.447773 kernel: xor: automatically using best checksumming function avx Jan 23 19:03:58.565772 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 19:03:58.570118 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 19:03:58.575955 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:03:58.598766 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jan 23 19:03:58.602897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:03:58.609973 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 19:03:58.633868 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Jan 23 19:03:58.652254 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 19:03:58.656568 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 19:03:58.686613 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:03:58.694336 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 23 19:03:58.735795 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 19:03:58.758286 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 19:03:58.764769 kernel: AES CTR mode by8 optimization enabled Jan 23 19:03:58.769908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 19:03:58.772338 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:03:58.777819 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:03:58.786793 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 19:03:58.786918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:03:58.798863 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 19:03:58.798894 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 19:03:58.802770 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 23 19:03:58.806951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 19:03:58.807260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:03:58.816143 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 19:03:58.821261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 19:03:58.826452 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 19:03:58.826471 kernel: PTP clock support registered Jan 23 19:03:58.833829 kernel: scsi host0: storvsc_host_t Jan 23 19:03:58.841639 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 19:03:58.843876 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 19:03:58.848455 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6c7d96 (unnamed net_device) (uninitialized): VF slot 1 added Jan 23 19:03:58.853853 kernel: hv_vmbus: registering driver hv_pci Jan 23 19:03:58.861788 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 23 19:03:58.866367 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 19:03:58.866417 kernel: hv_vmbus: registering driver hv_utils Jan 23 19:03:58.870793 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 19:03:58.870827 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 19:03:58.874446 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 19:03:58.392436 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 19:03:58.405863 systemd-journald[186]: Time jumped backwards, rotating. Jan 23 19:03:58.405916 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 19:03:58.406055 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 19:03:58.406064 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 19:03:58.406164 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 23 19:03:58.392412 systemd-resolved[265]: Clock change detected. Flushing caches. Jan 23 19:03:58.410965 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 19:03:58.418621 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 23 19:03:58.418818 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 19:03:58.420964 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 23 19:03:58.424772 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 19:03:58.424906 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 23 19:03:58.431100 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 23 19:03:58.446349 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 19:03:58.446515 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 23 19:03:58.449245 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 19:03:58.464765 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 23 19:03:58.464896 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 23 19:03:58.476794 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#288 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 19:03:58.610765 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 19:03:58.617765 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 19:03:58.969776 kernel: nvme nvme0: using unchecked data buffer Jan 23 19:03:59.144791 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jan 23 19:03:59.180243 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 23 19:03:59.214814 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. 
Jan 23 19:03:59.220815 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 23 19:03:59.222846 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 19:03:59.235181 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jan 23 19:03:59.236884 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 19:03:59.247001 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:03:59.250365 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 19:03:59.252850 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 19:03:59.259059 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 19:03:59.271066 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 19:03:59.273298 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 19:03:59.394845 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jan 23 19:03:59.395013 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jan 23 19:03:59.397491 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jan 23 19:03:59.399102 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 19:03:59.403859 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jan 23 19:03:59.407796 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jan 23 19:03:59.413109 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jan 23 19:03:59.413131 kernel: pci 7870:00:00.0: enabling Extended Tags Jan 23 19:03:59.429769 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 19:03:59.429925 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jan 23 
19:03:59.432980 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jan 23 19:03:59.438478 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jan 23 19:03:59.449771 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jan 23 19:03:59.453775 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6c7d96 eth0: VF registering: eth1 Jan 23 19:03:59.459784 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jan 23 19:03:59.471770 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jan 23 19:04:00.289770 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 19:04:00.290674 disk-uuid[657]: The operation has completed successfully. Jan 23 19:04:00.349450 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 19:04:00.349536 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 19:04:00.379084 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 19:04:00.399935 sh[694]: Success Jan 23 19:04:00.431029 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 19:04:00.431075 kernel: device-mapper: uevent: version 1.0.3 Jan 23 19:04:00.432597 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 19:04:00.440769 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 19:04:00.664647 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 19:04:00.670839 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 19:04:00.678662 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 23 19:04:00.693203 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (707) Jan 23 19:04:00.693246 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841 Jan 23 19:04:00.694539 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:04:01.015071 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 19:04:01.015157 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 19:04:01.016206 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 19:04:01.032271 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 19:04:01.035073 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 19:04:01.037818 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 19:04:01.040995 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 19:04:01.055870 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 19:04:01.075765 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (730) Jan 23 19:04:01.079391 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:04:01.079422 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:04:01.101017 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 19:04:01.101044 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 19:04:01.101051 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 19:04:01.105803 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:04:01.107187 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 19:04:01.113899 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 19:04:01.143736 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 19:04:01.146861 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 19:04:01.174186 systemd-networkd[876]: lo: Link UP Jan 23 19:04:01.174194 systemd-networkd[876]: lo: Gained carrier Jan 23 19:04:01.175132 systemd-networkd[876]: Enumeration completed Jan 23 19:04:01.186930 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 19:04:01.187113 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 19:04:01.187231 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6c7d96 eth0: Data path switched to VF: enP30832s1 Jan 23 19:04:01.175495 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:04:01.175498 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 19:04:01.175787 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 19:04:01.182217 systemd[1]: Reached target network.target - Network. Jan 23 19:04:01.186794 systemd-networkd[876]: enP30832s1: Link UP Jan 23 19:04:01.186863 systemd-networkd[876]: eth0: Link UP Jan 23 19:04:01.186960 systemd-networkd[876]: eth0: Gained carrier Jan 23 19:04:01.186971 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:04:01.193890 systemd-networkd[876]: enP30832s1: Gained carrier Jan 23 19:04:01.202784 systemd-networkd[876]: eth0: DHCPv4 address 10.200.4.21/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 23 19:04:02.189888 ignition[821]: Ignition 2.22.0 Jan 23 19:04:02.189900 ignition[821]: Stage: fetch-offline Jan 23 19:04:02.192280 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 19:04:02.190008 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:04:02.195449 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 19:04:02.190015 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 19:04:02.190098 ignition[821]: parsed url from cmdline: "" Jan 23 19:04:02.190101 ignition[821]: no config URL provided Jan 23 19:04:02.190111 ignition[821]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 19:04:02.190116 ignition[821]: no config at "/usr/lib/ignition/user.ign" Jan 23 19:04:02.190120 ignition[821]: failed to fetch config: resource requires networking Jan 23 19:04:02.190363 ignition[821]: Ignition finished successfully Jan 23 19:04:02.221489 ignition[885]: Ignition 2.22.0 Jan 23 19:04:02.221500 ignition[885]: Stage: fetch Jan 23 19:04:02.221680 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:04:02.221687 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 19:04:02.221785 ignition[885]: parsed url from cmdline: "" Jan 23 19:04:02.221788 ignition[885]: no config URL provided Jan 23 19:04:02.221792 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 19:04:02.221798 ignition[885]: no config at "/usr/lib/ignition/user.ign" Jan 23 19:04:02.221816 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 19:04:02.235873 systemd-networkd[876]: eth0: Gained IPv6LL Jan 23 19:04:02.295856 ignition[885]: GET result: OK Jan 23 19:04:02.295991 ignition[885]: config has been read from IMDS userdata Jan 23 19:04:02.296013 ignition[885]: parsing config with SHA512: 7e464ab6992db76a460196e84462fe36783fa33fb149fa242e565e8c46606cb6dbae7ac6f9b45990d910a436a054953ea77d798b082ccc56a802832e42d9b4dd Jan 23 19:04:02.300347 unknown[885]: fetched base config from "system" Jan 23 19:04:02.300351 unknown[885]: fetched base config from "system" Jan 23 19:04:02.300355 unknown[885]: fetched user config from "azure" Jan 23 19:04:02.303179 ignition[885]: fetch: fetch complete Jan 23 19:04:02.303184 ignition[885]: fetch: 
fetch passed Jan 23 19:04:02.303228 ignition[885]: Ignition finished successfully Jan 23 19:04:02.307248 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 19:04:02.309882 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 19:04:02.338589 ignition[891]: Ignition 2.22.0 Jan 23 19:04:02.338599 ignition[891]: Stage: kargs Jan 23 19:04:02.340975 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 19:04:02.338792 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:04:02.338800 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 19:04:02.339519 ignition[891]: kargs: kargs passed Jan 23 19:04:02.339551 ignition[891]: Ignition finished successfully Jan 23 19:04:02.350429 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 19:04:02.377968 ignition[897]: Ignition 2.22.0 Jan 23 19:04:02.377978 ignition[897]: Stage: disks Jan 23 19:04:02.378316 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:04:02.380034 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 19:04:02.378325 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 19:04:02.382593 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 19:04:02.379031 ignition[897]: disks: disks passed Jan 23 19:04:02.385278 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 19:04:02.379062 ignition[897]: Ignition finished successfully Jan 23 19:04:02.389795 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 19:04:02.393789 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 19:04:02.396811 systemd[1]: Reached target basic.target - Basic System. Jan 23 19:04:02.402624 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 23 19:04:02.468114 systemd-fsck[905]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jan 23 19:04:02.472703 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 19:04:02.480833 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 19:04:02.732762 kernel: EXT4-fs (nvme0n1p9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 19:04:02.733269 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 19:04:02.738164 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 19:04:02.755449 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 19:04:02.773829 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 19:04:02.777838 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 19:04:02.781066 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 19:04:02.781095 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 19:04:02.788765 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (914) Jan 23 19:04:02.796814 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 19:04:02.801638 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:04:02.801659 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:04:02.799347 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 19:04:02.810763 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 19:04:02.810795 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 19:04:02.812108 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 19:04:02.813588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 19:04:03.308947 coreos-metadata[916]: Jan 23 19:04:03.308 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 19:04:03.314511 coreos-metadata[916]: Jan 23 19:04:03.314 INFO Fetch successful Jan 23 19:04:03.316106 coreos-metadata[916]: Jan 23 19:04:03.314 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 19:04:03.324658 coreos-metadata[916]: Jan 23 19:04:03.324 INFO Fetch successful Jan 23 19:04:03.340185 coreos-metadata[916]: Jan 23 19:04:03.340 INFO wrote hostname ci-4459.2.3-a-b64689fca1 to /sysroot/etc/hostname Jan 23 19:04:03.343343 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 19:04:03.551946 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 19:04:03.602018 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Jan 23 19:04:03.606970 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 19:04:03.611496 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 19:04:04.746251 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 19:04:04.781867 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 19:04:04.786490 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 19:04:04.808293 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 23 19:04:04.810974 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:04:04.830731 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 19:04:04.838214 ignition[1034]: INFO : Ignition 2.22.0 Jan 23 19:04:04.838214 ignition[1034]: INFO : Stage: mount Jan 23 19:04:04.840935 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:04:04.840935 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 19:04:04.840935 ignition[1034]: INFO : mount: mount passed Jan 23 19:04:04.840935 ignition[1034]: INFO : Ignition finished successfully Jan 23 19:04:04.841973 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 19:04:04.853281 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 19:04:04.862684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 19:04:04.887768 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1045) Jan 23 19:04:04.887801 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:04:04.889893 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:04:04.895296 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 19:04:04.895326 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 19:04:04.897248 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 19:04:04.898585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 19:04:04.923189 ignition[1062]: INFO : Ignition 2.22.0 Jan 23 19:04:04.923189 ignition[1062]: INFO : Stage: files Jan 23 19:04:04.929784 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:04:04.929784 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 19:04:04.929784 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Jan 23 19:04:04.937866 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 19:04:04.937866 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 19:04:04.970943 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 19:04:04.974825 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 19:04:04.974825 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 19:04:04.973270 unknown[1062]: wrote ssh authorized keys file for user: core Jan 23 19:04:05.001478 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 19:04:05.004530 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 19:04:05.039781 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 19:04:05.119417 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 19:04:05.132380 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 19:04:05.132380 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 23 19:04:05.132380 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 19:04:05.132380 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 19:04:05.132380 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 19:04:05.132380 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 19:04:05.132380 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 19:04:05.132380 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 19:04:05.158796 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:04:05.158796 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:04:05.158796 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 19:04:05.158796 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 19:04:05.158796 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 19:04:05.158796 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 19:04:05.410060 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 19:04:05.736085 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 19:04:05.736085 ignition[1062]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 19:04:05.773382 ignition[1062]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 19:04:05.778843 ignition[1062]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 19:04:05.778843 ignition[1062]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 19:04:05.787553 ignition[1062]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 19:04:05.787553 ignition[1062]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 19:04:05.787553 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:04:05.787553 ignition[1062]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:04:05.787553 ignition[1062]: INFO : files: files passed Jan 23 19:04:05.787553 ignition[1062]: INFO : Ignition finished successfully Jan 23 19:04:05.783241 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 19:04:05.790588 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 19:04:05.802400 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 23 19:04:05.820828 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 19:04:05.820919 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 19:04:05.837716 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 19:04:05.837716 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 19:04:05.848815 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 19:04:05.841588 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 19:04:05.847465 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 19:04:05.853359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 19:04:05.899014 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 19:04:05.899105 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 19:04:05.902668 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 19:04:05.906849 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 19:04:05.908440 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 19:04:05.909846 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 19:04:05.925782 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 19:04:05.930420 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 19:04:05.943257 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:04:05.943411 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:04:05.943658 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 19:04:05.944851 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 19:04:05.944963 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 19:04:05.945459 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 19:04:05.945827 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 19:04:05.946157 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 19:04:05.946763 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 19:04:05.947049 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 19:04:05.947385 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 19:04:05.947768 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 19:04:05.947939 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 19:04:05.961745 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 19:04:05.963486 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 19:04:05.963638 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 19:04:05.963857 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 19:04:05.963974 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 19:04:05.988915 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:04:05.990509 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:04:05.995857 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 19:04:05.997264 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:04:05.999178 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 19:04:05.999307 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 19:04:06.000173 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 19:04:06.000286 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 19:04:06.000579 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 19:04:06.000676 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 19:04:06.001244 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 23 19:04:06.001337 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 19:04:06.003949 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 19:04:06.005846 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 19:04:06.006044 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 19:04:06.050159 ignition[1116]: INFO : Ignition 2.22.0
Jan 23 19:04:06.050159 ignition[1116]: INFO : Stage: umount
Jan 23 19:04:06.050159 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 19:04:06.050159 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 19:04:06.006173 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:04:06.062176 ignition[1116]: INFO : umount: umount passed
Jan 23 19:04:06.062176 ignition[1116]: INFO : Ignition finished successfully
Jan 23 19:04:06.006501 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 19:04:06.006603 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 19:04:06.017410 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 19:04:06.017479 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 19:04:06.056792 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 19:04:06.056881 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 19:04:06.065190 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 19:04:06.065673 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 19:04:06.065731 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 19:04:06.078849 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 19:04:06.078897 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 19:04:06.079304 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 19:04:06.079331 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 19:04:06.079619 systemd[1]: Stopped target network.target - Network.
Jan 23 19:04:06.079841 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 19:04:06.079870 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 19:04:06.080162 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 19:04:06.080415 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 19:04:06.087871 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:04:06.092682 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 19:04:06.093965 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 19:04:06.097911 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 19:04:06.097948 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 19:04:06.102840 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 19:04:06.102870 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 19:04:06.105526 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 19:04:06.105577 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 19:04:06.117745 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 19:04:06.118939 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 19:04:06.124613 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 19:04:06.132483 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 19:04:06.138122 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 19:04:06.138195 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 19:04:06.149364 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 19:04:06.149553 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 19:04:06.149640 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 19:04:06.154157 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 19:04:06.154791 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 19:04:06.158272 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 19:04:06.158299 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:04:06.162832 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 19:04:06.164950 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 19:04:06.165006 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 19:04:06.165246 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 19:04:06.165276 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:04:06.169279 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 19:04:06.169322 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:04:06.169872 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 19:04:06.169906 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:04:06.173908 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:04:06.191082 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 19:04:06.191126 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 19:04:06.191332 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 19:04:06.191420 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:04:06.196180 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 19:04:06.196239 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:04:06.201265 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 19:04:06.201299 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:04:06.201540 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 19:04:06.201576 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 19:04:06.201955 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 19:04:06.201987 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 19:04:06.202242 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 19:04:06.202270 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 19:04:06.205857 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 19:04:06.205887 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 19:04:06.205935 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:04:06.216052 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 19:04:06.216135 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:04:06.225734 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6c7d96 eth0: Data path switched from VF: enP30832s1
Jan 23 19:04:06.226459 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jan 23 19:04:06.256552 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 19:04:06.256599 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 19:04:06.261828 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 19:04:06.261875 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:04:06.266424 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:04:06.266468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:04:06.272215 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 19:04:06.272262 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 23 19:04:06.272290 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 19:04:06.272324 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 19:04:06.272624 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 19:04:06.272707 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 19:04:06.276083 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 19:04:06.276152 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 19:04:06.367136 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 19:04:06.367231 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 19:04:06.370548 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 19:04:06.374841 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 19:04:06.374892 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 19:04:06.379870 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 19:04:06.396119 systemd[1]: Switching root.
Jan 23 19:04:06.520969 systemd-journald[186]: Journal stopped
Jan 23 19:04:10.727263 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Jan 23 19:04:10.727294 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 19:04:10.727309 kernel: SELinux: policy capability open_perms=1
Jan 23 19:04:10.727318 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 19:04:10.727325 kernel: SELinux: policy capability always_check_network=0
Jan 23 19:04:10.727333 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 19:04:10.727344 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 19:04:10.727352 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 19:04:10.727362 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 19:04:10.727370 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 19:04:10.727379 kernel: audit: type=1403 audit(1769195047.648:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 19:04:10.735798 systemd[1]: Successfully loaded SELinux policy in 318.490ms.
Jan 23 19:04:10.735817 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.983ms.
Jan 23 19:04:10.735830 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 19:04:10.735846 systemd[1]: Detected virtualization microsoft.
Jan 23 19:04:10.735855 systemd[1]: Detected architecture x86-64.
Jan 23 19:04:10.735864 systemd[1]: Detected first boot.
Jan 23 19:04:10.735874 systemd[1]: Hostname set to .
Jan 23 19:04:10.735882 systemd[1]: Initializing machine ID from random generator.
Jan 23 19:04:10.735893 zram_generator::config[1159]: No configuration found.
Jan 23 19:04:10.735906 kernel: Guest personality initialized and is inactive
Jan 23 19:04:10.735916 kernel: VMCI host device registered (name=vmci, major=10, minor=259)
Jan 23 19:04:10.735924 kernel: Initialized host personality
Jan 23 19:04:10.735933 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 19:04:10.735942 systemd[1]: Populated /etc with preset unit settings.
Jan 23 19:04:10.735953 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 19:04:10.735962 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 19:04:10.735971 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 19:04:10.735982 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 19:04:10.735991 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 19:04:10.736002 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 19:04:10.736018 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 19:04:10.736028 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 19:04:10.736038 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 19:04:10.736047 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 19:04:10.736057 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 19:04:10.736069 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 19:04:10.736079 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:04:10.736089 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:04:10.736100 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 19:04:10.736112 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 19:04:10.736122 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 19:04:10.736131 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 19:04:10.736141 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 19:04:10.736150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:04:10.736159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:04:10.736168 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 19:04:10.736176 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 19:04:10.736184 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 19:04:10.736193 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 19:04:10.736204 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:04:10.736214 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 19:04:10.736224 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 19:04:10.736233 systemd[1]: Reached target swap.target - Swaps.
Jan 23 19:04:10.736243 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 19:04:10.736251 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 19:04:10.736263 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 19:04:10.736274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:04:10.736284 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:04:10.736294 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:04:10.736303 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 19:04:10.736312 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 19:04:10.736323 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 19:04:10.736335 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 19:04:10.736345 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:04:10.736355 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 19:04:10.736364 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 19:04:10.736391 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 19:04:10.736402 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 19:04:10.736411 systemd[1]: Reached target machines.target - Containers.
Jan 23 19:04:10.736421 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 19:04:10.737783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:04:10.737800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 19:04:10.737809 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 19:04:10.737818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 19:04:10.737828 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 19:04:10.737840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 19:04:10.737851 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 19:04:10.737867 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 19:04:10.737880 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 19:04:10.737897 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 19:04:10.737910 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 19:04:10.737923 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 19:04:10.737934 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 19:04:10.737946 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:04:10.737957 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 19:04:10.737968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 19:04:10.737980 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 19:04:10.737996 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 19:04:10.738007 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 19:04:10.738020 kernel: fuse: init (API version 7.41)
Jan 23 19:04:10.738032 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 19:04:10.738044 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 19:04:10.738056 systemd[1]: Stopped verity-setup.service.
Jan 23 19:04:10.738070 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:04:10.738115 systemd-journald[1242]: Collecting audit messages is disabled.
Jan 23 19:04:10.738151 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 19:04:10.738165 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 19:04:10.738176 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 19:04:10.738190 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 19:04:10.738201 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 19:04:10.738216 kernel: loop: module loaded
Jan 23 19:04:10.738227 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 19:04:10.738240 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 19:04:10.738254 systemd-journald[1242]: Journal started
Jan 23 19:04:10.738281 systemd-journald[1242]: Runtime Journal (/run/log/journal/9eca4a865d384c55a1586fb662e044f5) is 8M, max 158.6M, 150.6M free.
Jan 23 19:04:10.273055 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 19:04:10.281244 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 23 19:04:10.281566 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 19:04:10.745766 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 19:04:10.746743 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:04:10.750008 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 19:04:10.750235 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 19:04:10.753392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 19:04:10.753596 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 19:04:10.756606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 19:04:10.756933 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 19:04:10.760124 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 19:04:10.760345 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 19:04:10.763231 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 19:04:10.763378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 19:04:10.766036 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:04:10.769600 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:04:10.770766 kernel: ACPI: bus type drm_connector registered
Jan 23 19:04:10.773418 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 19:04:10.773574 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 19:04:10.776665 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 19:04:10.780188 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 19:04:10.790597 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 19:04:10.794956 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 19:04:10.800958 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 19:04:10.803442 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 19:04:10.803479 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 19:04:10.806644 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 19:04:10.813905 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 19:04:10.816942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:04:10.819860 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 19:04:10.822525 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 19:04:10.827033 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 19:04:10.828268 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 19:04:10.829938 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 19:04:10.833609 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:04:10.838046 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 19:04:10.842995 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 19:04:10.856974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:04:10.859006 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 19:04:10.863061 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 19:04:10.870356 systemd-journald[1242]: Time spent on flushing to /var/log/journal/9eca4a865d384c55a1586fb662e044f5 is 43.043ms for 994 entries.
Jan 23 19:04:10.870356 systemd-journald[1242]: System Journal (/var/log/journal/9eca4a865d384c55a1586fb662e044f5) is 11.8M, max 2.6G, 2.6G free.
Jan 23 19:04:10.934467 systemd-journald[1242]: Received client request to flush runtime journal.
Jan 23 19:04:10.934507 systemd-journald[1242]: /var/log/journal/9eca4a865d384c55a1586fb662e044f5/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jan 23 19:04:10.934531 systemd-journald[1242]: Rotating system journal.
Jan 23 19:04:10.934559 kernel: loop0: detected capacity change from 0 to 128560
Jan 23 19:04:10.877692 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 19:04:10.880932 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 19:04:10.886878 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 19:04:10.914283 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jan 23 19:04:10.914295 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jan 23 19:04:10.919454 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 19:04:10.924865 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 19:04:10.936914 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 19:04:10.942829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:04:10.976780 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 19:04:11.117594 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 19:04:11.120661 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 19:04:11.141066 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Jan 23 19:04:11.141085 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Jan 23 19:04:11.143319 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:04:11.273774 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 19:04:11.282931 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 19:04:11.300776 kernel: loop1: detected capacity change from 0 to 229808
Jan 23 19:04:11.357769 kernel: loop2: detected capacity change from 0 to 110984
Jan 23 19:04:11.546862 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 19:04:11.551038 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:04:11.580496 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Jan 23 19:04:11.686776 kernel: loop3: detected capacity change from 0 to 27936
Jan 23 19:04:11.746698 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:04:11.752885 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 19:04:11.798892 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 19:04:11.830877 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 19:04:11.930808 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 19:04:11.931052 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 19:04:11.937770 kernel: hv_vmbus: registering driver hv_balloon
Jan 23 19:04:11.941997 kernel: hv_vmbus: registering driver hyperv_fb
Jan 23 19:04:11.942166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#294 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 19:04:11.948832 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 23 19:04:11.949498 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 23 19:04:11.949518 kernel: Console: switching to colour dummy device 80x25
Jan 23 19:04:11.955815 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 19:04:11.958775 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 23 19:04:12.049769 kernel: loop4: detected capacity change from 0 to 128560
Jan 23 19:04:12.056043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:04:12.068214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:04:12.068389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:04:12.073343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:04:12.078768 kernel: loop5: detected capacity change from 0 to 229808
Jan 23 19:04:12.096165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:04:12.096641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:04:12.106876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:04:12.109629 kernel: loop6: detected capacity change from 0 to 110984
Jan 23 19:04:12.126770 kernel: loop7: detected capacity change from 0 to 27936
Jan 23 19:04:12.139920 (sd-merge)[1405]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 23 19:04:12.140302 (sd-merge)[1405]: Merged extensions into '/usr'.
Jan 23 19:04:12.141694 systemd-networkd[1340]: lo: Link UP
Jan 23 19:04:12.141907 systemd-networkd[1340]: lo: Gained carrier
Jan 23 19:04:12.146004 systemd-networkd[1340]: Enumeration completed
Jan 23 19:04:12.146386 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:04:12.146447 systemd-networkd[1340]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 19:04:12.147837 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 19:04:12.151883 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 19:04:12.153775 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jan 23 19:04:12.155990 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 19:04:12.158999 systemd[1]: Reload requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 19:04:12.159011 systemd[1]: Reloading...
Jan 23 19:04:12.176020 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jan 23 19:04:12.187782 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6c7d96 eth0: Data path switched to VF: enP30832s1
Jan 23 19:04:12.191082 systemd-networkd[1340]: enP30832s1: Link UP
Jan 23 19:04:12.191166 systemd-networkd[1340]: eth0: Link UP
Jan 23 19:04:12.191168 systemd-networkd[1340]: eth0: Gained carrier
Jan 23 19:04:12.191185 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:04:12.195551 systemd-networkd[1340]: enP30832s1: Gained carrier
Jan 23 19:04:12.203845 systemd-networkd[1340]: eth0: DHCPv4 address 10.200.4.21/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 23 19:04:12.247764 zram_generator::config[1446]: No configuration found.
Jan 23 19:04:12.366768 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 23 19:04:12.518790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jan 23 19:04:12.520977 systemd[1]: Reloading finished in 361 ms.
Jan 23 19:04:12.554672 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 19:04:12.556632 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:04:12.578976 systemd[1]: Starting ensure-sysext.service...
Jan 23 19:04:12.583870 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 19:04:12.587330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 19:04:12.602975 systemd[1]: Reload requested from client PID 1517 ('systemctl') (unit ensure-sysext.service)...
Jan 23 19:04:12.602990 systemd[1]: Reloading...
Jan 23 19:04:12.617609 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 19:04:12.631645 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 19:04:12.632233 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 19:04:12.633105 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 19:04:12.633836 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 19:04:12.634121 systemd-tmpfiles[1519]: ACLs are not supported, ignoring.
Jan 23 19:04:12.634205 systemd-tmpfiles[1519]: ACLs are not supported, ignoring.
Jan 23 19:04:12.638084 systemd-tmpfiles[1519]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 19:04:12.638867 systemd-tmpfiles[1519]: Skipping /boot
Jan 23 19:04:12.646904 systemd-tmpfiles[1519]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 19:04:12.646982 systemd-tmpfiles[1519]: Skipping /boot
Jan 23 19:04:12.671081 zram_generator::config[1553]: No configuration found.
Jan 23 19:04:12.844619 systemd[1]: Reloading finished in 241 ms.
Jan 23 19:04:12.855412 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 19:04:12.869033 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 19:04:12.873051 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:04:12.880974 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 19:04:12.886010 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 19:04:12.891295 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 19:04:12.897115 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 19:04:12.901029 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 19:04:12.910292 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:04:12.910541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:04:12.912188 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 19:04:12.915571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 19:04:12.919112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 19:04:12.921455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:04:12.921574 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:04:12.921661 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:04:12.929583 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:04:12.929735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:04:12.929895 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:04:12.929970 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:04:12.930047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:04:12.936977 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:04:12.937188 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:04:12.942857 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 19:04:12.947110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:04:12.947226 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:04:12.947383 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 19:04:12.950919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:04:12.952530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 19:04:12.956896 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 19:04:12.960159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 19:04:12.960428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 19:04:12.963716 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 19:04:12.964166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 19:04:12.967469 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 19:04:12.968154 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 19:04:12.978977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 19:04:12.979251 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 19:04:12.980200 systemd[1]: Finished ensure-sysext.service.
Jan 23 19:04:12.985163 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 19:04:13.046559 systemd-resolved[1618]: Positive Trust Anchors:
Jan 23 19:04:13.046817 systemd-resolved[1618]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 19:04:13.046856 systemd-resolved[1618]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 19:04:13.050871 systemd-resolved[1618]: Using system hostname 'ci-4459.2.3-a-b64689fca1'.
Jan 23 19:04:13.052270 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 19:04:13.053858 augenrules[1650]: No rules
Jan 23 19:04:13.054015 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 19:04:13.054175 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 19:04:13.056404 systemd[1]: Reached target network.target - Network.
Jan 23 19:04:13.058842 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:04:13.065494 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 19:04:13.425948 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 19:04:13.429981 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 19:04:14.203925 systemd-networkd[1340]: eth0: Gained IPv6LL
Jan 23 19:04:14.206384 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 19:04:14.208461 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 19:04:16.447563 ldconfig[1295]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 19:04:16.457927 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 19:04:16.460764 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 19:04:16.500709 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 19:04:16.502432 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 19:04:16.506884 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 19:04:16.508705 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 19:04:16.511808 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 19:04:16.513523 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 19:04:16.516847 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 19:04:16.519811 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 19:04:16.521508 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 19:04:16.521533 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:04:16.524796 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:04:16.543598 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 19:04:16.547670 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 19:04:16.550653 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 19:04:16.552692 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 19:04:16.554688 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 19:04:16.557469 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 19:04:16.559201 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 19:04:16.562402 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 19:04:16.565457 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 19:04:16.566802 systemd[1]: Reached target basic.target - Basic System.
Jan 23 19:04:16.569848 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 19:04:16.569870 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 19:04:16.571609 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 23 19:04:16.574530 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 19:04:16.589818 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 19:04:16.598851 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 19:04:16.602994 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 19:04:16.607028 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 19:04:16.611207 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 19:04:16.613368 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 19:04:16.614413 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 19:04:16.619893 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jan 23 19:04:16.620791 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 23 19:04:16.624899 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 23 19:04:16.627415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:04:16.630947 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 19:04:16.636735 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 19:04:16.640859 jq[1670]: false
Jan 23 19:04:16.644690 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 19:04:16.647818 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 19:04:16.653098 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 19:04:16.665160 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 19:04:16.666973 oslogin_cache_refresh[1673]: Refreshing passwd entry cache
Jan 23 19:04:16.669032 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Refreshing passwd entry cache
Jan 23 19:04:16.668037 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 19:04:16.668453 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 19:04:16.671047 KVP[1674]: KVP starting; pid is:1674
Jan 23 19:04:16.671609 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 19:04:16.674589 chronyd[1664]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 23 19:04:16.682898 kernel: hv_utils: KVP IC version 4.0
Jan 23 19:04:16.678911 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 19:04:16.683227 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 19:04:16.686629 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 19:04:16.686916 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 19:04:16.689817 KVP[1674]: KVP LIC Version: 3.1
Jan 23 19:04:16.692581 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Failure getting users, quitting
Jan 23 19:04:16.692581 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 19:04:16.692581 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Refreshing group entry cache
Jan 23 19:04:16.691973 oslogin_cache_refresh[1673]: Failure getting users, quitting
Jan 23 19:04:16.691989 oslogin_cache_refresh[1673]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 19:04:16.692049 oslogin_cache_refresh[1673]: Refreshing group entry cache
Jan 23 19:04:16.700087 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 19:04:16.700284 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 19:04:16.704617 chronyd[1664]: Timezone right/UTC failed leap second check, ignoring
Jan 23 19:04:16.704892 systemd[1]: Started chronyd.service - NTP client/server.
Jan 23 19:04:16.704825 chronyd[1664]: Loaded seccomp filter (level 2)
Jan 23 19:04:16.710248 jq[1686]: true
Jan 23 19:04:16.722880 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 19:04:16.723088 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 19:04:16.727010 (ntainerd)[1707]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 19:04:16.729686 jq[1708]: true
Jan 23 19:04:16.735065 extend-filesystems[1671]: Found /dev/nvme0n1p6
Jan 23 19:04:16.758563 extend-filesystems[1671]: Found /dev/nvme0n1p9
Jan 23 19:04:16.761926 extend-filesystems[1671]: Checking size of /dev/nvme0n1p9
Jan 23 19:04:16.788188 update_engine[1684]: I20260123 19:04:16.787521 1684 main.cc:92] Flatcar Update Engine starting
Jan 23 19:04:16.792437 tar[1692]: linux-amd64/LICENSE
Jan 23 19:04:16.793344 tar[1692]: linux-amd64/helm
Jan 23 19:04:16.806138 extend-filesystems[1671]: Old size kept for /dev/nvme0n1p9
Jan 23 19:04:16.808481 systemd-logind[1683]: New seat seat0.
Jan 23 19:04:16.810075 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 19:04:16.810802 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 19:04:16.811465 systemd-logind[1683]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 23 19:04:16.815709 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 19:04:16.819423 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 19:04:16.832398 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 19:04:16.855301 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Failure getting groups, quitting
Jan 23 19:04:16.855301 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 19:04:16.832277 dbus-daemon[1667]: [system] SELinux support is enabled
Jan 23 19:04:16.841194 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 19:04:16.840892 oslogin_cache_refresh[1673]: Failure getting groups, quitting
Jan 23 19:04:16.841234 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 19:04:16.840905 oslogin_cache_refresh[1673]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 19:04:16.845633 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 19:04:16.845651 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 19:04:16.849197 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 19:04:16.849411 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 19:04:16.861960 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 19:04:16.866767 update_engine[1684]: I20260123 19:04:16.866027 1684 update_check_scheduler.cc:74] Next update check in 4m22s
Jan 23 19:04:16.871026 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 19:04:16.891770 bash[1734]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 19:04:16.888673 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 19:04:16.892600 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 23 19:04:16.952231 coreos-metadata[1666]: Jan 23 19:04:16.951 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 19:04:16.956071 coreos-metadata[1666]: Jan 23 19:04:16.955 INFO Fetch successful
Jan 23 19:04:16.956071 coreos-metadata[1666]: Jan 23 19:04:16.955 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 23 19:04:16.959622 coreos-metadata[1666]: Jan 23 19:04:16.959 INFO Fetch successful
Jan 23 19:04:16.959622 coreos-metadata[1666]: Jan 23 19:04:16.959 INFO Fetching http://168.63.129.16/machine/edc7db22-f95f-4e8c-bdde-0411d91bddbc/98e09c20%2D4605%2D4c8d%2D95c4%2D0066141e9bda.%5Fci%2D4459.2.3%2Da%2Db64689fca1?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 23 19:04:16.961069 coreos-metadata[1666]: Jan 23 19:04:16.961 INFO Fetch successful
Jan 23 19:04:16.961159 coreos-metadata[1666]: Jan 23 19:04:16.961 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 23 19:04:16.971404 coreos-metadata[1666]: Jan 23 19:04:16.970 INFO Fetch successful
Jan 23 19:04:17.037055 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 19:04:17.041134 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 19:04:17.068083 sshd_keygen[1705]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 19:04:17.129703 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 19:04:17.135408 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 19:04:17.141986 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 23 19:04:17.162854 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 19:04:17.163130 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 19:04:17.169956 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 19:04:17.197851 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 23 19:04:17.200566 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 19:04:17.208636 locksmithd[1761]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 19:04:17.214688 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 19:04:17.219615 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 19:04:17.222207 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 19:04:17.458383 tar[1692]: linux-amd64/README.md
Jan 23 19:04:17.475956 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 19:04:17.845112 containerd[1707]: time="2026-01-23T19:04:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 19:04:17.847663 containerd[1707]: time="2026-01-23T19:04:17.846904347Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 19:04:17.856286 containerd[1707]: time="2026-01-23T19:04:17.856252961Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.67µs"
Jan 23 19:04:17.856387 containerd[1707]: time="2026-01-23T19:04:17.856368712Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 19:04:17.856451 containerd[1707]: time="2026-01-23T19:04:17.856441026Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 19:04:17.856600 containerd[1707]: time="2026-01-23T19:04:17.856590697Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 19:04:17.856639 containerd[1707]: time="2026-01-23T19:04:17.856632116Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 19:04:17.856688 containerd[1707]: time="2026-01-23T19:04:17.856680440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 19:04:17.856775 containerd[1707]: time="2026-01-23T19:04:17.856763161Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 19:04:17.856811 containerd[1707]: time="2026-01-23T19:04:17.856803545Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 19:04:17.857060 containerd[1707]: time="2026-01-23T19:04:17.857047662Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 19:04:17.857099 containerd[1707]: time="2026-01-23T19:04:17.857088418Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 19:04:17.857140 containerd[1707]: time="2026-01-23T19:04:17.857130604Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 19:04:17.857177 containerd[1707]: time="2026-01-23T19:04:17.857169001Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 19:04:17.857285 containerd[1707]: time="2026-01-23T19:04:17.857274387Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 19:04:17.857495 containerd[1707]: time="2026-01-23T19:04:17.857483029Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 19:04:17.857552 containerd[1707]: time="2026-01-23T19:04:17.857539687Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 19:04:17.857584 containerd[1707]: time="2026-01-23T19:04:17.857578417Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 19:04:17.857632 containerd[1707]: time="2026-01-23T19:04:17.857626170Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 19:04:17.857935 containerd[1707]: time="2026-01-23T19:04:17.857923379Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 19:04:17.858022 containerd[1707]: time="2026-01-23T19:04:17.858014944Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 19:04:17.870676 containerd[1707]: time="2026-01-23T19:04:17.870648094Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 19:04:17.870820 containerd[1707]: time="2026-01-23T19:04:17.870805734Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 19:04:17.871007 containerd[1707]: time="2026-01-23T19:04:17.870992210Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 19:04:17.871072 containerd[1707]: time="2026-01-23T19:04:17.871061317Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 19:04:17.871123 containerd[1707]: time="2026-01-23T19:04:17.871113678Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 19:04:17.871180 containerd[1707]: time="2026-01-23T19:04:17.871156399Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.871922307Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.871965746Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.871982447Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.871994625Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872005561Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872019174Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872126180Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872142330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872156439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872167476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872193187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872205071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872216405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872226438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 19:04:17.873769 containerd[1707]: time="2026-01-23T19:04:17.872238600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 19:04:17.874131 containerd[1707]: time="2026-01-23T19:04:17.872252402Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 19:04:17.874131 containerd[1707]: time="2026-01-23T19:04:17.872265412Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 19:04:17.874131 containerd[1707]: time="2026-01-23T19:04:17.872307251Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 19:04:17.874131 containerd[1707]: time="2026-01-23T19:04:17.872320502Z" level=info msg="Start snapshots syncer"
Jan 23 19:04:17.874131 containerd[1707]: time="2026-01-23T19:04:17.872344411Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 19:04:17.874205 containerd[1707]: time="2026-01-23T19:04:17.872603777Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 19:04:17.874205 containerd[1707]: time="2026-01-23T19:04:17.872655120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 19:04:17.874301 containerd[1707]: time="2026-01-23T19:04:17.872705454Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 19:04:17.874425 containerd[1707]: time="2026-01-23T19:04:17.874394263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 19:04:17.874467 containerd[1707]: time="2026-01-23T19:04:17.874460180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 19:04:17.874504 containerd[1707]: time="2026-01-23T19:04:17.874498977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 19:04:17.874533 containerd[1707]: time="2026-01-23T19:04:17.874527711Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 19:04:17.874586 containerd[1707]: time="2026-01-23T19:04:17.874577408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 19:04:17.874628 containerd[1707]: time="2026-01-23T19:04:17.874620664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 19:04:17.874679 containerd[1707]: time="2026-01-23T19:04:17.874671420Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 19:04:17.874756 containerd[1707]: time="2026-01-23T19:04:17.874730281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 19:04:17.874806 containerd[1707]: time="2026-01-23T19:04:17.874798554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 19:04:17.874856 containerd[1707]: time="2026-01-23T19:04:17.874848334Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 19:04:17.874923 containerd[1707]: time="2026-01-23T19:04:17.874904716Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:04:17.874968 containerd[1707]: time="2026-01-23T19:04:17.874958362Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:04:17.875022 containerd[1707]: time="2026-01-23T19:04:17.875014106Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:04:17.875053 containerd[1707]: time="2026-01-23T19:04:17.875046867Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:04:17.875175 containerd[1707]: time="2026-01-23T19:04:17.875167390Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 19:04:17.875205 containerd[1707]: time="2026-01-23T19:04:17.875199912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 19:04:17.875239 containerd[1707]: time="2026-01-23T19:04:17.875229177Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 19:04:17.875274 containerd[1707]: time="2026-01-23T19:04:17.875268811Z" level=info msg="runtime interface created" Jan 23 19:04:17.875351 containerd[1707]: time="2026-01-23T19:04:17.875341990Z" level=info msg="created NRI interface" Jan 23 19:04:17.875397 containerd[1707]: time="2026-01-23T19:04:17.875389742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 19:04:17.875962 containerd[1707]: time="2026-01-23T19:04:17.875935166Z" level=info msg="Connect containerd service" Jan 23 19:04:17.876013 containerd[1707]: time="2026-01-23T19:04:17.875982119Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 19:04:17.877307 
containerd[1707]: time="2026-01-23T19:04:17.877219379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:04:17.947781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:04:17.957305 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:04:18.387581 containerd[1707]: time="2026-01-23T19:04:18.387304547Z" level=info msg="Start subscribing containerd event" Jan 23 19:04:18.387581 containerd[1707]: time="2026-01-23T19:04:18.387370080Z" level=info msg="Start recovering state" Jan 23 19:04:18.387581 containerd[1707]: time="2026-01-23T19:04:18.387483483Z" level=info msg="Start event monitor" Jan 23 19:04:18.387581 containerd[1707]: time="2026-01-23T19:04:18.387495353Z" level=info msg="Start cni network conf syncer for default" Jan 23 19:04:18.387581 containerd[1707]: time="2026-01-23T19:04:18.387502712Z" level=info msg="Start streaming server" Jan 23 19:04:18.387581 containerd[1707]: time="2026-01-23T19:04:18.387511154Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 19:04:18.387581 containerd[1707]: time="2026-01-23T19:04:18.387529407Z" level=info msg="runtime interface starting up..." Jan 23 19:04:18.387581 containerd[1707]: time="2026-01-23T19:04:18.387535375Z" level=info msg="starting plugins..." Jan 23 19:04:18.387581 containerd[1707]: time="2026-01-23T19:04:18.387548119Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 19:04:18.388710 containerd[1707]: time="2026-01-23T19:04:18.388078301Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 19:04:18.388710 containerd[1707]: time="2026-01-23T19:04:18.388153407Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 19:04:18.388404 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 19:04:18.391185 containerd[1707]: time="2026-01-23T19:04:18.390543242Z" level=info msg="containerd successfully booted in 0.546090s" Jan 23 19:04:18.391391 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 19:04:18.395493 systemd[1]: Startup finished in 3.055s (kernel) + 10.130s (initrd) + 11.063s (userspace) = 24.249s. Jan 23 19:04:18.514780 kubelet[1823]: E0123 19:04:18.514730 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:04:18.516735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:04:18.516877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:04:18.517179 systemd[1]: kubelet.service: Consumed 896ms CPU time, 268.1M memory peak. Jan 23 19:04:18.645908 login[1804]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 19:04:18.647347 login[1805]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 19:04:18.652232 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 19:04:18.653954 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 19:04:18.660459 systemd-logind[1683]: New session 1 of user core. Jan 23 19:04:18.664145 systemd-logind[1683]: New session 2 of user core. Jan 23 19:04:18.685456 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 19:04:18.689942 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 23 19:04:18.721982 (systemd)[1847]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 19:04:18.724599 systemd-logind[1683]: New session c1 of user core. Jan 23 19:04:18.781767 waagent[1801]: 2026-01-23T19:04:18.780380Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 23 19:04:18.782961 waagent[1801]: 2026-01-23T19:04:18.782670Z INFO Daemon Daemon OS: flatcar 4459.2.3 Jan 23 19:04:18.784820 waagent[1801]: 2026-01-23T19:04:18.784777Z INFO Daemon Daemon Python: 3.11.13 Jan 23 19:04:18.787094 waagent[1801]: 2026-01-23T19:04:18.787050Z INFO Daemon Daemon Run daemon Jan 23 19:04:18.789142 waagent[1801]: 2026-01-23T19:04:18.789102Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.3' Jan 23 19:04:18.793298 waagent[1801]: 2026-01-23T19:04:18.793251Z INFO Daemon Daemon Using waagent for provisioning Jan 23 19:04:18.795475 waagent[1801]: 2026-01-23T19:04:18.795433Z INFO Daemon Daemon Activate resource disk Jan 23 19:04:18.798762 waagent[1801]: 2026-01-23T19:04:18.797558Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 19:04:18.802330 waagent[1801]: 2026-01-23T19:04:18.802285Z INFO Daemon Daemon Found device: None Jan 23 19:04:18.804162 waagent[1801]: 2026-01-23T19:04:18.804129Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 19:04:18.807542 waagent[1801]: 2026-01-23T19:04:18.807511Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 19:04:18.812216 waagent[1801]: 2026-01-23T19:04:18.812180Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 19:04:18.814586 waagent[1801]: 2026-01-23T19:04:18.814554Z INFO Daemon Daemon Running default provisioning handler Jan 23 19:04:18.823274 waagent[1801]: 2026-01-23T19:04:18.823230Z INFO Daemon Daemon 
Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 19:04:18.828637 waagent[1801]: 2026-01-23T19:04:18.828600Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 19:04:18.832766 waagent[1801]: 2026-01-23T19:04:18.832716Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 19:04:18.834901 waagent[1801]: 2026-01-23T19:04:18.834871Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 19:04:18.883840 systemd[1847]: Queued start job for default target default.target. Jan 23 19:04:18.895583 systemd[1847]: Created slice app.slice - User Application Slice. Jan 23 19:04:18.895621 systemd[1847]: Reached target paths.target - Paths. Jan 23 19:04:18.895655 systemd[1847]: Reached target timers.target - Timers. Jan 23 19:04:18.897307 waagent[1801]: 2026-01-23T19:04:18.897045Z INFO Daemon Daemon Successfully mounted dvd Jan 23 19:04:18.899845 systemd[1847]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 19:04:18.914838 systemd[1847]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 19:04:18.914973 systemd[1847]: Reached target sockets.target - Sockets. Jan 23 19:04:18.915064 systemd[1847]: Reached target basic.target - Basic System. Jan 23 19:04:18.915153 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 19:04:18.915199 systemd[1847]: Reached target default.target - Main User Target. Jan 23 19:04:18.915243 systemd[1847]: Startup finished in 184ms. Jan 23 19:04:18.924859 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 19:04:18.925599 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 19:04:18.944905 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 23 19:04:18.948783 waagent[1801]: 2026-01-23T19:04:18.946512Z INFO Daemon Daemon Detect protocol endpoint Jan 23 19:04:18.948783 waagent[1801]: 2026-01-23T19:04:18.946667Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 19:04:18.948783 waagent[1801]: 2026-01-23T19:04:18.946961Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 19:04:18.948783 waagent[1801]: 2026-01-23T19:04:18.947250Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 19:04:18.948783 waagent[1801]: 2026-01-23T19:04:18.947388Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 19:04:18.948783 waagent[1801]: 2026-01-23T19:04:18.947607Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 19:04:18.961394 waagent[1801]: 2026-01-23T19:04:18.961363Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 19:04:18.963677 waagent[1801]: 2026-01-23T19:04:18.961640Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 19:04:18.963677 waagent[1801]: 2026-01-23T19:04:18.961840Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 19:04:19.029805 waagent[1801]: 2026-01-23T19:04:19.029725Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 19:04:19.032995 waagent[1801]: 2026-01-23T19:04:19.029960Z INFO Daemon Daemon Forcing an update of the goal state. 
Jan 23 19:04:19.036176 waagent[1801]: 2026-01-23T19:04:19.034629Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 19:04:19.046452 waagent[1801]: 2026-01-23T19:04:19.046420Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181 Jan 23 19:04:19.051479 waagent[1801]: 2026-01-23T19:04:19.046951Z INFO Daemon Jan 23 19:04:19.051479 waagent[1801]: 2026-01-23T19:04:19.047181Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 80ef35cc-710b-458e-9330-824a3d595d28 eTag: 11148277630699865661 source: Fabric] Jan 23 19:04:19.051479 waagent[1801]: 2026-01-23T19:04:19.047439Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 19:04:19.051479 waagent[1801]: 2026-01-23T19:04:19.048076Z INFO Daemon Jan 23 19:04:19.051479 waagent[1801]: 2026-01-23T19:04:19.048248Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 19:04:19.056917 waagent[1801]: 2026-01-23T19:04:19.056891Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 19:04:19.124183 waagent[1801]: 2026-01-23T19:04:19.124139Z INFO Daemon Downloaded certificate {'thumbprint': '9E6E32E73AD65F72EF64401F8098896FA04CAB6F', 'hasPrivateKey': True} Jan 23 19:04:19.127954 waagent[1801]: 2026-01-23T19:04:19.124560Z INFO Daemon Fetch goal state completed Jan 23 19:04:19.131778 waagent[1801]: 2026-01-23T19:04:19.131720Z INFO Daemon Daemon Starting provisioning Jan 23 19:04:19.142036 waagent[1801]: 2026-01-23T19:04:19.131904Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 23 19:04:19.142036 waagent[1801]: 2026-01-23T19:04:19.132138Z INFO Daemon Daemon Set hostname [ci-4459.2.3-a-b64689fca1] Jan 23 19:04:19.142036 waagent[1801]: 2026-01-23T19:04:19.134015Z INFO Daemon Daemon Publish hostname [ci-4459.2.3-a-b64689fca1] Jan 23 19:04:19.142036 waagent[1801]: 2026-01-23T19:04:19.134424Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 19:04:19.142036 waagent[1801]: 2026-01-23T19:04:19.134965Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 19:04:19.142829 systemd-networkd[1340]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:04:19.142835 systemd-networkd[1340]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 19:04:19.142857 systemd-networkd[1340]: eth0: DHCP lease lost Jan 23 19:04:19.143781 waagent[1801]: 2026-01-23T19:04:19.143513Z INFO Daemon Daemon Create user account if not exists Jan 23 19:04:19.144876 waagent[1801]: 2026-01-23T19:04:19.144828Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 19:04:19.149512 waagent[1801]: 2026-01-23T19:04:19.146444Z INFO Daemon Daemon Configure sudoer Jan 23 19:04:19.150720 waagent[1801]: 2026-01-23T19:04:19.150676Z INFO Daemon Daemon Configure sshd Jan 23 19:04:19.154670 waagent[1801]: 2026-01-23T19:04:19.154631Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 19:04:19.160718 waagent[1801]: 2026-01-23T19:04:19.155017Z INFO Daemon Daemon Deploy ssh public key. 
Jan 23 19:04:19.160834 systemd-networkd[1340]: eth0: DHCPv4 address 10.200.4.21/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 23 19:04:20.237936 waagent[1801]: 2026-01-23T19:04:20.237876Z INFO Daemon Daemon Provisioning complete Jan 23 19:04:20.245886 waagent[1801]: 2026-01-23T19:04:20.245855Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 19:04:20.251884 waagent[1801]: 2026-01-23T19:04:20.246051Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 19:04:20.251884 waagent[1801]: 2026-01-23T19:04:20.246342Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 19:04:20.346919 waagent[1895]: 2026-01-23T19:04:20.346854Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 19:04:20.347187 waagent[1895]: 2026-01-23T19:04:20.346947Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.3 Jan 23 19:04:20.347187 waagent[1895]: 2026-01-23T19:04:20.346987Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 19:04:20.347187 waagent[1895]: 2026-01-23T19:04:20.347025Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 23 19:04:20.445111 waagent[1895]: 2026-01-23T19:04:20.445062Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 19:04:20.445257 waagent[1895]: 2026-01-23T19:04:20.445233Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 19:04:20.445308 waagent[1895]: 2026-01-23T19:04:20.445286Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 19:04:20.453337 waagent[1895]: 2026-01-23T19:04:20.453283Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 19:04:20.460653 waagent[1895]: 2026-01-23T19:04:20.460623Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181 Jan 23 
19:04:20.460992 waagent[1895]: 2026-01-23T19:04:20.460962Z INFO ExtHandler Jan 23 19:04:20.461034 waagent[1895]: 2026-01-23T19:04:20.461014Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 89421b65-c31e-4abb-b1be-1d9c41db0c29 eTag: 11148277630699865661 source: Fabric] Jan 23 19:04:20.461238 waagent[1895]: 2026-01-23T19:04:20.461211Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 19:04:20.461560 waagent[1895]: 2026-01-23T19:04:20.461535Z INFO ExtHandler Jan 23 19:04:20.461602 waagent[1895]: 2026-01-23T19:04:20.461574Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 19:04:20.464006 waagent[1895]: 2026-01-23T19:04:20.463982Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 19:04:20.519410 waagent[1895]: 2026-01-23T19:04:20.519335Z INFO ExtHandler Downloaded certificate {'thumbprint': '9E6E32E73AD65F72EF64401F8098896FA04CAB6F', 'hasPrivateKey': True} Jan 23 19:04:20.519720 waagent[1895]: 2026-01-23T19:04:20.519692Z INFO ExtHandler Fetch goal state completed Jan 23 19:04:20.531913 waagent[1895]: 2026-01-23T19:04:20.531871Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4-dev (Library: OpenSSL 3.4.4-dev ) Jan 23 19:04:20.535819 waagent[1895]: 2026-01-23T19:04:20.535719Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1895 Jan 23 19:04:20.535903 waagent[1895]: 2026-01-23T19:04:20.535876Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 19:04:20.536122 waagent[1895]: 2026-01-23T19:04:20.536096Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 19:04:20.537141 waagent[1895]: 2026-01-23T19:04:20.537107Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 19:04:20.537420 waagent[1895]: 
2026-01-23T19:04:20.537392Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 19:04:20.537519 waagent[1895]: 2026-01-23T19:04:20.537497Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 23 19:04:20.537920 waagent[1895]: 2026-01-23T19:04:20.537892Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 19:04:20.578964 waagent[1895]: 2026-01-23T19:04:20.578938Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 19:04:20.579091 waagent[1895]: 2026-01-23T19:04:20.579070Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 19:04:20.584547 waagent[1895]: 2026-01-23T19:04:20.584222Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 19:04:20.589282 systemd[1]: Reload requested from client PID 1910 ('systemctl') (unit waagent.service)... Jan 23 19:04:20.589295 systemd[1]: Reloading... Jan 23 19:04:20.677819 zram_generator::config[1955]: No configuration found. Jan 23 19:04:20.843404 systemd[1]: Reloading finished in 253 ms. Jan 23 19:04:20.857590 waagent[1895]: 2026-01-23T19:04:20.857272Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 19:04:20.857590 waagent[1895]: 2026-01-23T19:04:20.857409Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 19:04:21.099440 waagent[1895]: 2026-01-23T19:04:21.099334Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 23 19:04:21.099662 waagent[1895]: 2026-01-23T19:04:21.099635Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 23 19:04:21.100428 waagent[1895]: 2026-01-23T19:04:21.100391Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 19:04:21.100509 waagent[1895]: 2026-01-23T19:04:21.100427Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 19:04:21.100539 waagent[1895]: 2026-01-23T19:04:21.100514Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 19:04:21.100689 waagent[1895]: 2026-01-23T19:04:21.100667Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 19:04:21.101119 waagent[1895]: 2026-01-23T19:04:21.101090Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 19:04:21.101119 waagent[1895]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 19:04:21.101119 waagent[1895]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 19:04:21.101119 waagent[1895]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 19:04:21.101119 waagent[1895]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 19:04:21.101119 waagent[1895]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 19:04:21.101119 waagent[1895]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 19:04:21.101308 waagent[1895]: 2026-01-23T19:04:21.101163Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 19:04:21.101308 waagent[1895]: 2026-01-23T19:04:21.101216Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 19:04:21.101355 waagent[1895]: 2026-01-23T19:04:21.101319Z INFO EnvHandler ExtHandler Configure routes Jan 23 19:04:21.101381 waagent[1895]: 2026-01-23T19:04:21.101357Z INFO EnvHandler ExtHandler Gateway:None Jan 23 19:04:21.101404 waagent[1895]: 2026-01-23T19:04:21.101386Z INFO EnvHandler ExtHandler Routes:None 
Jan 23 19:04:21.101541 waagent[1895]: 2026-01-23T19:04:21.101522Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 19:04:21.101824 waagent[1895]: 2026-01-23T19:04:21.101720Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 19:04:21.101975 waagent[1895]: 2026-01-23T19:04:21.101951Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 19:04:21.102622 waagent[1895]: 2026-01-23T19:04:21.102596Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 19:04:21.102678 waagent[1895]: 2026-01-23T19:04:21.102653Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 23 19:04:21.102939 waagent[1895]: 2026-01-23T19:04:21.102884Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 19:04:21.119773 waagent[1895]: 2026-01-23T19:04:21.119059Z INFO ExtHandler ExtHandler Jan 23 19:04:21.119773 waagent[1895]: 2026-01-23T19:04:21.119133Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 47b529ef-d710-4413-bf36-4db7a3ece6e0 correlation 678242c9-a44e-4b84-a1a1-9a9aa3bab1bd created: 2026-01-23T19:03:28.014786Z] Jan 23 19:04:21.119773 waagent[1895]: 2026-01-23T19:04:21.119442Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 19:04:21.120132 waagent[1895]: 2026-01-23T19:04:21.120093Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 23 19:04:21.151941 waagent[1895]: 2026-01-23T19:04:21.151900Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 19:04:21.151941 waagent[1895]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 23 19:04:21.152236 waagent[1895]: 2026-01-23T19:04:21.152211Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 603B3E5E-E918-49BD-8D06-599B8724E468;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jan 23 19:04:21.165071 waagent[1895]: 2026-01-23T19:04:21.165031Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 23 19:04:21.165071 waagent[1895]: Executing ['ip', '-a', '-o', 'link']:
Jan 23 19:04:21.165071 waagent[1895]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 23 19:04:21.165071 waagent[1895]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:6c:7d:96 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jan 23 19:04:21.165071 waagent[1895]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:6c:7d:96 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jan 23 19:04:21.165071 waagent[1895]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 23 19:04:21.165071 waagent[1895]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 23 19:04:21.165071 waagent[1895]: 2: eth0 inet 10.200.4.21/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 23 19:04:21.165071 waagent[1895]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 23 19:04:21.165071 waagent[1895]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 23 19:04:21.165071 waagent[1895]: 2: eth0 inet6 fe80::7eed:8dff:fe6c:7d96/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 23 19:04:21.209303 waagent[1895]: 2026-01-23T19:04:21.209257Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jan 23 19:04:21.209303 waagent[1895]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 19:04:21.209303 waagent[1895]: pkts bytes target prot opt in out source destination
Jan 23 19:04:21.209303 waagent[1895]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 23 19:04:21.209303 waagent[1895]: pkts bytes target prot opt in out source destination
Jan 23 19:04:21.209303 waagent[1895]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 19:04:21.209303 waagent[1895]: pkts bytes target prot opt in out source destination
Jan 23 19:04:21.209303 waagent[1895]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 23 19:04:21.209303 waagent[1895]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 23 19:04:21.209303 waagent[1895]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 23 19:04:21.212160 waagent[1895]: 2026-01-23T19:04:21.212114Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 23 19:04:21.212160 waagent[1895]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 19:04:21.212160 waagent[1895]: pkts bytes target prot opt in out source destination
Jan 23 19:04:21.212160 waagent[1895]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 23 19:04:21.212160 waagent[1895]: pkts bytes target prot opt in out source destination
Jan 23 19:04:21.212160 waagent[1895]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes)
Jan 23 19:04:21.212160 waagent[1895]: pkts bytes target prot opt in out source destination
Jan 23 19:04:21.212160 waagent[1895]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 23 19:04:21.212160 waagent[1895]: 1 60 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 23 19:04:21.212160 waagent[1895]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 23 19:04:28.675228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 19:04:28.676637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:04:29.182852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:04:29.192053 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 19:04:29.223353 kubelet[2047]: E0123 19:04:29.223322 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 19:04:29.226317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 19:04:29.226447 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 19:04:29.226739 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.5M memory peak.
Jan 23 19:04:39.425212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 19:04:39.426505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:04:39.923904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:04:39.936972 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 19:04:39.969975 kubelet[2062]: E0123 19:04:39.969931 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 19:04:39.971649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 19:04:39.971802 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 19:04:39.972129 systemd[1]: kubelet.service: Consumed 122ms CPU time, 110.8M memory peak.
Jan 23 19:04:40.490774 chronyd[1664]: Selected source PHC0
Jan 23 19:04:44.111215 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 19:04:44.112487 systemd[1]: Started sshd@0-10.200.4.21:22-10.200.16.10:50270.service - OpenSSH per-connection server daemon (10.200.16.10:50270).
Jan 23 19:04:44.838328 sshd[2070]: Accepted publickey for core from 10.200.16.10 port 50270 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:04:44.839431 sshd-session[2070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:04:44.842957 systemd-logind[1683]: New session 3 of user core.
Jan 23 19:04:44.849902 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 19:04:45.381396 systemd[1]: Started sshd@1-10.200.4.21:22-10.200.16.10:50286.service - OpenSSH per-connection server daemon (10.200.16.10:50286).
Jan 23 19:04:45.998167 sshd[2076]: Accepted publickey for core from 10.200.16.10 port 50286 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:04:45.999252 sshd-session[2076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:04:46.003360 systemd-logind[1683]: New session 4 of user core.
Jan 23 19:04:46.005883 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 19:04:46.432337 sshd[2079]: Connection closed by 10.200.16.10 port 50286
Jan 23 19:04:46.432908 sshd-session[2076]: pam_unix(sshd:session): session closed for user core
Jan 23 19:04:46.436067 systemd[1]: sshd@1-10.200.4.21:22-10.200.16.10:50286.service: Deactivated successfully.
Jan 23 19:04:46.437589 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 19:04:46.438226 systemd-logind[1683]: Session 4 logged out. Waiting for processes to exit.
Jan 23 19:04:46.439355 systemd-logind[1683]: Removed session 4.
Jan 23 19:04:46.547057 systemd[1]: Started sshd@2-10.200.4.21:22-10.200.16.10:50300.service - OpenSSH per-connection server daemon (10.200.16.10:50300).
Jan 23 19:04:47.164435 sshd[2085]: Accepted publickey for core from 10.200.16.10 port 50300 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:04:47.164854 sshd-session[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:04:47.168809 systemd-logind[1683]: New session 5 of user core.
Jan 23 19:04:47.178885 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 19:04:47.596533 sshd[2088]: Connection closed by 10.200.16.10 port 50300
Jan 23 19:04:47.598026 sshd-session[2085]: pam_unix(sshd:session): session closed for user core
Jan 23 19:04:47.601092 systemd-logind[1683]: Session 5 logged out. Waiting for processes to exit.
Jan 23 19:04:47.601465 systemd[1]: sshd@2-10.200.4.21:22-10.200.16.10:50300.service: Deactivated successfully.
Jan 23 19:04:47.603113 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 19:04:47.604351 systemd-logind[1683]: Removed session 5.
Jan 23 19:04:47.709000 systemd[1]: Started sshd@3-10.200.4.21:22-10.200.16.10:50302.service - OpenSSH per-connection server daemon (10.200.16.10:50302).
Jan 23 19:04:48.331204 sshd[2094]: Accepted publickey for core from 10.200.16.10 port 50302 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:04:48.331572 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:04:48.335555 systemd-logind[1683]: New session 6 of user core.
Jan 23 19:04:48.341874 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 19:04:48.766530 sshd[2097]: Connection closed by 10.200.16.10 port 50302
Jan 23 19:04:48.767971 sshd-session[2094]: pam_unix(sshd:session): session closed for user core
Jan 23 19:04:48.770249 systemd[1]: sshd@3-10.200.4.21:22-10.200.16.10:50302.service: Deactivated successfully.
Jan 23 19:04:48.772038 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 19:04:48.773398 systemd-logind[1683]: Session 6 logged out. Waiting for processes to exit.
Jan 23 19:04:48.774422 systemd-logind[1683]: Removed session 6.
Jan 23 19:04:48.885054 systemd[1]: Started sshd@4-10.200.4.21:22-10.200.16.10:50316.service - OpenSSH per-connection server daemon (10.200.16.10:50316).
Jan 23 19:04:49.504789 sshd[2103]: Accepted publickey for core from 10.200.16.10 port 50316 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:04:49.505636 sshd-session[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:04:49.509998 systemd-logind[1683]: New session 7 of user core.
Jan 23 19:04:49.519895 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 19:04:50.175179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 19:04:50.176507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:04:50.720500 sudo[2107]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 19:04:50.720736 sudo[2107]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 19:04:50.963730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:04:50.973025 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 19:04:51.004842 kubelet[2120]: E0123 19:04:51.004805 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 19:04:51.006365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 19:04:51.006487 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 19:04:51.006735 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.7M memory peak.
Jan 23 19:04:53.967995 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 19:04:53.976989 (dockerd)[2139]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 19:04:55.701210 dockerd[2139]: time="2026-01-23T19:04:55.701152098Z" level=info msg="Starting up"
Jan 23 19:04:55.704379 dockerd[2139]: time="2026-01-23T19:04:55.704242455Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 19:04:55.713290 dockerd[2139]: time="2026-01-23T19:04:55.713261086Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 19:04:55.849845 dockerd[2139]: time="2026-01-23T19:04:55.849656401Z" level=info msg="Loading containers: start."
Jan 23 19:04:55.876776 kernel: Initializing XFRM netlink socket
Jan 23 19:04:56.195655 systemd-networkd[1340]: docker0: Link UP
Jan 23 19:04:56.208022 dockerd[2139]: time="2026-01-23T19:04:56.207988097Z" level=info msg="Loading containers: done."
Jan 23 19:04:56.220347 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1018153847-merged.mount: Deactivated successfully.
Jan 23 19:04:56.232584 dockerd[2139]: time="2026-01-23T19:04:56.232550059Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 19:04:56.232688 dockerd[2139]: time="2026-01-23T19:04:56.232620911Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 19:04:56.232722 dockerd[2139]: time="2026-01-23T19:04:56.232690107Z" level=info msg="Initializing buildkit"
Jan 23 19:04:56.272570 dockerd[2139]: time="2026-01-23T19:04:56.272540335Z" level=info msg="Completed buildkit initialization"
Jan 23 19:04:56.278399 dockerd[2139]: time="2026-01-23T19:04:56.278354968Z" level=info msg="Daemon has completed initialization"
Jan 23 19:04:56.278539 dockerd[2139]: time="2026-01-23T19:04:56.278505229Z" level=info msg="API listen on /run/docker.sock"
Jan 23 19:04:56.278708 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 19:04:57.345389 containerd[1707]: time="2026-01-23T19:04:57.345336901Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 23 19:04:58.216869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790445374.mount: Deactivated successfully.
Jan 23 19:04:59.466922 containerd[1707]: time="2026-01-23T19:04:59.466872999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:59.469156 containerd[1707]: time="2026-01-23T19:04:59.469120805Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114720"
Jan 23 19:04:59.472306 containerd[1707]: time="2026-01-23T19:04:59.472266757Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:59.475909 containerd[1707]: time="2026-01-23T19:04:59.475877528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:04:59.476637 containerd[1707]: time="2026-01-23T19:04:59.476609933Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.13122511s"
Jan 23 19:04:59.476675 containerd[1707]: time="2026-01-23T19:04:59.476639245Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 23 19:04:59.477205 containerd[1707]: time="2026-01-23T19:04:59.477182831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 23 19:05:00.082734 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jan 23 19:05:00.820508 containerd[1707]: time="2026-01-23T19:05:00.820460926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:00.822982 containerd[1707]: time="2026-01-23T19:05:00.822845964Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016789"
Jan 23 19:05:00.825605 containerd[1707]: time="2026-01-23T19:05:00.825583378Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:00.829788 containerd[1707]: time="2026-01-23T19:05:00.829587176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:00.830222 containerd[1707]: time="2026-01-23T19:05:00.830198238Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.352988704s"
Jan 23 19:05:00.830262 containerd[1707]: time="2026-01-23T19:05:00.830231472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 23 19:05:00.830647 containerd[1707]: time="2026-01-23T19:05:00.830629037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 23 19:05:01.175228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 23 19:05:01.176852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:05:01.607073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:05:01.614998 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 19:05:01.647811 kubelet[2415]: E0123 19:05:01.647780 2415 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 19:05:01.649303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 19:05:01.649433 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 19:05:01.649939 systemd[1]: kubelet.service: Consumed 126ms CPU time, 107.9M memory peak.
Jan 23 19:05:02.145974 update_engine[1684]: I20260123 19:05:02.145916 1684 update_attempter.cc:509] Updating boot flags...
Jan 23 19:05:02.559850 containerd[1707]: time="2026-01-23T19:05:02.559809157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:02.561931 containerd[1707]: time="2026-01-23T19:05:02.561803219Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158110"
Jan 23 19:05:02.567986 containerd[1707]: time="2026-01-23T19:05:02.567958212Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:02.572392 containerd[1707]: time="2026-01-23T19:05:02.572363483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:02.572992 containerd[1707]: time="2026-01-23T19:05:02.572968817Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.742312108s"
Jan 23 19:05:02.573043 containerd[1707]: time="2026-01-23T19:05:02.573000614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 23 19:05:02.573760 containerd[1707]: time="2026-01-23T19:05:02.573728636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 23 19:05:03.544017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620778576.mount: Deactivated successfully.
Jan 23 19:05:03.897816 containerd[1707]: time="2026-01-23T19:05:03.897715176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:03.900112 containerd[1707]: time="2026-01-23T19:05:03.900076745Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930104"
Jan 23 19:05:03.902947 containerd[1707]: time="2026-01-23T19:05:03.902909456Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:03.906315 containerd[1707]: time="2026-01-23T19:05:03.906278752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:03.906765 containerd[1707]: time="2026-01-23T19:05:03.906554279Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.332801189s"
Jan 23 19:05:03.906765 containerd[1707]: time="2026-01-23T19:05:03.906584942Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 23 19:05:03.907051 containerd[1707]: time="2026-01-23T19:05:03.907035264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 23 19:05:04.434088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578315927.mount: Deactivated successfully.
Jan 23 19:05:05.313671 containerd[1707]: time="2026-01-23T19:05:05.313624910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:05.316592 containerd[1707]: time="2026-01-23T19:05:05.316461322Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Jan 23 19:05:05.319634 containerd[1707]: time="2026-01-23T19:05:05.319609027Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:05.324020 containerd[1707]: time="2026-01-23T19:05:05.323992699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:05.324732 containerd[1707]: time="2026-01-23T19:05:05.324708409Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.417608075s"
Jan 23 19:05:05.324827 containerd[1707]: time="2026-01-23T19:05:05.324814467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 23 19:05:05.325494 containerd[1707]: time="2026-01-23T19:05:05.325470245Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 19:05:05.815243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113108704.mount: Deactivated successfully.
Jan 23 19:05:05.831155 containerd[1707]: time="2026-01-23T19:05:05.831118252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 19:05:05.833868 containerd[1707]: time="2026-01-23T19:05:05.833763729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 23 19:05:05.836736 containerd[1707]: time="2026-01-23T19:05:05.836713175Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 19:05:05.840333 containerd[1707]: time="2026-01-23T19:05:05.839829546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 19:05:05.840333 containerd[1707]: time="2026-01-23T19:05:05.840223540Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 514.729193ms"
Jan 23 19:05:05.840333 containerd[1707]: time="2026-01-23T19:05:05.840248639Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 23 19:05:05.840891 containerd[1707]: time="2026-01-23T19:05:05.840868100Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 23 19:05:06.311361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1926314638.mount: Deactivated successfully.
Jan 23 19:05:08.107345 containerd[1707]: time="2026-01-23T19:05:08.107299723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:08.109365 containerd[1707]: time="2026-01-23T19:05:08.109228248Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926235"
Jan 23 19:05:08.112072 containerd[1707]: time="2026-01-23T19:05:08.112046708Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:08.115664 containerd[1707]: time="2026-01-23T19:05:08.115632750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 19:05:08.116684 containerd[1707]: time="2026-01-23T19:05:08.116358594Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.275387682s"
Jan 23 19:05:08.116684 containerd[1707]: time="2026-01-23T19:05:08.116386363Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 23 19:05:10.941568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:05:10.942055 systemd[1]: kubelet.service: Consumed 126ms CPU time, 107.9M memory peak.
Jan 23 19:05:10.943903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:05:10.967559 systemd[1]: Reload requested from client PID 2609 ('systemctl') (unit session-7.scope)...
Jan 23 19:05:10.967570 systemd[1]: Reloading...
Jan 23 19:05:11.057778 zram_generator::config[2659]: No configuration found.
Jan 23 19:05:11.240728 systemd[1]: Reloading finished in 272 ms.
Jan 23 19:05:11.360430 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 19:05:11.360511 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 19:05:11.360792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:05:11.360850 systemd[1]: kubelet.service: Consumed 80ms CPU time, 83.2M memory peak.
Jan 23 19:05:11.362678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 19:05:11.835846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 19:05:11.839950 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 19:05:11.873769 kubelet[2723]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 19:05:11.873769 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 19:05:11.873769 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 19:05:11.874028 kubelet[2723]: I0123 19:05:11.873811 2723 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 19:05:12.191624 kubelet[2723]: I0123 19:05:12.191592 2723 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 23 19:05:12.191624 kubelet[2723]: I0123 19:05:12.191614 2723 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 19:05:12.191867 kubelet[2723]: I0123 19:05:12.191856 2723 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 19:05:12.231774 kubelet[2723]: E0123 19:05:12.230256 2723 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 19:05:12.231774 kubelet[2723]: I0123 19:05:12.231185 2723 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 19:05:12.238804 kubelet[2723]: I0123 19:05:12.238785 2723 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 19:05:12.241473 kubelet[2723]: I0123 19:05:12.241450 2723 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 19:05:12.241651 kubelet[2723]: I0123 19:05:12.241632 2723 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 19:05:12.241818 kubelet[2723]: I0123 19:05:12.241650 2723 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-a-b64689fca1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 19:05:12.241931 kubelet[2723]: I0123 19:05:12.241819 2723 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 19:05:12.241931 kubelet[2723]: I0123 19:05:12.241829 2723 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 19:05:12.242729 kubelet[2723]: I0123 19:05:12.242712 2723 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 19:05:12.244676 kubelet[2723]: I0123 19:05:12.244662 2723 kubelet.go:480] "Attempting to sync node with API server"
Jan 23 19:05:12.244743 kubelet[2723]: I0123 19:05:12.244679 2723 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 19:05:12.244743 kubelet[2723]: I0123 19:05:12.244700 2723 kubelet.go:386] "Adding apiserver pod source"
Jan 23 19:05:12.244743 kubelet[2723]: I0123 19:05:12.244714 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 19:05:12.254448 kubelet[2723]: E0123 19:05:12.254410 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 19:05:12.254526 kubelet[2723]: E0123 19:05:12.254489 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-a-b64689fca1&limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 19:05:12.255245 kubelet[2723]: I0123 19:05:12.254586 2723 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 19:05:12.255245 kubelet[2723]: I0123 19:05:12.255096 2723 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 19:05:12.255698 kubelet[2723]: W0123 19:05:12.255677 2723 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 19:05:12.257761 kubelet[2723]: I0123 19:05:12.257734 2723 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 19:05:12.257825 kubelet[2723]: I0123 19:05:12.257801 2723 server.go:1289] "Started kubelet"
Jan 23 19:05:12.258533 kubelet[2723]: I0123 19:05:12.258509 2723 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 19:05:12.258765 kubelet[2723]: I0123 19:05:12.258732 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 19:05:12.259479 kubelet[2723]: I0123 19:05:12.259463 2723 server.go:317] "Adding debug handlers to kubelet server"
Jan 23 19:05:12.262424 kubelet[2723]: I0123 19:05:12.262379 2723 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 19:05:12.262649 kubelet[2723]: I0123 19:05:12.262638 2723 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 19:05:12.263096 kubelet[2723]: I0123 19:05:12.263080 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 19:05:12.265822 kubelet[2723]: I0123 19:05:12.265799 2723 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 19:05:12.265993 kubelet[2723]: E0123 19:05:12.265977 2723 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-a-b64689fca1\" not found"
Jan 23 19:05:12.268336 kubelet[2723]: I0123 19:05:12.268184 2723 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 19:05:12.268336 kubelet[2723]: I0123 19:05:12.268241 2723 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 19:05:12.270407 kubelet[2723]: E0123 19:05:12.268462 2723 event.go:368]
"Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-a-b64689fca1.188d7194bc7b3f28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-a-b64689fca1,UID:ci-4459.2.3-a-b64689fca1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-a-b64689fca1,},FirstTimestamp:2026-01-23 19:05:12.25776516 +0000 UTC m=+0.414140134,LastTimestamp:2026-01-23 19:05:12.25776516 +0000 UTC m=+0.414140134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-a-b64689fca1,}" Jan 23 19:05:12.270510 kubelet[2723]: E0123 19:05:12.270473 2723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-a-b64689fca1?timeout=10s\": dial tcp 10.200.4.21:6443: connect: connection refused" interval="200ms" Jan 23 19:05:12.271018 kubelet[2723]: I0123 19:05:12.270997 2723 factory.go:223] Registration of the systemd container factory successfully Jan 23 19:05:12.271083 kubelet[2723]: I0123 19:05:12.271069 2723 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:05:12.272002 kubelet[2723]: E0123 19:05:12.271866 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 
23 19:05:12.272171 kubelet[2723]: I0123 19:05:12.272157 2723 factory.go:223] Registration of the containerd container factory successfully Jan 23 19:05:12.289305 kubelet[2723]: E0123 19:05:12.289224 2723 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:05:12.291030 kubelet[2723]: I0123 19:05:12.291019 2723 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:05:12.291030 kubelet[2723]: I0123 19:05:12.291029 2723 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:05:12.291121 kubelet[2723]: I0123 19:05:12.291043 2723 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:05:12.295494 kubelet[2723]: I0123 19:05:12.295479 2723 policy_none.go:49] "None policy: Start" Jan 23 19:05:12.295494 kubelet[2723]: I0123 19:05:12.295495 2723 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:05:12.295566 kubelet[2723]: I0123 19:05:12.295504 2723 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:05:12.301911 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 19:05:12.311329 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 19:05:12.313898 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 19:05:12.322274 kubelet[2723]: E0123 19:05:12.322259 2723 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 19:05:12.322408 kubelet[2723]: I0123 19:05:12.322397 2723 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:05:12.322452 kubelet[2723]: I0123 19:05:12.322411 2723 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:05:12.322911 kubelet[2723]: I0123 19:05:12.322683 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:05:12.324098 kubelet[2723]: E0123 19:05:12.323821 2723 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 19:05:12.324098 kubelet[2723]: E0123 19:05:12.323857 2723 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.3-a-b64689fca1\" not found" Jan 23 19:05:12.424155 kubelet[2723]: I0123 19:05:12.424139 2723 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.424426 kubelet[2723]: E0123 19:05:12.424409 2723 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.21:6443/api/v1/nodes\": dial tcp 10.200.4.21:6443: connect: connection refused" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.471194 kubelet[2723]: E0123 19:05:12.471115 2723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-a-b64689fca1?timeout=10s\": dial tcp 10.200.4.21:6443: connect: connection refused" interval="400ms" Jan 23 19:05:12.621631 kubelet[2723]: I0123 19:05:12.621583 2723 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 23 19:05:12.622766 kubelet[2723]: I0123 19:05:12.622623 2723 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 19:05:12.622766 kubelet[2723]: I0123 19:05:12.622642 2723 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 19:05:12.622766 kubelet[2723]: I0123 19:05:12.622662 2723 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 19:05:12.622766 kubelet[2723]: I0123 19:05:12.622669 2723 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 19:05:12.622766 kubelet[2723]: E0123 19:05:12.622703 2723 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 23 19:05:12.626345 kubelet[2723]: E0123 19:05:12.626324 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:05:12.626683 kubelet[2723]: I0123 19:05:12.626612 2723 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.626983 kubelet[2723]: E0123 19:05:12.626950 2723 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.21:6443/api/v1/nodes\": dial tcp 10.200.4.21:6443: connect: connection refused" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.735455 systemd[1]: Created slice kubepods-burstable-pod3e1f01561ea00c76812beed04126b907.slice - libcontainer container kubepods-burstable-pod3e1f01561ea00c76812beed04126b907.slice. 
Jan 23 19:05:12.746296 kubelet[2723]: E0123 19:05:12.746275 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-b64689fca1\" not found" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.749743 systemd[1]: Created slice kubepods-burstable-pod70960ef950626e324f4c1523ddd48680.slice - libcontainer container kubepods-burstable-pod70960ef950626e324f4c1523ddd48680.slice. Jan 23 19:05:12.753515 kubelet[2723]: E0123 19:05:12.753497 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-b64689fca1\" not found" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.755827 systemd[1]: Created slice kubepods-burstable-podbf0a1dfd9d26eebe4283bfb200b3c959.slice - libcontainer container kubepods-burstable-podbf0a1dfd9d26eebe4283bfb200b3c959.slice. Jan 23 19:05:12.757367 kubelet[2723]: E0123 19:05:12.757344 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-b64689fca1\" not found" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.772713 kubelet[2723]: I0123 19:05:12.772680 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf0a1dfd9d26eebe4283bfb200b3c959-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-a-b64689fca1\" (UID: \"bf0a1dfd9d26eebe4283bfb200b3c959\") " pod="kube-system/kube-scheduler-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.772779 kubelet[2723]: I0123 19:05:12.772732 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e1f01561ea00c76812beed04126b907-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-a-b64689fca1\" (UID: \"3e1f01561ea00c76812beed04126b907\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.772814 kubelet[2723]: I0123 
19:05:12.772781 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e1f01561ea00c76812beed04126b907-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-a-b64689fca1\" (UID: \"3e1f01561ea00c76812beed04126b907\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.772814 kubelet[2723]: I0123 19:05:12.772804 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e1f01561ea00c76812beed04126b907-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-a-b64689fca1\" (UID: \"3e1f01561ea00c76812beed04126b907\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.772863 kubelet[2723]: I0123 19:05:12.772829 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.772863 kubelet[2723]: I0123 19:05:12.772851 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.772908 kubelet[2723]: I0123 19:05:12.772877 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-ca-certs\") pod 
\"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.772908 kubelet[2723]: I0123 19:05:12.772895 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.772953 kubelet[2723]: I0123 19:05:12.772918 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:12.872351 kubelet[2723]: E0123 19:05:12.872325 2723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-a-b64689fca1?timeout=10s\": dial tcp 10.200.4.21:6443: connect: connection refused" interval="800ms" Jan 23 19:05:13.028490 kubelet[2723]: I0123 19:05:13.028404 2723 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:13.029166 kubelet[2723]: E0123 19:05:13.028690 2723 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.21:6443/api/v1/nodes\": dial tcp 10.200.4.21:6443: connect: connection refused" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:13.047431 containerd[1707]: time="2026-01-23T19:05:13.047396456Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-a-b64689fca1,Uid:3e1f01561ea00c76812beed04126b907,Namespace:kube-system,Attempt:0,}" Jan 23 19:05:13.054593 containerd[1707]: time="2026-01-23T19:05:13.054442589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-a-b64689fca1,Uid:70960ef950626e324f4c1523ddd48680,Namespace:kube-system,Attempt:0,}" Jan 23 19:05:13.058250 containerd[1707]: time="2026-01-23T19:05:13.058216792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-a-b64689fca1,Uid:bf0a1dfd9d26eebe4283bfb200b3c959,Namespace:kube-system,Attempt:0,}" Jan 23 19:05:13.068764 kubelet[2723]: E0123 19:05:13.068685 2723 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-a-b64689fca1.188d7194bc7b3f28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-a-b64689fca1,UID:ci-4459.2.3-a-b64689fca1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-a-b64689fca1,},FirstTimestamp:2026-01-23 19:05:12.25776516 +0000 UTC m=+0.414140134,LastTimestamp:2026-01-23 19:05:12.25776516 +0000 UTC m=+0.414140134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-a-b64689fca1,}" Jan 23 19:05:13.121233 kubelet[2723]: E0123 19:05:13.121207 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-a-b64689fca1&limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Jan 23 19:05:13.170039 kubelet[2723]: E0123 19:05:13.170015 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 19:05:13.386811 kubelet[2723]: E0123 19:05:13.386697 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 19:05:13.673832 containerd[1707]: time="2026-01-23T19:05:13.671539356Z" level=info msg="connecting to shim 6c3675f2ef7667576ef41450ebb7689c095f09fe21d63d02cea1c38a0950d967" address="unix:///run/containerd/s/76c813f6dc87dfdd723f2c6978eae5ac0e4bf4aaafa17c2c0d47c2cc6844ab43" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:05:13.673926 kubelet[2723]: E0123 19:05:13.672725 2723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-a-b64689fca1?timeout=10s\": dial tcp 10.200.4.21:6443: connect: connection refused" interval="1.6s" Jan 23 19:05:13.697601 containerd[1707]: time="2026-01-23T19:05:13.697562451Z" level=info msg="connecting to shim 1d33694598d612c3d01dcc5f69f758954dd04462c050b68463ab256bd2992bd6" address="unix:///run/containerd/s/179c9699ebe567e7ebdf39992d106562f96c161cabf41418c7ba2ee0f64241b4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:05:13.700267 containerd[1707]: time="2026-01-23T19:05:13.700236414Z" level=info msg="connecting to shim 6f2454da5b2b8b5e92c17763ec8e0cb3713adc1b7bd394ba3241d001170e06ac" 
address="unix:///run/containerd/s/5dbef7e8e2c1bb318839d24c196d3f8871c1ebf7ab4dc66b2ae9a043844d2e33" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:05:13.709484 systemd[1]: Started cri-containerd-6c3675f2ef7667576ef41450ebb7689c095f09fe21d63d02cea1c38a0950d967.scope - libcontainer container 6c3675f2ef7667576ef41450ebb7689c095f09fe21d63d02cea1c38a0950d967. Jan 23 19:05:13.722383 systemd[1]: Started cri-containerd-6f2454da5b2b8b5e92c17763ec8e0cb3713adc1b7bd394ba3241d001170e06ac.scope - libcontainer container 6f2454da5b2b8b5e92c17763ec8e0cb3713adc1b7bd394ba3241d001170e06ac. Jan 23 19:05:13.745872 systemd[1]: Started cri-containerd-1d33694598d612c3d01dcc5f69f758954dd04462c050b68463ab256bd2992bd6.scope - libcontainer container 1d33694598d612c3d01dcc5f69f758954dd04462c050b68463ab256bd2992bd6. Jan 23 19:05:13.830287 kubelet[2723]: I0123 19:05:13.830268 2723 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:13.830735 kubelet[2723]: E0123 19:05:13.830572 2723 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.21:6443/api/v1/nodes\": dial tcp 10.200.4.21:6443: connect: connection refused" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:13.963409 containerd[1707]: time="2026-01-23T19:05:13.963379734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-a-b64689fca1,Uid:bf0a1dfd9d26eebe4283bfb200b3c959,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d33694598d612c3d01dcc5f69f758954dd04462c050b68463ab256bd2992bd6\"" Jan 23 19:05:13.967025 containerd[1707]: time="2026-01-23T19:05:13.966701124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-a-b64689fca1,Uid:3e1f01561ea00c76812beed04126b907,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c3675f2ef7667576ef41450ebb7689c095f09fe21d63d02cea1c38a0950d967\"" Jan 23 19:05:14.003050 containerd[1707]: time="2026-01-23T19:05:14.003015246Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-a-b64689fca1,Uid:70960ef950626e324f4c1523ddd48680,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f2454da5b2b8b5e92c17763ec8e0cb3713adc1b7bd394ba3241d001170e06ac\"" Jan 23 19:05:14.004302 containerd[1707]: time="2026-01-23T19:05:14.004273100Z" level=info msg="CreateContainer within sandbox \"1d33694598d612c3d01dcc5f69f758954dd04462c050b68463ab256bd2992bd6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 19:05:14.010657 containerd[1707]: time="2026-01-23T19:05:14.010627785Z" level=info msg="CreateContainer within sandbox \"6c3675f2ef7667576ef41450ebb7689c095f09fe21d63d02cea1c38a0950d967\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 19:05:14.014730 containerd[1707]: time="2026-01-23T19:05:14.014708091Z" level=info msg="CreateContainer within sandbox \"6f2454da5b2b8b5e92c17763ec8e0cb3713adc1b7bd394ba3241d001170e06ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 19:05:14.023480 containerd[1707]: time="2026-01-23T19:05:14.023455548Z" level=info msg="Container fe322a1a79c550e6d8ace150894d6b338d5e0424e28fcdb529857a98f61b2214: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:14.046243 containerd[1707]: time="2026-01-23T19:05:14.046217004Z" level=info msg="Container 772ae3c75fd1d30417da40a4ebe811ab2a098c7175db67d847acba848533b2bb: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:14.053543 containerd[1707]: time="2026-01-23T19:05:14.053520689Z" level=info msg="Container f4f174bdba50cdc7852fffa873b9ac8be8674eb90c9f38028a02479d9ee319f0: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:14.054685 containerd[1707]: time="2026-01-23T19:05:14.054663618Z" level=info msg="CreateContainer within sandbox \"1d33694598d612c3d01dcc5f69f758954dd04462c050b68463ab256bd2992bd6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"fe322a1a79c550e6d8ace150894d6b338d5e0424e28fcdb529857a98f61b2214\"" Jan 23 19:05:14.055254 containerd[1707]: time="2026-01-23T19:05:14.055234048Z" level=info msg="StartContainer for \"fe322a1a79c550e6d8ace150894d6b338d5e0424e28fcdb529857a98f61b2214\"" Jan 23 19:05:14.056734 containerd[1707]: time="2026-01-23T19:05:14.056706016Z" level=info msg="connecting to shim fe322a1a79c550e6d8ace150894d6b338d5e0424e28fcdb529857a98f61b2214" address="unix:///run/containerd/s/179c9699ebe567e7ebdf39992d106562f96c161cabf41418c7ba2ee0f64241b4" protocol=ttrpc version=3 Jan 23 19:05:14.067903 containerd[1707]: time="2026-01-23T19:05:14.067819139Z" level=info msg="CreateContainer within sandbox \"6c3675f2ef7667576ef41450ebb7689c095f09fe21d63d02cea1c38a0950d967\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"772ae3c75fd1d30417da40a4ebe811ab2a098c7175db67d847acba848533b2bb\"" Jan 23 19:05:14.068931 containerd[1707]: time="2026-01-23T19:05:14.068911578Z" level=info msg="StartContainer for \"772ae3c75fd1d30417da40a4ebe811ab2a098c7175db67d847acba848533b2bb\"" Jan 23 19:05:14.070010 containerd[1707]: time="2026-01-23T19:05:14.069986332Z" level=info msg="connecting to shim 772ae3c75fd1d30417da40a4ebe811ab2a098c7175db67d847acba848533b2bb" address="unix:///run/containerd/s/76c813f6dc87dfdd723f2c6978eae5ac0e4bf4aaafa17c2c0d47c2cc6844ab43" protocol=ttrpc version=3 Jan 23 19:05:14.071055 systemd[1]: Started cri-containerd-fe322a1a79c550e6d8ace150894d6b338d5e0424e28fcdb529857a98f61b2214.scope - libcontainer container fe322a1a79c550e6d8ace150894d6b338d5e0424e28fcdb529857a98f61b2214. 
Jan 23 19:05:14.079795 containerd[1707]: time="2026-01-23T19:05:14.078556737Z" level=info msg="CreateContainer within sandbox \"6f2454da5b2b8b5e92c17763ec8e0cb3713adc1b7bd394ba3241d001170e06ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4f174bdba50cdc7852fffa873b9ac8be8674eb90c9f38028a02479d9ee319f0\"" Jan 23 19:05:14.080375 containerd[1707]: time="2026-01-23T19:05:14.080353406Z" level=info msg="StartContainer for \"f4f174bdba50cdc7852fffa873b9ac8be8674eb90c9f38028a02479d9ee319f0\"" Jan 23 19:05:14.082934 containerd[1707]: time="2026-01-23T19:05:14.082901781Z" level=info msg="connecting to shim f4f174bdba50cdc7852fffa873b9ac8be8674eb90c9f38028a02479d9ee319f0" address="unix:///run/containerd/s/5dbef7e8e2c1bb318839d24c196d3f8871c1ebf7ab4dc66b2ae9a043844d2e33" protocol=ttrpc version=3 Jan 23 19:05:14.092838 systemd[1]: Started cri-containerd-772ae3c75fd1d30417da40a4ebe811ab2a098c7175db67d847acba848533b2bb.scope - libcontainer container 772ae3c75fd1d30417da40a4ebe811ab2a098c7175db67d847acba848533b2bb. Jan 23 19:05:14.102923 systemd[1]: Started cri-containerd-f4f174bdba50cdc7852fffa873b9ac8be8674eb90c9f38028a02479d9ee319f0.scope - libcontainer container f4f174bdba50cdc7852fffa873b9ac8be8674eb90c9f38028a02479d9ee319f0. 
Jan 23 19:05:14.121568 kubelet[2723]: E0123 19:05:14.121457 2723 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 19:05:14.139359 containerd[1707]: time="2026-01-23T19:05:14.139225045Z" level=info msg="StartContainer for \"fe322a1a79c550e6d8ace150894d6b338d5e0424e28fcdb529857a98f61b2214\" returns successfully" Jan 23 19:05:14.209541 containerd[1707]: time="2026-01-23T19:05:14.209514843Z" level=info msg="StartContainer for \"772ae3c75fd1d30417da40a4ebe811ab2a098c7175db67d847acba848533b2bb\" returns successfully" Jan 23 19:05:14.221335 containerd[1707]: time="2026-01-23T19:05:14.221264860Z" level=info msg="StartContainer for \"f4f174bdba50cdc7852fffa873b9ac8be8674eb90c9f38028a02479d9ee319f0\" returns successfully" Jan 23 19:05:14.636168 kubelet[2723]: E0123 19:05:14.636075 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-b64689fca1\" not found" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:14.638768 kubelet[2723]: E0123 19:05:14.637938 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-b64689fca1\" not found" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:14.641161 kubelet[2723]: E0123 19:05:14.641144 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-b64689fca1\" not found" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:15.360853 waagent[1895]: 2026-01-23T19:05:15.360808Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 19:05:15.372993 waagent[1895]: 2026-01-23T19:05:15.372954Z INFO ExtHandler Jan 23 
19:05:15.373083 waagent[1895]: 2026-01-23T19:05:15.373043Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 85703e4c-16e7-4a7c-8e44-a82c88aa0132 eTag: 16838100060855376128 source: Fabric] Jan 23 19:05:15.373363 waagent[1895]: 2026-01-23T19:05:15.373340Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 19:05:15.373838 waagent[1895]: 2026-01-23T19:05:15.373814Z INFO ExtHandler Jan 23 19:05:15.373886 waagent[1895]: 2026-01-23T19:05:15.373865Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 19:05:15.434187 kubelet[2723]: I0123 19:05:15.434161 2723 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:15.438642 waagent[1895]: 2026-01-23T19:05:15.438610Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 19:05:15.515792 waagent[1895]: 2026-01-23T19:05:15.515108Z INFO ExtHandler Downloaded certificate {'thumbprint': '9E6E32E73AD65F72EF64401F8098896FA04CAB6F', 'hasPrivateKey': True} Jan 23 19:05:15.515792 waagent[1895]: 2026-01-23T19:05:15.515539Z INFO ExtHandler Fetch goal state completed Jan 23 19:05:15.516017 waagent[1895]: 2026-01-23T19:05:15.515995Z INFO ExtHandler ExtHandler Jan 23 19:05:15.516102 waagent[1895]: 2026-01-23T19:05:15.516087Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 416cbcc4-41a9-4f17-82f1-1ad559fe2dec correlation 678242c9-a44e-4b84-a1a1-9a9aa3bab1bd created: 2026-01-23T19:05:10.284851Z] Jan 23 19:05:15.516409 waagent[1895]: 2026-01-23T19:05:15.516379Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 23 19:05:15.516968 waagent[1895]: 2026-01-23T19:05:15.516932Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 19:05:15.643973 kubelet[2723]: E0123 19:05:15.643898 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-b64689fca1\" not found" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:15.644404 kubelet[2723]: E0123 19:05:15.644110 2723 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-b64689fca1\" not found" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:16.173687 kubelet[2723]: E0123 19:05:16.173651 2723 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.3-a-b64689fca1\" not found" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:16.251531 kubelet[2723]: I0123 19:05:16.251499 2723 apiserver.go:52] "Watching apiserver" Jan 23 19:05:16.268886 kubelet[2723]: I0123 19:05:16.268857 2723 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:05:16.378719 kubelet[2723]: I0123 19:05:16.378680 2723 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:16.467268 kubelet[2723]: I0123 19:05:16.467234 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:16.473980 kubelet[2723]: E0123 19:05:16.473955 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-a-b64689fca1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:16.473980 kubelet[2723]: I0123 19:05:16.473981 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:16.475468 kubelet[2723]: E0123 
19:05:16.475435 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:16.475468 kubelet[2723]: I0123 19:05:16.475470 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:16.477122 kubelet[2723]: E0123 19:05:16.476834 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-a-b64689fca1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:18.288733 systemd[1]: Reload requested from client PID 3003 ('systemctl') (unit session-7.scope)... Jan 23 19:05:18.288755 systemd[1]: Reloading... Jan 23 19:05:18.361779 zram_generator::config[3046]: No configuration found. Jan 23 19:05:18.555432 systemd[1]: Reloading finished in 266 ms. Jan 23 19:05:18.587657 kubelet[2723]: I0123 19:05:18.587596 2723 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:05:18.587957 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:05:18.605595 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 19:05:18.605829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:05:18.605874 systemd[1]: kubelet.service: Consumed 695ms CPU time, 130.5M memory peak. Jan 23 19:05:18.607192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:05:19.084705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 19:05:19.091009 (kubelet)[3117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:05:19.128961 kubelet[3117]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:05:19.130784 kubelet[3117]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:05:19.130784 kubelet[3117]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:05:19.130784 kubelet[3117]: I0123 19:05:19.129272 3117 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:05:19.138862 kubelet[3117]: I0123 19:05:19.138843 3117 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 19:05:19.138942 kubelet[3117]: I0123 19:05:19.138936 3117 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:05:19.139172 kubelet[3117]: I0123 19:05:19.139162 3117 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 19:05:19.140305 kubelet[3117]: I0123 19:05:19.140259 3117 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 19:05:19.142767 kubelet[3117]: I0123 19:05:19.142689 3117 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:05:19.146096 kubelet[3117]: I0123 19:05:19.146057 3117 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Jan 23 19:05:19.149120 kubelet[3117]: I0123 19:05:19.149102 3117 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 19:05:19.149403 kubelet[3117]: I0123 19:05:19.149373 3117 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:05:19.150298 kubelet[3117]: I0123 19:05:19.149412 3117 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-a-b64689fca1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":n
ull,"CgroupVersion":2} Jan 23 19:05:19.150298 kubelet[3117]: I0123 19:05:19.149787 3117 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 19:05:19.150298 kubelet[3117]: I0123 19:05:19.149799 3117 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 19:05:19.150298 kubelet[3117]: I0123 19:05:19.149852 3117 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:05:19.150298 kubelet[3117]: I0123 19:05:19.149989 3117 kubelet.go:480] "Attempting to sync node with API server" Jan 23 19:05:19.150512 kubelet[3117]: I0123 19:05:19.149999 3117 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:05:19.150512 kubelet[3117]: I0123 19:05:19.150017 3117 kubelet.go:386] "Adding apiserver pod source" Jan 23 19:05:19.150512 kubelet[3117]: I0123 19:05:19.150029 3117 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:05:19.160094 kubelet[3117]: I0123 19:05:19.160031 3117 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:05:19.160704 kubelet[3117]: I0123 19:05:19.160692 3117 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 19:05:19.162860 kubelet[3117]: I0123 19:05:19.162849 3117 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:05:19.162969 kubelet[3117]: I0123 19:05:19.162964 3117 server.go:1289] "Started kubelet" Jan 23 19:05:19.164001 kubelet[3117]: I0123 19:05:19.163953 3117 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:05:19.164243 kubelet[3117]: I0123 19:05:19.164219 3117 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:05:19.164296 kubelet[3117]: I0123 19:05:19.164266 3117 server.go:180] "Starting to listen" 
address="0.0.0.0" port=10250 Jan 23 19:05:19.164676 kubelet[3117]: I0123 19:05:19.164661 3117 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:05:19.166139 kubelet[3117]: I0123 19:05:19.166108 3117 server.go:317] "Adding debug handlers to kubelet server" Jan 23 19:05:19.175527 kubelet[3117]: I0123 19:05:19.175514 3117 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:05:19.176071 kubelet[3117]: I0123 19:05:19.176057 3117 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:05:19.177233 kubelet[3117]: I0123 19:05:19.176137 3117 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:05:19.177233 kubelet[3117]: I0123 19:05:19.176253 3117 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:05:19.178367 kubelet[3117]: I0123 19:05:19.178260 3117 factory.go:223] Registration of the systemd container factory successfully Jan 23 19:05:19.178367 kubelet[3117]: I0123 19:05:19.178331 3117 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:05:19.179946 kubelet[3117]: E0123 19:05:19.179923 3117 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:05:19.181454 kubelet[3117]: I0123 19:05:19.181433 3117 factory.go:223] Registration of the containerd container factory successfully Jan 23 19:05:19.182904 kubelet[3117]: I0123 19:05:19.182879 3117 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 19:05:19.184027 kubelet[3117]: I0123 19:05:19.184012 3117 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 23 19:05:19.184132 kubelet[3117]: I0123 19:05:19.184115 3117 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 19:05:19.184178 kubelet[3117]: I0123 19:05:19.184173 3117 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 19:05:19.184221 kubelet[3117]: I0123 19:05:19.184217 3117 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 19:05:19.184305 kubelet[3117]: E0123 19:05:19.184271 3117 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:05:19.230003 kubelet[3117]: I0123 19:05:19.229990 3117 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:05:19.230068 kubelet[3117]: I0123 19:05:19.230031 3117 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:05:19.230068 kubelet[3117]: I0123 19:05:19.230046 3117 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:05:19.256692 kubelet[3117]: I0123 19:05:19.256672 3117 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 19:05:19.256721 kubelet[3117]: I0123 19:05:19.256689 3117 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 19:05:19.256721 kubelet[3117]: I0123 19:05:19.256706 3117 policy_none.go:49] "None policy: Start" Jan 23 19:05:19.256721 kubelet[3117]: I0123 19:05:19.256716 3117 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:05:19.256802 kubelet[3117]: I0123 19:05:19.256727 3117 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:05:19.256845 kubelet[3117]: I0123 19:05:19.256830 3117 state_mem.go:75] "Updated machine memory state" Jan 23 19:05:19.261659 kubelet[3117]: E0123 19:05:19.261636 3117 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 19:05:19.261874 kubelet[3117]: I0123 
19:05:19.261857 3117 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:05:19.261922 kubelet[3117]: I0123 19:05:19.261873 3117 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:05:19.262130 kubelet[3117]: I0123 19:05:19.262122 3117 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:05:19.263620 kubelet[3117]: E0123 19:05:19.263473 3117 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 19:05:19.285872 kubelet[3117]: I0123 19:05:19.285840 3117 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.285965 kubelet[3117]: I0123 19:05:19.285957 3117 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.286150 kubelet[3117]: I0123 19:05:19.286141 3117 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.291769 kubelet[3117]: I0123 19:05:19.291697 3117 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 19:05:19.295004 kubelet[3117]: I0123 19:05:19.294981 3117 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 19:05:19.295140 kubelet[3117]: I0123 19:05:19.295122 3117 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 19:05:19.369616 kubelet[3117]: I0123 19:05:19.369541 3117 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.377672 kubelet[3117]: I0123 19:05:19.377464 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e1f01561ea00c76812beed04126b907-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-a-b64689fca1\" (UID: \"3e1f01561ea00c76812beed04126b907\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.377672 kubelet[3117]: I0123 19:05:19.377491 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e1f01561ea00c76812beed04126b907-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-a-b64689fca1\" (UID: \"3e1f01561ea00c76812beed04126b907\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.377672 kubelet[3117]: I0123 19:05:19.377512 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e1f01561ea00c76812beed04126b907-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-a-b64689fca1\" (UID: \"3e1f01561ea00c76812beed04126b907\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.377672 kubelet[3117]: I0123 19:05:19.377531 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.377672 kubelet[3117]: I0123 19:05:19.377551 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.377868 kubelet[3117]: I0123 19:05:19.377568 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.377868 kubelet[3117]: I0123 19:05:19.377587 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.377868 kubelet[3117]: I0123 19:05:19.377606 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70960ef950626e324f4c1523ddd48680-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-a-b64689fca1\" (UID: \"70960ef950626e324f4c1523ddd48680\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.377868 kubelet[3117]: I0123 19:05:19.377626 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf0a1dfd9d26eebe4283bfb200b3c959-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-a-b64689fca1\" (UID: \"bf0a1dfd9d26eebe4283bfb200b3c959\") " pod="kube-system/kube-scheduler-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.379171 kubelet[3117]: I0123 19:05:19.379152 3117 kubelet_node_status.go:124] "Node 
was previously registered" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:19.379245 kubelet[3117]: I0123 19:05:19.379201 3117 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-a-b64689fca1" Jan 23 19:05:20.151433 kubelet[3117]: I0123 19:05:20.151396 3117 apiserver.go:52] "Watching apiserver" Jan 23 19:05:20.176669 kubelet[3117]: I0123 19:05:20.176633 3117 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:05:20.214773 kubelet[3117]: I0123 19:05:20.214646 3117 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:20.214858 kubelet[3117]: I0123 19:05:20.214810 3117 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:20.224430 kubelet[3117]: I0123 19:05:20.223942 3117 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 19:05:20.224430 kubelet[3117]: E0123 19:05:20.223993 3117 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-a-b64689fca1\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:20.224792 kubelet[3117]: I0123 19:05:20.224780 3117 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 19:05:20.224930 kubelet[3117]: E0123 19:05:20.224921 3117 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-a-b64689fca1\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" Jan 23 19:05:20.242062 kubelet[3117]: I0123 19:05:20.241993 3117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.3-a-b64689fca1" 
podStartSLOduration=1.241965858 podStartE2EDuration="1.241965858s" podCreationTimestamp="2026-01-23 19:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:05:20.231069961 +0000 UTC m=+1.136683083" watchObservedRunningTime="2026-01-23 19:05:20.241965858 +0000 UTC m=+1.147578976" Jan 23 19:05:20.242325 kubelet[3117]: I0123 19:05:20.242300 3117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.3-a-b64689fca1" podStartSLOduration=1.242199667 podStartE2EDuration="1.242199667s" podCreationTimestamp="2026-01-23 19:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:05:20.24184798 +0000 UTC m=+1.147461098" watchObservedRunningTime="2026-01-23 19:05:20.242199667 +0000 UTC m=+1.147812785" Jan 23 19:05:20.254264 kubelet[3117]: I0123 19:05:20.254176 3117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.3-a-b64689fca1" podStartSLOduration=1.25415879 podStartE2EDuration="1.25415879s" podCreationTimestamp="2026-01-23 19:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:05:20.253537334 +0000 UTC m=+1.159150449" watchObservedRunningTime="2026-01-23 19:05:20.25415879 +0000 UTC m=+1.159771906" Jan 23 19:05:20.524466 sudo[2107]: pam_unix(sudo:session): session closed for user root Jan 23 19:05:20.622549 sshd[2106]: Connection closed by 10.200.16.10 port 50316 Jan 23 19:05:20.623914 sshd-session[2103]: pam_unix(sshd:session): session closed for user core Jan 23 19:05:20.626319 systemd[1]: sshd@4-10.200.4.21:22-10.200.16.10:50316.service: Deactivated successfully. Jan 23 19:05:20.628154 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 23 19:05:20.628333 systemd[1]: session-7.scope: Consumed 3.396s CPU time, 232.5M memory peak. Jan 23 19:05:20.630540 systemd-logind[1683]: Session 7 logged out. Waiting for processes to exit. Jan 23 19:05:20.631476 systemd-logind[1683]: Removed session 7. Jan 23 19:05:24.290662 kubelet[3117]: I0123 19:05:24.290619 3117 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 19:05:24.291121 containerd[1707]: time="2026-01-23T19:05:24.290965198Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 19:05:24.291308 kubelet[3117]: I0123 19:05:24.291181 3117 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 19:05:25.389405 systemd[1]: Created slice kubepods-besteffort-podcd3a1b84_32e4_4517_b1e4_11ecec425035.slice - libcontainer container kubepods-besteffort-podcd3a1b84_32e4_4517_b1e4_11ecec425035.slice. Jan 23 19:05:25.404252 systemd[1]: Created slice kubepods-burstable-podbf366980_8ee7_406b_a806_8e2c2187a159.slice - libcontainer container kubepods-burstable-podbf366980_8ee7_406b_a806_8e2c2187a159.slice. 
Jan 23 19:05:25.418421 kubelet[3117]: I0123 19:05:25.418181 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd3a1b84-32e4-4517-b1e4-11ecec425035-lib-modules\") pod \"kube-proxy-tlnmt\" (UID: \"cd3a1b84-32e4-4517-b1e4-11ecec425035\") " pod="kube-system/kube-proxy-tlnmt" Jan 23 19:05:25.418421 kubelet[3117]: I0123 19:05:25.418232 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/bf366980-8ee7-406b-a806-8e2c2187a159-cni\") pod \"kube-flannel-ds-4zj6n\" (UID: \"bf366980-8ee7-406b-a806-8e2c2187a159\") " pod="kube-flannel/kube-flannel-ds-4zj6n" Jan 23 19:05:25.418421 kubelet[3117]: I0123 19:05:25.418250 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf366980-8ee7-406b-a806-8e2c2187a159-xtables-lock\") pod \"kube-flannel-ds-4zj6n\" (UID: \"bf366980-8ee7-406b-a806-8e2c2187a159\") " pod="kube-flannel/kube-flannel-ds-4zj6n" Jan 23 19:05:25.418421 kubelet[3117]: I0123 19:05:25.418268 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb2bp\" (UniqueName: \"kubernetes.io/projected/bf366980-8ee7-406b-a806-8e2c2187a159-kube-api-access-lb2bp\") pod \"kube-flannel-ds-4zj6n\" (UID: \"bf366980-8ee7-406b-a806-8e2c2187a159\") " pod="kube-flannel/kube-flannel-ds-4zj6n" Jan 23 19:05:25.418421 kubelet[3117]: I0123 19:05:25.418287 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bnk2\" (UniqueName: \"kubernetes.io/projected/cd3a1b84-32e4-4517-b1e4-11ecec425035-kube-api-access-8bnk2\") pod \"kube-proxy-tlnmt\" (UID: \"cd3a1b84-32e4-4517-b1e4-11ecec425035\") " pod="kube-system/kube-proxy-tlnmt" Jan 23 19:05:25.419645 kubelet[3117]: I0123 
19:05:25.418302 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf366980-8ee7-406b-a806-8e2c2187a159-run\") pod \"kube-flannel-ds-4zj6n\" (UID: \"bf366980-8ee7-406b-a806-8e2c2187a159\") " pod="kube-flannel/kube-flannel-ds-4zj6n" Jan 23 19:05:25.419645 kubelet[3117]: I0123 19:05:25.418318 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/bf366980-8ee7-406b-a806-8e2c2187a159-cni-plugin\") pod \"kube-flannel-ds-4zj6n\" (UID: \"bf366980-8ee7-406b-a806-8e2c2187a159\") " pod="kube-flannel/kube-flannel-ds-4zj6n" Jan 23 19:05:25.419645 kubelet[3117]: I0123 19:05:25.418333 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/bf366980-8ee7-406b-a806-8e2c2187a159-flannel-cfg\") pod \"kube-flannel-ds-4zj6n\" (UID: \"bf366980-8ee7-406b-a806-8e2c2187a159\") " pod="kube-flannel/kube-flannel-ds-4zj6n" Jan 23 19:05:25.419645 kubelet[3117]: I0123 19:05:25.418365 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd3a1b84-32e4-4517-b1e4-11ecec425035-kube-proxy\") pod \"kube-proxy-tlnmt\" (UID: \"cd3a1b84-32e4-4517-b1e4-11ecec425035\") " pod="kube-system/kube-proxy-tlnmt" Jan 23 19:05:25.419645 kubelet[3117]: I0123 19:05:25.418383 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd3a1b84-32e4-4517-b1e4-11ecec425035-xtables-lock\") pod \"kube-proxy-tlnmt\" (UID: \"cd3a1b84-32e4-4517-b1e4-11ecec425035\") " pod="kube-system/kube-proxy-tlnmt" Jan 23 19:05:25.702946 containerd[1707]: time="2026-01-23T19:05:25.702911935Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-tlnmt,Uid:cd3a1b84-32e4-4517-b1e4-11ecec425035,Namespace:kube-system,Attempt:0,}" Jan 23 19:05:25.709420 containerd[1707]: time="2026-01-23T19:05:25.709393116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4zj6n,Uid:bf366980-8ee7-406b-a806-8e2c2187a159,Namespace:kube-flannel,Attempt:0,}" Jan 23 19:05:25.761563 containerd[1707]: time="2026-01-23T19:05:25.761533347Z" level=info msg="connecting to shim 264e3331e3e07766bba75c3f4d001cc71036469332f1f7ec7d295e413eb756dd" address="unix:///run/containerd/s/6bfab747afdb3e710663b70eaa1661bb6a1d5ba9dfa3c6fe11ed392524af2a70" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:05:25.783248 containerd[1707]: time="2026-01-23T19:05:25.782843588Z" level=info msg="connecting to shim eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9" address="unix:///run/containerd/s/5ff72cd187dfc22dbc39aa27831697b5c73cd6c3c5e70e034a9b1ffa0f767fbc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:05:25.793950 systemd[1]: Started cri-containerd-264e3331e3e07766bba75c3f4d001cc71036469332f1f7ec7d295e413eb756dd.scope - libcontainer container 264e3331e3e07766bba75c3f4d001cc71036469332f1f7ec7d295e413eb756dd. Jan 23 19:05:25.807884 systemd[1]: Started cri-containerd-eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9.scope - libcontainer container eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9. 
Jan 23 19:05:25.827972 containerd[1707]: time="2026-01-23T19:05:25.827827626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tlnmt,Uid:cd3a1b84-32e4-4517-b1e4-11ecec425035,Namespace:kube-system,Attempt:0,} returns sandbox id \"264e3331e3e07766bba75c3f4d001cc71036469332f1f7ec7d295e413eb756dd\"" Jan 23 19:05:25.836140 containerd[1707]: time="2026-01-23T19:05:25.835280590Z" level=info msg="CreateContainer within sandbox \"264e3331e3e07766bba75c3f4d001cc71036469332f1f7ec7d295e413eb756dd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 19:05:25.859247 containerd[1707]: time="2026-01-23T19:05:25.859223181Z" level=info msg="Container 2f2657ec78c3a5a8b84fb4af615945d5a1466b42b747c88ece7691222f169220: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:25.862326 containerd[1707]: time="2026-01-23T19:05:25.862165470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4zj6n,Uid:bf366980-8ee7-406b-a806-8e2c2187a159,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9\"" Jan 23 19:05:25.863212 containerd[1707]: time="2026-01-23T19:05:25.863182722Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 23 19:05:25.882344 containerd[1707]: time="2026-01-23T19:05:25.882321019Z" level=info msg="CreateContainer within sandbox \"264e3331e3e07766bba75c3f4d001cc71036469332f1f7ec7d295e413eb756dd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f2657ec78c3a5a8b84fb4af615945d5a1466b42b747c88ece7691222f169220\"" Jan 23 19:05:25.882810 containerd[1707]: time="2026-01-23T19:05:25.882788237Z" level=info msg="StartContainer for \"2f2657ec78c3a5a8b84fb4af615945d5a1466b42b747c88ece7691222f169220\"" Jan 23 19:05:25.884433 containerd[1707]: time="2026-01-23T19:05:25.884382514Z" level=info msg="connecting to shim 2f2657ec78c3a5a8b84fb4af615945d5a1466b42b747c88ece7691222f169220" 
address="unix:///run/containerd/s/6bfab747afdb3e710663b70eaa1661bb6a1d5ba9dfa3c6fe11ed392524af2a70" protocol=ttrpc version=3 Jan 23 19:05:25.899899 systemd[1]: Started cri-containerd-2f2657ec78c3a5a8b84fb4af615945d5a1466b42b747c88ece7691222f169220.scope - libcontainer container 2f2657ec78c3a5a8b84fb4af615945d5a1466b42b747c88ece7691222f169220. Jan 23 19:05:25.956233 containerd[1707]: time="2026-01-23T19:05:25.955643777Z" level=info msg="StartContainer for \"2f2657ec78c3a5a8b84fb4af615945d5a1466b42b747c88ece7691222f169220\" returns successfully" Jan 23 19:05:26.793169 kubelet[3117]: I0123 19:05:26.793081 3117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tlnmt" podStartSLOduration=1.793064939 podStartE2EDuration="1.793064939s" podCreationTimestamp="2026-01-23 19:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:05:26.23809185 +0000 UTC m=+7.143704981" watchObservedRunningTime="2026-01-23 19:05:26.793064939 +0000 UTC m=+7.698678057" Jan 23 19:05:27.415904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323748856.mount: Deactivated successfully. 
Jan 23 19:05:27.476522 containerd[1707]: time="2026-01-23T19:05:27.476481658Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:05:27.478776 containerd[1707]: time="2026-01-23T19:05:27.478624699Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Jan 23 19:05:27.481037 containerd[1707]: time="2026-01-23T19:05:27.481010661Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:05:27.485304 containerd[1707]: time="2026-01-23T19:05:27.485010973Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:05:27.486194 containerd[1707]: time="2026-01-23T19:05:27.485729053Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.6224973s" Jan 23 19:05:27.486194 containerd[1707]: time="2026-01-23T19:05:27.485775180Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 23 19:05:27.492133 containerd[1707]: time="2026-01-23T19:05:27.492108894Z" level=info msg="CreateContainer within sandbox \"eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 23 19:05:27.508261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552733168.mount: Deactivated successfully. Jan 23 19:05:27.508731 containerd[1707]: time="2026-01-23T19:05:27.508625145Z" level=info msg="Container ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:27.522699 containerd[1707]: time="2026-01-23T19:05:27.522674900Z" level=info msg="CreateContainer within sandbox \"eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d\"" Jan 23 19:05:27.523218 containerd[1707]: time="2026-01-23T19:05:27.523196407Z" level=info msg="StartContainer for \"ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d\"" Jan 23 19:05:27.524132 containerd[1707]: time="2026-01-23T19:05:27.524100533Z" level=info msg="connecting to shim ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d" address="unix:///run/containerd/s/5ff72cd187dfc22dbc39aa27831697b5c73cd6c3c5e70e034a9b1ffa0f767fbc" protocol=ttrpc version=3 Jan 23 19:05:27.543908 systemd[1]: Started cri-containerd-ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d.scope - libcontainer container ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d. Jan 23 19:05:27.566278 systemd[1]: cri-containerd-ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d.scope: Deactivated successfully. 
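The container exit events that follow in this log (e.g. `exited_at:{seconds:1769195127 nanos:568653009}`) carry raw Unix epoch seconds plus a nanosecond remainder rather than a formatted timestamp. A small sketch of decoding that pair back into the RFC 3339 form containerd uses elsewhere (the function name is mine, not containerd's):

```python
from datetime import datetime, timezone

def exit_event_time(seconds: int, nanos: int) -> str:
    """Render a containerd exited_at {seconds, nanos} pair (Unix epoch,
    UTC) as an RFC 3339 timestamp with nanosecond precision."""
    ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

# The install-cni-plugin container's exit event from this log:
print(exit_event_time(1769195127, 568653009))
# 2026-01-23T19:05:27.568653009Z
```

Decoded, the `exited_at` values line up with the journal timestamps of the "received container exit event" lines to within a millisecond, which is just the delivery latency of the event.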
Jan 23 19:05:27.569269 containerd[1707]: time="2026-01-23T19:05:27.569236552Z" level=info msg="StartContainer for \"ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d\" returns successfully" Jan 23 19:05:27.569606 containerd[1707]: time="2026-01-23T19:05:27.569578557Z" level=info msg="received container exit event container_id:\"ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d\" id:\"ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d\" pid:3459 exited_at:{seconds:1769195127 nanos:568653009}" Jan 23 19:05:27.586949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef41e18c55d0f342cb30c17bfa7b373e045f6c9fe005d1b399be4a0b10f1052d-rootfs.mount: Deactivated successfully. Jan 23 19:05:29.233739 containerd[1707]: time="2026-01-23T19:05:29.233660124Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 23 19:05:31.321504 containerd[1707]: time="2026-01-23T19:05:31.321459323Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:05:31.323704 containerd[1707]: time="2026-01-23T19:05:31.323664244Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jan 23 19:05:31.328770 containerd[1707]: time="2026-01-23T19:05:31.328558575Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:05:31.332451 containerd[1707]: time="2026-01-23T19:05:31.332410618Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:05:31.333424 containerd[1707]: time="2026-01-23T19:05:31.333339695Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id 
\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 2.099634779s" Jan 23 19:05:31.333424 containerd[1707]: time="2026-01-23T19:05:31.333367971Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 23 19:05:31.340382 containerd[1707]: time="2026-01-23T19:05:31.340350162Z" level=info msg="CreateContainer within sandbox \"eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 19:05:31.359947 containerd[1707]: time="2026-01-23T19:05:31.359898208Z" level=info msg="Container 8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:31.375414 containerd[1707]: time="2026-01-23T19:05:31.375387799Z" level=info msg="CreateContainer within sandbox \"eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c\"" Jan 23 19:05:31.375897 containerd[1707]: time="2026-01-23T19:05:31.375879181Z" level=info msg="StartContainer for \"8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c\"" Jan 23 19:05:31.376711 containerd[1707]: time="2026-01-23T19:05:31.376658694Z" level=info msg="connecting to shim 8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c" address="unix:///run/containerd/s/5ff72cd187dfc22dbc39aa27831697b5c73cd6c3c5e70e034a9b1ffa0f767fbc" protocol=ttrpc version=3 Jan 23 19:05:31.395910 systemd[1]: Started cri-containerd-8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c.scope - libcontainer container 
8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c. Jan 23 19:05:31.416135 systemd[1]: cri-containerd-8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c.scope: Deactivated successfully. Jan 23 19:05:31.421003 containerd[1707]: time="2026-01-23T19:05:31.420902244Z" level=info msg="received container exit event container_id:\"8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c\" id:\"8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c\" pid:3535 exited_at:{seconds:1769195131 nanos:417732471}" Jan 23 19:05:31.427111 containerd[1707]: time="2026-01-23T19:05:31.427080372Z" level=info msg="StartContainer for \"8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c\" returns successfully" Jan 23 19:05:31.437660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ecfdeff253bed9bb8e7faf21f6195b680fa09afaa7b7661a1d001be6617af7c-rootfs.mount: Deactivated successfully. Jan 23 19:05:31.449067 kubelet[3117]: I0123 19:05:31.448348 3117 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 19:05:31.517470 systemd[1]: Created slice kubepods-burstable-podddb9b9c3_1552_4fdd_910c_1a27826c897f.slice - libcontainer container kubepods-burstable-podddb9b9c3_1552_4fdd_910c_1a27826c897f.slice. Jan 23 19:05:31.521856 systemd[1]: Created slice kubepods-burstable-pod903b6b05_2103_4e3c_af73_4ee22a8f68b4.slice - libcontainer container kubepods-burstable-pod903b6b05_2103_4e3c_af73_4ee22a8f68b4.slice. 
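The `kubepods-burstable-pod…` slice names just created are derived mechanically from the pod UIDs: systemd treats `-` in unit names as a hierarchy separator, so the kubelet's systemd cgroup driver escapes the dashes inside the UID to underscores before embedding it. A sketch of that escaping (the helper name is illustrative, not kubelet's):

```python
def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
    """Build the systemd slice name the kubelet uses for a pod cgroup.

    systemd reserves "-" as a unit-name hierarchy separator, so the
    dashes in the pod UID are escaped to underscores first.
    """
    escaped_uid = pod_uid.replace("-", "_")
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

# Reproduces the slice created for coredns-674b8bbfcf-9cd8f above:
print(pod_slice_name("ddb9b9c3-1552-4fdd-910c-1a27826c897f"))
# kubepods-burstable-podddb9b9c3_1552_4fdd_910c_1a27826c897f.slice
```

The same rule maps the second CoreDNS pod's UID `903b6b05-2103-4e3c-af73-4ee22a8f68b4` to the other slice in the log, which is how a pod UID from kubelet output can be matched to its cgroup path.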
Jan 23 19:05:31.559380 kubelet[3117]: I0123 19:05:31.559288 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddb9b9c3-1552-4fdd-910c-1a27826c897f-config-volume\") pod \"coredns-674b8bbfcf-9cd8f\" (UID: \"ddb9b9c3-1552-4fdd-910c-1a27826c897f\") " pod="kube-system/coredns-674b8bbfcf-9cd8f" Jan 23 19:05:31.559380 kubelet[3117]: I0123 19:05:31.559335 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q45fq\" (UniqueName: \"kubernetes.io/projected/903b6b05-2103-4e3c-af73-4ee22a8f68b4-kube-api-access-q45fq\") pod \"coredns-674b8bbfcf-trrz7\" (UID: \"903b6b05-2103-4e3c-af73-4ee22a8f68b4\") " pod="kube-system/coredns-674b8bbfcf-trrz7" Jan 23 19:05:31.559380 kubelet[3117]: I0123 19:05:31.559352 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq6mm\" (UniqueName: \"kubernetes.io/projected/ddb9b9c3-1552-4fdd-910c-1a27826c897f-kube-api-access-tq6mm\") pod \"coredns-674b8bbfcf-9cd8f\" (UID: \"ddb9b9c3-1552-4fdd-910c-1a27826c897f\") " pod="kube-system/coredns-674b8bbfcf-9cd8f" Jan 23 19:05:31.559380 kubelet[3117]: I0123 19:05:31.559364 3117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/903b6b05-2103-4e3c-af73-4ee22a8f68b4-config-volume\") pod \"coredns-674b8bbfcf-trrz7\" (UID: \"903b6b05-2103-4e3c-af73-4ee22a8f68b4\") " pod="kube-system/coredns-674b8bbfcf-trrz7" Jan 23 19:05:31.821952 containerd[1707]: time="2026-01-23T19:05:31.821913997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9cd8f,Uid:ddb9b9c3-1552-4fdd-910c-1a27826c897f,Namespace:kube-system,Attempt:0,}" Jan 23 19:05:31.825922 containerd[1707]: time="2026-01-23T19:05:31.825867824Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-trrz7,Uid:903b6b05-2103-4e3c-af73-4ee22a8f68b4,Namespace:kube-system,Attempt:0,}" Jan 23 19:05:31.844823 containerd[1707]: time="2026-01-23T19:05:31.844720641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9cd8f,Uid:ddb9b9c3-1552-4fdd-910c-1a27826c897f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5dd0b03c6c1a7f3fe98023b0bb3f1acb81a1223d9b7d0d035d4c4ef24501f0a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 19:05:31.845172 kubelet[3117]: E0123 19:05:31.845132 3117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5dd0b03c6c1a7f3fe98023b0bb3f1acb81a1223d9b7d0d035d4c4ef24501f0a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 19:05:31.845379 kubelet[3117]: E0123 19:05:31.845328 3117 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5dd0b03c6c1a7f3fe98023b0bb3f1acb81a1223d9b7d0d035d4c4ef24501f0a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-9cd8f" Jan 23 19:05:31.845462 kubelet[3117]: E0123 19:05:31.845448 3117 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5dd0b03c6c1a7f3fe98023b0bb3f1acb81a1223d9b7d0d035d4c4ef24501f0a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-9cd8f" Jan 23 19:05:31.845590 kubelet[3117]: E0123 19:05:31.845552 3117 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9cd8f_kube-system(ddb9b9c3-1552-4fdd-910c-1a27826c897f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9cd8f_kube-system(ddb9b9c3-1552-4fdd-910c-1a27826c897f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5dd0b03c6c1a7f3fe98023b0bb3f1acb81a1223d9b7d0d035d4c4ef24501f0a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-9cd8f" podUID="ddb9b9c3-1552-4fdd-910c-1a27826c897f" Jan 23 19:05:31.850769 containerd[1707]: time="2026-01-23T19:05:31.850720683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-trrz7,Uid:903b6b05-2103-4e3c-af73-4ee22a8f68b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d55da8716911181a630cab9f33c93918c4d1fb6b715ebd2600a97e8409faae9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 19:05:31.850927 kubelet[3117]: E0123 19:05:31.850890 3117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d55da8716911181a630cab9f33c93918c4d1fb6b715ebd2600a97e8409faae9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 23 19:05:31.850975 kubelet[3117]: E0123 19:05:31.850942 3117 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d55da8716911181a630cab9f33c93918c4d1fb6b715ebd2600a97e8409faae9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-674b8bbfcf-trrz7" Jan 23 19:05:31.850975 kubelet[3117]: E0123 19:05:31.850961 3117 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d55da8716911181a630cab9f33c93918c4d1fb6b715ebd2600a97e8409faae9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-trrz7" Jan 23 19:05:31.851053 kubelet[3117]: E0123 19:05:31.851025 3117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-trrz7_kube-system(903b6b05-2103-4e3c-af73-4ee22a8f68b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-trrz7_kube-system(903b6b05-2103-4e3c-af73-4ee22a8f68b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d55da8716911181a630cab9f33c93918c4d1fb6b715ebd2600a97e8409faae9\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-trrz7" podUID="903b6b05-2103-4e3c-af73-4ee22a8f68b4" Jan 23 19:05:32.251079 containerd[1707]: time="2026-01-23T19:05:32.251048771Z" level=info msg="CreateContainer within sandbox \"eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 23 19:05:32.266664 containerd[1707]: time="2026-01-23T19:05:32.266632770Z" level=info msg="Container 75e10a70e7caa4b270a24192fd39930514280d97eeb5cd21ee13662b9d916ecc: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:32.277187 containerd[1707]: time="2026-01-23T19:05:32.277159429Z" level=info msg="CreateContainer within sandbox \"eeaf16ef730ee4fc71cf9f7715fe164dc8a8ab0a48a71d59fcb18d3e1589c3e9\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id 
\"75e10a70e7caa4b270a24192fd39930514280d97eeb5cd21ee13662b9d916ecc\"" Jan 23 19:05:32.277555 containerd[1707]: time="2026-01-23T19:05:32.277517543Z" level=info msg="StartContainer for \"75e10a70e7caa4b270a24192fd39930514280d97eeb5cd21ee13662b9d916ecc\"" Jan 23 19:05:32.278374 containerd[1707]: time="2026-01-23T19:05:32.278339080Z" level=info msg="connecting to shim 75e10a70e7caa4b270a24192fd39930514280d97eeb5cd21ee13662b9d916ecc" address="unix:///run/containerd/s/5ff72cd187dfc22dbc39aa27831697b5c73cd6c3c5e70e034a9b1ffa0f767fbc" protocol=ttrpc version=3 Jan 23 19:05:32.292931 systemd[1]: Started cri-containerd-75e10a70e7caa4b270a24192fd39930514280d97eeb5cd21ee13662b9d916ecc.scope - libcontainer container 75e10a70e7caa4b270a24192fd39930514280d97eeb5cd21ee13662b9d916ecc. Jan 23 19:05:32.318714 containerd[1707]: time="2026-01-23T19:05:32.318663672Z" level=info msg="StartContainer for \"75e10a70e7caa4b270a24192fd39930514280d97eeb5cd21ee13662b9d916ecc\" returns successfully" Jan 23 19:05:33.252773 kubelet[3117]: I0123 19:05:33.252701 3117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-4zj6n" podStartSLOduration=2.781356319 podStartE2EDuration="8.252686204s" podCreationTimestamp="2026-01-23 19:05:25 +0000 UTC" firstStartedPulling="2026-01-23 19:05:25.862881792 +0000 UTC m=+6.768494897" lastFinishedPulling="2026-01-23 19:05:31.334211671 +0000 UTC m=+12.239824782" observedRunningTime="2026-01-23 19:05:33.25253645 +0000 UTC m=+14.158149569" watchObservedRunningTime="2026-01-23 19:05:33.252686204 +0000 UTC m=+14.158299320" Jan 23 19:05:33.450241 systemd-networkd[1340]: flannel.1: Link UP Jan 23 19:05:33.450248 systemd-networkd[1340]: flannel.1: Gained carrier Jan 23 19:05:34.843893 systemd-networkd[1340]: flannel.1: Gained IPv6LL Jan 23 19:05:43.185432 containerd[1707]: time="2026-01-23T19:05:43.185134295Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-trrz7,Uid:903b6b05-2103-4e3c-af73-4ee22a8f68b4,Namespace:kube-system,Attempt:0,}" Jan 23 19:05:43.205216 systemd-networkd[1340]: cni0: Link UP Jan 23 19:05:43.205223 systemd-networkd[1340]: cni0: Gained carrier Jan 23 19:05:43.209155 systemd-networkd[1340]: cni0: Lost carrier Jan 23 19:05:43.244582 systemd-networkd[1340]: veth9c9d6361: Link UP Jan 23 19:05:43.247640 kernel: cni0: port 1(veth9c9d6361) entered blocking state Jan 23 19:05:43.247706 kernel: cni0: port 1(veth9c9d6361) entered disabled state Jan 23 19:05:43.249930 kernel: veth9c9d6361: entered allmulticast mode Jan 23 19:05:43.249994 kernel: veth9c9d6361: entered promiscuous mode Jan 23 19:05:43.257495 kernel: cni0: port 1(veth9c9d6361) entered blocking state Jan 23 19:05:43.257539 kernel: cni0: port 1(veth9c9d6361) entered forwarding state Jan 23 19:05:43.257650 systemd-networkd[1340]: veth9c9d6361: Gained carrier Jan 23 19:05:43.258333 systemd-networkd[1340]: cni0: Gained carrier Jan 23 19:05:43.259975 containerd[1707]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00008c950), "name":"cbr0", "type":"bridge"} Jan 23 19:05:43.259975 containerd[1707]: delegateAdd: netconf sent to delegate plugin: Jan 23 19:05:43.300209 containerd[1707]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-23T19:05:43.300147569Z" level=info msg="connecting to shim 4c9833c8127b8557bb53e493504289bae350dd56b32dab95031d519fc5f8cf0e" address="unix:///run/containerd/s/e0c88d5c339c632e29a3f5f06a2246d3082c79c4ddff115f0a96d2bd5f81d8c9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:05:43.323898 systemd[1]: Started cri-containerd-4c9833c8127b8557bb53e493504289bae350dd56b32dab95031d519fc5f8cf0e.scope - libcontainer container 4c9833c8127b8557bb53e493504289bae350dd56b32dab95031d519fc5f8cf0e. Jan 23 19:05:43.360692 containerd[1707]: time="2026-01-23T19:05:43.360650899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-trrz7,Uid:903b6b05-2103-4e3c-af73-4ee22a8f68b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c9833c8127b8557bb53e493504289bae350dd56b32dab95031d519fc5f8cf0e\"" Jan 23 19:05:43.369493 containerd[1707]: time="2026-01-23T19:05:43.369467906Z" level=info msg="CreateContainer within sandbox \"4c9833c8127b8557bb53e493504289bae350dd56b32dab95031d519fc5f8cf0e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:05:43.389831 containerd[1707]: time="2026-01-23T19:05:43.389064198Z" level=info msg="Container 115371430ad1a390b62ad9e187c977b50dd5d093af715eabcb91ff0fb8154fd8: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:43.406056 containerd[1707]: time="2026-01-23T19:05:43.406032315Z" level=info msg="CreateContainer within sandbox \"4c9833c8127b8557bb53e493504289bae350dd56b32dab95031d519fc5f8cf0e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"115371430ad1a390b62ad9e187c977b50dd5d093af715eabcb91ff0fb8154fd8\"" Jan 23 19:05:43.407411 containerd[1707]: time="2026-01-23T19:05:43.406408165Z" level=info msg="StartContainer for 
\"115371430ad1a390b62ad9e187c977b50dd5d093af715eabcb91ff0fb8154fd8\"" Jan 23 19:05:43.407411 containerd[1707]: time="2026-01-23T19:05:43.407292625Z" level=info msg="connecting to shim 115371430ad1a390b62ad9e187c977b50dd5d093af715eabcb91ff0fb8154fd8" address="unix:///run/containerd/s/e0c88d5c339c632e29a3f5f06a2246d3082c79c4ddff115f0a96d2bd5f81d8c9" protocol=ttrpc version=3 Jan 23 19:05:43.421899 systemd[1]: Started cri-containerd-115371430ad1a390b62ad9e187c977b50dd5d093af715eabcb91ff0fb8154fd8.scope - libcontainer container 115371430ad1a390b62ad9e187c977b50dd5d093af715eabcb91ff0fb8154fd8. Jan 23 19:05:43.447205 containerd[1707]: time="2026-01-23T19:05:43.447091447Z" level=info msg="StartContainer for \"115371430ad1a390b62ad9e187c977b50dd5d093af715eabcb91ff0fb8154fd8\" returns successfully" Jan 23 19:05:44.185071 containerd[1707]: time="2026-01-23T19:05:44.185031774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9cd8f,Uid:ddb9b9c3-1552-4fdd-910c-1a27826c897f,Namespace:kube-system,Attempt:0,}" Jan 23 19:05:44.211871 systemd-networkd[1340]: vethcf384df4: Link UP Jan 23 19:05:44.214495 kernel: cni0: port 2(vethcf384df4) entered blocking state Jan 23 19:05:44.214565 kernel: cni0: port 2(vethcf384df4) entered disabled state Jan 23 19:05:44.215153 kernel: vethcf384df4: entered allmulticast mode Jan 23 19:05:44.215925 kernel: vethcf384df4: entered promiscuous mode Jan 23 19:05:44.224320 kernel: cni0: port 2(vethcf384df4) entered blocking state Jan 23 19:05:44.224373 kernel: cni0: port 2(vethcf384df4) entered forwarding state Jan 23 19:05:44.224551 systemd-networkd[1340]: vethcf384df4: Gained carrier Jan 23 19:05:44.225957 containerd[1707]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 
0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00008c950), "name":"cbr0", "type":"bridge"} Jan 23 19:05:44.225957 containerd[1707]: delegateAdd: netconf sent to delegate plugin: Jan 23 19:05:44.267072 containerd[1707]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-23T19:05:44.266939615Z" level=info msg="connecting to shim eddecf808c5d2493812bb04d1d0f84e3e5bf5fe3379ddb572ee14f8cd5e59434" address="unix:///run/containerd/s/45c80ea9b32ef2268e3787973fae7a6cfbfedd0eafc2f705b6282fef6c7878be" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:05:44.295222 kubelet[3117]: I0123 19:05:44.295169 3117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-trrz7" podStartSLOduration=19.29515269 podStartE2EDuration="19.29515269s" podCreationTimestamp="2026-01-23 19:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:05:44.280610107 +0000 UTC m=+25.186223223" watchObservedRunningTime="2026-01-23 19:05:44.29515269 +0000 UTC m=+25.200765812" Jan 23 19:05:44.297919 systemd[1]: Started cri-containerd-eddecf808c5d2493812bb04d1d0f84e3e5bf5fe3379ddb572ee14f8cd5e59434.scope - libcontainer container eddecf808c5d2493812bb04d1d0f84e3e5bf5fe3379ddb572ee14f8cd5e59434. 
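The sandbox failures at 19:05:31 and the successful retries from 19:05:43 bracket one mechanism: the flannel CNI plugin reads `/run/flannel/subnet.env` (written by the kube-flannel daemon once it is running) and uses it to fill in the bridge delegate netconf that containerd prints above. A rough sketch of that step, with illustrative file contents chosen to match the logged netconf — the parser below is a simplification, not flannel's actual code:

```python
def load_flannel_subnet_env(text: str) -> dict:
    """Parse the KEY=VALUE lines written to /run/flannel/subnet.env."""
    env = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

def delegate_netconf(env: dict) -> dict:
    """Build the fields flannel injects into its bridge delegate config."""
    return {
        "name": "cbr0",
        "type": "bridge",
        "mtu": int(env["FLANNEL_MTU"]),
        "ipMasq": env["FLANNEL_IPMASQ"].lower() == "true",
        "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": env["FLANNEL_SUBNET"]}]],
            "routes": [{"dst": env["FLANNEL_NETWORK"]}],
        },
    }

# Illustrative contents consistent with the netconf logged above; the
# real file is written by the kube-flannel daemon after it starts.
SUBNET_ENV = """\
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
"""
conf = delegate_netconf(load_flannel_subnet_env(SUBNET_ENV))
```

Until kube-flannel writes that file, every CNI ADD fails exactly as logged (`loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory`); the MTU of 1450 is consistent with the 50-byte VXLAN encapsulation overhead taken off a 1500-byte host MTU.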
Jan 23 19:05:44.347308 containerd[1707]: time="2026-01-23T19:05:44.347222755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9cd8f,Uid:ddb9b9c3-1552-4fdd-910c-1a27826c897f,Namespace:kube-system,Attempt:0,} returns sandbox id \"eddecf808c5d2493812bb04d1d0f84e3e5bf5fe3379ddb572ee14f8cd5e59434\"" Jan 23 19:05:44.353771 containerd[1707]: time="2026-01-23T19:05:44.353734959Z" level=info msg="CreateContainer within sandbox \"eddecf808c5d2493812bb04d1d0f84e3e5bf5fe3379ddb572ee14f8cd5e59434\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:05:44.379776 containerd[1707]: time="2026-01-23T19:05:44.379337447Z" level=info msg="Container a1501be10841b3064c256123f514535e1fb8c9999c0445367fa18ea94f3fd903: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:05:44.392349 containerd[1707]: time="2026-01-23T19:05:44.392321251Z" level=info msg="CreateContainer within sandbox \"eddecf808c5d2493812bb04d1d0f84e3e5bf5fe3379ddb572ee14f8cd5e59434\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a1501be10841b3064c256123f514535e1fb8c9999c0445367fa18ea94f3fd903\"" Jan 23 19:05:44.392679 containerd[1707]: time="2026-01-23T19:05:44.392661958Z" level=info msg="StartContainer for \"a1501be10841b3064c256123f514535e1fb8c9999c0445367fa18ea94f3fd903\"" Jan 23 19:05:44.393644 containerd[1707]: time="2026-01-23T19:05:44.393536139Z" level=info msg="connecting to shim a1501be10841b3064c256123f514535e1fb8c9999c0445367fa18ea94f3fd903" address="unix:///run/containerd/s/45c80ea9b32ef2268e3787973fae7a6cfbfedd0eafc2f705b6282fef6c7878be" protocol=ttrpc version=3 Jan 23 19:05:44.408879 systemd[1]: Started cri-containerd-a1501be10841b3064c256123f514535e1fb8c9999c0445367fa18ea94f3fd903.scope - libcontainer container a1501be10841b3064c256123f514535e1fb8c9999c0445367fa18ea94f3fd903. 
Jan 23 19:05:44.436990 containerd[1707]: time="2026-01-23T19:05:44.436656205Z" level=info msg="StartContainer for \"a1501be10841b3064c256123f514535e1fb8c9999c0445367fa18ea94f3fd903\" returns successfully" Jan 23 19:05:44.955914 systemd-networkd[1340]: veth9c9d6361: Gained IPv6LL Jan 23 19:05:45.083913 systemd-networkd[1340]: cni0: Gained IPv6LL Jan 23 19:05:45.595918 systemd-networkd[1340]: vethcf384df4: Gained IPv6LL Jan 23 19:05:55.280330 kubelet[3117]: I0123 19:05:55.279301 3117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9cd8f" podStartSLOduration=30.279286229 podStartE2EDuration="30.279286229s" podCreationTimestamp="2026-01-23 19:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:05:45.281502093 +0000 UTC m=+26.187115214" watchObservedRunningTime="2026-01-23 19:05:55.279286229 +0000 UTC m=+36.184899351" Jan 23 19:06:52.388673 systemd[1]: Started sshd@5-10.200.4.21:22-10.200.16.10:51630.service - OpenSSH per-connection server daemon (10.200.16.10:51630). Jan 23 19:06:53.002664 sshd[4263]: Accepted publickey for core from 10.200.16.10 port 51630 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:06:53.003083 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:53.007195 systemd-logind[1683]: New session 8 of user core. Jan 23 19:06:53.009917 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 19:06:53.487153 sshd[4266]: Connection closed by 10.200.16.10 port 51630 Jan 23 19:06:53.487957 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:53.491235 systemd[1]: sshd@5-10.200.4.21:22-10.200.16.10:51630.service: Deactivated successfully. Jan 23 19:06:53.491407 systemd-logind[1683]: Session 8 logged out. Waiting for processes to exit. 
Jan 23 19:06:53.493176 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 19:06:53.494624 systemd-logind[1683]: Removed session 8. Jan 23 19:06:58.596481 systemd[1]: Started sshd@6-10.200.4.21:22-10.200.16.10:51642.service - OpenSSH per-connection server daemon (10.200.16.10:51642). Jan 23 19:06:59.211835 sshd[4304]: Accepted publickey for core from 10.200.16.10 port 51642 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:06:59.212941 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:06:59.216832 systemd-logind[1683]: New session 9 of user core. Jan 23 19:06:59.225890 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 19:06:59.687935 sshd[4327]: Connection closed by 10.200.16.10 port 51642 Jan 23 19:06:59.688927 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Jan 23 19:06:59.692125 systemd[1]: sshd@6-10.200.4.21:22-10.200.16.10:51642.service: Deactivated successfully. Jan 23 19:06:59.693889 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 19:06:59.694565 systemd-logind[1683]: Session 9 logged out. Waiting for processes to exit. Jan 23 19:06:59.695520 systemd-logind[1683]: Removed session 9. Jan 23 19:06:59.798283 systemd[1]: Started sshd@7-10.200.4.21:22-10.200.16.10:60084.service - OpenSSH per-connection server daemon (10.200.16.10:60084). Jan 23 19:07:00.418781 sshd[4340]: Accepted publickey for core from 10.200.16.10 port 60084 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:07:00.419530 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:00.423739 systemd-logind[1683]: New session 10 of user core. Jan 23 19:07:00.432900 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 19:07:00.923465 sshd[4343]: Connection closed by 10.200.16.10 port 60084 Jan 23 19:07:00.924002 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:00.927057 systemd[1]: sshd@7-10.200.4.21:22-10.200.16.10:60084.service: Deactivated successfully. Jan 23 19:07:00.928778 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 19:07:00.929443 systemd-logind[1683]: Session 10 logged out. Waiting for processes to exit. Jan 23 19:07:00.930515 systemd-logind[1683]: Removed session 10. Jan 23 19:07:01.040207 systemd[1]: Started sshd@8-10.200.4.21:22-10.200.16.10:60094.service - OpenSSH per-connection server daemon (10.200.16.10:60094). Jan 23 19:07:01.661795 sshd[4353]: Accepted publickey for core from 10.200.16.10 port 60094 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:07:01.662818 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:01.666994 systemd-logind[1683]: New session 11 of user core. Jan 23 19:07:01.672899 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 19:07:02.145335 sshd[4356]: Connection closed by 10.200.16.10 port 60094 Jan 23 19:07:02.145931 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:02.149200 systemd[1]: sshd@8-10.200.4.21:22-10.200.16.10:60094.service: Deactivated successfully. Jan 23 19:07:02.150953 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 19:07:02.151781 systemd-logind[1683]: Session 11 logged out. Waiting for processes to exit. Jan 23 19:07:02.153183 systemd-logind[1683]: Removed session 11. Jan 23 19:07:07.256551 systemd[1]: Started sshd@9-10.200.4.21:22-10.200.16.10:60110.service - OpenSSH per-connection server daemon (10.200.16.10:60110). 
Jan 23 19:07:07.872165 sshd[4388]: Accepted publickey for core from 10.200.16.10 port 60110 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:07:07.873370 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:07.878544 systemd-logind[1683]: New session 12 of user core. Jan 23 19:07:07.888875 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 19:07:08.351282 sshd[4391]: Connection closed by 10.200.16.10 port 60110 Jan 23 19:07:08.351918 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:08.355234 systemd-logind[1683]: Session 12 logged out. Waiting for processes to exit. Jan 23 19:07:08.355445 systemd[1]: sshd@9-10.200.4.21:22-10.200.16.10:60110.service: Deactivated successfully. Jan 23 19:07:08.357310 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 19:07:08.359002 systemd-logind[1683]: Removed session 12. Jan 23 19:07:13.466099 systemd[1]: Started sshd@10-10.200.4.21:22-10.200.16.10:54636.service - OpenSSH per-connection server daemon (10.200.16.10:54636). Jan 23 19:07:14.090477 sshd[4423]: Accepted publickey for core from 10.200.16.10 port 54636 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:07:14.091581 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:07:14.095705 systemd-logind[1683]: New session 13 of user core. Jan 23 19:07:14.103896 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 19:07:14.572217 sshd[4446]: Connection closed by 10.200.16.10 port 54636 Jan 23 19:07:14.573691 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Jan 23 19:07:14.576771 systemd-logind[1683]: Session 13 logged out. Waiting for processes to exit. Jan 23 19:07:14.576918 systemd[1]: sshd@10-10.200.4.21:22-10.200.16.10:54636.service: Deactivated successfully. 
Jan 23 19:07:14.578570 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 19:07:14.580141 systemd-logind[1683]: Removed session 13.
Jan 23 19:07:19.686634 systemd[1]: Started sshd@11-10.200.4.21:22-10.200.16.10:40542.service - OpenSSH per-connection server daemon (10.200.16.10:40542).
Jan 23 19:07:20.304039 sshd[4480]: Accepted publickey for core from 10.200.16.10 port 40542 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:07:20.305092 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:07:20.309262 systemd-logind[1683]: New session 14 of user core.
Jan 23 19:07:20.315910 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 19:07:20.782652 sshd[4483]: Connection closed by 10.200.16.10 port 40542
Jan 23 19:07:20.783978 sshd-session[4480]: pam_unix(sshd:session): session closed for user core
Jan 23 19:07:20.786528 systemd[1]: sshd@11-10.200.4.21:22-10.200.16.10:40542.service: Deactivated successfully.
Jan 23 19:07:20.788124 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 19:07:20.789392 systemd-logind[1683]: Session 14 logged out. Waiting for processes to exit.
Jan 23 19:07:20.791009 systemd-logind[1683]: Removed session 14.
Jan 23 19:07:20.890386 systemd[1]: Started sshd@12-10.200.4.21:22-10.200.16.10:40550.service - OpenSSH per-connection server daemon (10.200.16.10:40550).
Jan 23 19:07:21.503962 sshd[4495]: Accepted publickey for core from 10.200.16.10 port 40550 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:07:21.505107 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:07:21.509293 systemd-logind[1683]: New session 15 of user core.
Jan 23 19:07:21.516866 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 19:07:22.014130 sshd[4498]: Connection closed by 10.200.16.10 port 40550
Jan 23 19:07:22.015629 sshd-session[4495]: pam_unix(sshd:session): session closed for user core
Jan 23 19:07:22.018575 systemd-logind[1683]: Session 15 logged out. Waiting for processes to exit.
Jan 23 19:07:22.019099 systemd[1]: sshd@12-10.200.4.21:22-10.200.16.10:40550.service: Deactivated successfully.
Jan 23 19:07:22.021162 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 19:07:22.022691 systemd-logind[1683]: Removed session 15.
Jan 23 19:07:22.127314 systemd[1]: Started sshd@13-10.200.4.21:22-10.200.16.10:40560.service - OpenSSH per-connection server daemon (10.200.16.10:40560).
Jan 23 19:07:22.749490 sshd[4508]: Accepted publickey for core from 10.200.16.10 port 40560 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:07:22.750576 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:07:22.754709 systemd-logind[1683]: New session 16 of user core.
Jan 23 19:07:22.760896 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 19:07:23.657012 sshd[4511]: Connection closed by 10.200.16.10 port 40560
Jan 23 19:07:23.657922 sshd-session[4508]: pam_unix(sshd:session): session closed for user core
Jan 23 19:07:23.660490 systemd[1]: sshd@13-10.200.4.21:22-10.200.16.10:40560.service: Deactivated successfully.
Jan 23 19:07:23.662311 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 19:07:23.663653 systemd-logind[1683]: Session 16 logged out. Waiting for processes to exit.
Jan 23 19:07:23.664948 systemd-logind[1683]: Removed session 16.
Jan 23 19:07:23.770295 systemd[1]: Started sshd@14-10.200.4.21:22-10.200.16.10:40572.service - OpenSSH per-connection server daemon (10.200.16.10:40572).
Jan 23 19:07:24.396357 sshd[4533]: Accepted publickey for core from 10.200.16.10 port 40572 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:07:24.397218 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:07:24.400827 systemd-logind[1683]: New session 17 of user core.
Jan 23 19:07:24.405897 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 19:07:24.960081 sshd[4550]: Connection closed by 10.200.16.10 port 40572
Jan 23 19:07:24.960927 sshd-session[4533]: pam_unix(sshd:session): session closed for user core
Jan 23 19:07:24.963626 systemd[1]: sshd@14-10.200.4.21:22-10.200.16.10:40572.service: Deactivated successfully.
Jan 23 19:07:24.965310 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 19:07:24.967278 systemd-logind[1683]: Session 17 logged out. Waiting for processes to exit.
Jan 23 19:07:24.967964 systemd-logind[1683]: Removed session 17.
Jan 23 19:07:25.073452 systemd[1]: Started sshd@15-10.200.4.21:22-10.200.16.10:40586.service - OpenSSH per-connection server daemon (10.200.16.10:40586).
Jan 23 19:07:25.687018 sshd[4560]: Accepted publickey for core from 10.200.16.10 port 40586 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:07:25.688130 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:07:25.692386 systemd-logind[1683]: New session 18 of user core.
Jan 23 19:07:25.698911 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 19:07:26.164424 sshd[4563]: Connection closed by 10.200.16.10 port 40586
Jan 23 19:07:26.164909 sshd-session[4560]: pam_unix(sshd:session): session closed for user core
Jan 23 19:07:26.168314 systemd[1]: sshd@15-10.200.4.21:22-10.200.16.10:40586.service: Deactivated successfully.
Jan 23 19:07:26.169926 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 19:07:26.170595 systemd-logind[1683]: Session 18 logged out. Waiting for processes to exit.
Jan 23 19:07:26.171939 systemd-logind[1683]: Removed session 18.
Jan 23 19:07:31.289840 systemd[1]: Started sshd@16-10.200.4.21:22-10.200.16.10:41696.service - OpenSSH per-connection server daemon (10.200.16.10:41696).
Jan 23 19:07:31.912535 sshd[4599]: Accepted publickey for core from 10.200.16.10 port 41696 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:07:31.912956 sshd-session[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:07:31.917063 systemd-logind[1683]: New session 19 of user core.
Jan 23 19:07:31.922930 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 19:07:32.390980 sshd[4602]: Connection closed by 10.200.16.10 port 41696
Jan 23 19:07:32.391929 sshd-session[4599]: pam_unix(sshd:session): session closed for user core
Jan 23 19:07:32.394445 systemd[1]: sshd@16-10.200.4.21:22-10.200.16.10:41696.service: Deactivated successfully.
Jan 23 19:07:32.396141 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 19:07:32.397694 systemd-logind[1683]: Session 19 logged out. Waiting for processes to exit.
Jan 23 19:07:32.398520 systemd-logind[1683]: Removed session 19.
Jan 23 19:07:37.504835 systemd[1]: Started sshd@17-10.200.4.21:22-10.200.16.10:41698.service - OpenSSH per-connection server daemon (10.200.16.10:41698).
Jan 23 19:07:38.120723 sshd[4634]: Accepted publickey for core from 10.200.16.10 port 41698 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:07:38.121147 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:07:38.125192 systemd-logind[1683]: New session 20 of user core.
Jan 23 19:07:38.130917 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 19:07:38.597556 sshd[4637]: Connection closed by 10.200.16.10 port 41698
Jan 23 19:07:38.598054 sshd-session[4634]: pam_unix(sshd:session): session closed for user core
Jan 23 19:07:38.601030 systemd[1]: sshd@17-10.200.4.21:22-10.200.16.10:41698.service: Deactivated successfully.
Jan 23 19:07:38.602620 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 19:07:38.603373 systemd-logind[1683]: Session 20 logged out. Waiting for processes to exit.
Jan 23 19:07:38.604703 systemd-logind[1683]: Removed session 20.