Dec 16 13:04:20.895162 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 16 13:04:20.895186 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:04:20.895198 kernel: BIOS-provided physical RAM map: Dec 16 13:04:20.895205 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 16 13:04:20.895210 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 16 13:04:20.895216 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Dec 16 13:04:20.895223 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Dec 16 13:04:20.895229 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Dec 16 13:04:20.895235 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Dec 16 13:04:20.895242 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 16 13:04:20.895248 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 16 13:04:20.895254 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 16 13:04:20.895260 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 16 13:04:20.895266 kernel: printk: legacy bootconsole [earlyser0] enabled Dec 16 13:04:20.895274 kernel: NX (Execute Disable) protection: active Dec 16 13:04:20.895282 kernel: APIC: Static calls initialized Dec 16 13:04:20.895288 kernel: efi: EFI v2.7 by Microsoft Dec 16 13:04:20.895295 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa3018 RNG=0x3ffd2018 Dec 16 13:04:20.895302 kernel: random: crng init done Dec 16 13:04:20.895308 kernel: secureboot: Secure boot disabled Dec 16 13:04:20.895315 kernel: SMBIOS 3.1.0 present. 
Dec 16 13:04:20.895321 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Dec 16 13:04:20.895328 kernel: DMI: Memory slots populated: 2/2 Dec 16 13:04:20.895334 kernel: Hypervisor detected: Microsoft Hyper-V Dec 16 13:04:20.895340 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Dec 16 13:04:20.895347 kernel: Hyper-V: Nested features: 0x3e0101 Dec 16 13:04:20.895354 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 16 13:04:20.895360 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 16 13:04:20.895367 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 16 13:04:20.895373 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 16 13:04:20.895379 kernel: tsc: Detected 2299.999 MHz processor Dec 16 13:04:20.895386 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 13:04:20.895392 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 13:04:20.895398 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Dec 16 13:04:20.895405 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 16 13:04:20.895411 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 13:04:20.895419 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Dec 16 13:04:20.895426 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Dec 16 13:04:20.895432 kernel: Using GB pages for direct mapping Dec 16 13:04:20.895439 kernel: ACPI: Early table checksum verification disabled Dec 16 13:04:20.895448 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 16 13:04:20.895457 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:04:20.895471 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:04:20.895478 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 16 13:04:20.895484 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 16 13:04:20.895491 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:04:20.895499 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:04:20.895505 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:04:20.895513 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 16 13:04:20.895523 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Dec 16 13:04:20.895530 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 13:04:20.895536 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 16 13:04:20.895543 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Dec 16 13:04:20.895550 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 16 13:04:20.895557 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 16 13:04:20.895564 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 16 13:04:20.895572 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 16 13:04:20.895579 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Dec 16 13:04:20.895587 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Dec 16 13:04:20.895594 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 16 13:04:20.895601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 16 13:04:20.895608 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Dec 16 13:04:20.895615 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Dec 16 13:04:20.895622 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Dec 16 13:04:20.895629 kernel: Zone ranges: Dec 16 13:04:20.895636 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 13:04:20.895643 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 16 13:04:20.895651 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 16 13:04:20.895658 kernel: Device empty Dec 16 13:04:20.895665 kernel: Movable zone start for each node Dec 16 13:04:20.895672 kernel: Early memory node ranges Dec 16 13:04:20.895678 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 16 13:04:20.895685 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Dec 16 13:04:20.895692 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Dec 16 13:04:20.895698 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 16 13:04:20.895705 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 16 13:04:20.895713 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 16 13:04:20.895719 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:04:20.895726 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 16 13:04:20.895733 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Dec 16 13:04:20.895740 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Dec 16 13:04:20.895746 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 16 13:04:20.895753 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 16 13:04:20.895760 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 13:04:20.895766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 13:04:20.895774 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 13:04:20.895781 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 16 13:04:20.895788 kernel: TSC deadline timer available Dec 16 13:04:20.895795 kernel: CPU topo: Max. logical packages: 1 Dec 16 13:04:20.895801 kernel: CPU topo: Max. logical dies: 1 Dec 16 13:04:20.895808 kernel: CPU topo: Max. dies per package: 1 Dec 16 13:04:20.895815 kernel: CPU topo: Max. threads per core: 2 Dec 16 13:04:20.895821 kernel: CPU topo: Num. cores per package: 1 Dec 16 13:04:20.895828 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 13:04:20.895835 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 13:04:20.895843 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 16 13:04:20.895850 kernel: Booting paravirtualized kernel on Hyper-V Dec 16 13:04:20.895857 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 13:04:20.895864 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 13:04:20.895871 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 13:04:20.895878 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 13:04:20.895884 kernel: pcpu-alloc: [0] 0 1 Dec 16 13:04:20.895891 kernel: Hyper-V: PV spinlocks enabled Dec 16 13:04:20.895898 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 13:04:20.895907 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:04:20.895914 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 16 13:04:20.895921 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 13:04:20.895928 kernel: Fallback order for Node 0: 0 Dec 16 13:04:20.895934 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Dec 16 13:04:20.895941 kernel: Policy zone: Normal Dec 16 13:04:20.895948 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 13:04:20.895954 kernel: software IO TLB: area num 2. Dec 16 13:04:20.895963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 13:04:20.895969 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 13:04:20.895976 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 13:04:20.895983 kernel: Dynamic Preempt: voluntary Dec 16 13:04:20.895989 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 13:04:20.895999 kernel: rcu: RCU event tracing is enabled. Dec 16 13:04:20.896011 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 13:04:20.896020 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 13:04:20.896027 kernel: Rude variant of Tasks RCU enabled. Dec 16 13:04:20.896034 kernel: Tracing variant of Tasks RCU enabled. Dec 16 13:04:20.896042 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 13:04:20.896050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 13:04:20.896058 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:04:20.896065 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:04:20.896073 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:04:20.896080 kernel: Using NULL legacy PIC Dec 16 13:04:20.896089 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 16 13:04:20.896096 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 16 13:04:20.896103 kernel: Console: colour dummy device 80x25 Dec 16 13:04:20.896111 kernel: printk: legacy console [tty1] enabled Dec 16 13:04:20.896118 kernel: printk: legacy console [ttyS0] enabled Dec 16 13:04:20.896136 kernel: printk: legacy bootconsole [earlyser0] disabled Dec 16 13:04:20.896145 kernel: ACPI: Core revision 20240827 Dec 16 13:04:20.896152 kernel: Failed to register legacy timer interrupt Dec 16 13:04:20.896320 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 13:04:20.899347 kernel: x2apic enabled Dec 16 13:04:20.899361 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 13:04:20.899372 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Dec 16 13:04:20.899381 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 13:04:20.899389 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Dec 16 13:04:20.899398 kernel: Hyper-V: Using IPI hypercalls Dec 16 13:04:20.899407 kernel: APIC: send_IPI() replaced with hv_send_ipi() Dec 16 13:04:20.899416 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Dec 16 13:04:20.899425 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Dec 16 13:04:20.899438 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Dec 16 13:04:20.899447 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Dec 16 13:04:20.899456 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Dec 16 13:04:20.899465 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Dec 16 13:04:20.899474 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999) Dec 16 13:04:20.899483 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 13:04:20.899492 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 13:04:20.899501 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 13:04:20.899510 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 13:04:20.899519 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 13:04:20.899529 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 13:04:20.899538 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 16 13:04:20.899546 kernel: RETBleed: Vulnerable Dec 16 13:04:20.899555 kernel: Speculative Store Bypass: Vulnerable Dec 16 13:04:20.899563 kernel: active return thunk: its_return_thunk Dec 16 13:04:20.899572 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 13:04:20.899581 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 13:04:20.899590 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 13:04:20.899598 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 13:04:20.899607 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 16 13:04:20.899617 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 16 13:04:20.899626 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 16 13:04:20.899635 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Dec 16 13:04:20.899643 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Dec 16 13:04:20.899652 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Dec 16 13:04:20.899661 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 13:04:20.899670 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 16 13:04:20.899678 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 16 13:04:20.899687 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 16 13:04:20.899695 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Dec 16 13:04:20.899704 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Dec 16 13:04:20.899713 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Dec 16 13:04:20.899723 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Dec 16 13:04:20.899731 kernel: Freeing SMP alternatives memory: 32K Dec 16 13:04:20.899740 kernel: pid_max: default: 32768 minimum: 301 Dec 16 13:04:20.899749 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 13:04:20.899757 kernel: landlock: Up and running. Dec 16 13:04:20.899766 kernel: SELinux: Initializing. Dec 16 13:04:20.899775 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:04:20.899783 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:04:20.899792 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Dec 16 13:04:20.899801 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Dec 16 13:04:20.899810 kernel: signal: max sigframe size: 11952 Dec 16 13:04:20.899820 kernel: rcu: Hierarchical SRCU implementation. Dec 16 13:04:20.899830 kernel: rcu: Max phase no-delay instances is 400. Dec 16 13:04:20.899839 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 13:04:20.899848 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 13:04:20.899857 kernel: smp: Bringing up secondary CPUs ... Dec 16 13:04:20.899866 kernel: smpboot: x86: Booting SMP configuration: Dec 16 13:04:20.899875 kernel: .... 
node #0, CPUs: #1 Dec 16 13:04:20.899884 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 13:04:20.899893 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 16 13:04:20.899904 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 308180K reserved, 0K cma-reserved) Dec 16 13:04:20.899912 kernel: devtmpfs: initialized Dec 16 13:04:20.899921 kernel: x86/mm: Memory block size: 128MB Dec 16 13:04:20.899930 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 16 13:04:20.899939 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 13:04:20.899948 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 13:04:20.899957 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 13:04:20.899966 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 13:04:20.899975 kernel: audit: initializing netlink subsys (disabled) Dec 16 13:04:20.899985 kernel: audit: type=2000 audit(1765890258.074:1): state=initialized audit_enabled=0 res=1 Dec 16 13:04:20.899993 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 13:04:20.900002 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 13:04:20.900011 kernel: cpuidle: using governor menu Dec 16 13:04:20.900020 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 13:04:20.900029 kernel: dca service started, version 1.12.1 Dec 16 13:04:20.900038 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Dec 16 13:04:20.900046 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Dec 16 13:04:20.900057 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 13:04:20.900065 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 13:04:20.900074 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 13:04:20.900083 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 13:04:20.900092 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 13:04:20.900101 kernel: ACPI: Added _OSI(Module Device) Dec 16 13:04:20.900110 kernel: ACPI: Added _OSI(Processor Device) Dec 16 13:04:20.900119 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 13:04:20.900152 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 13:04:20.900161 kernel: ACPI: Interpreter enabled Dec 16 13:04:20.900170 kernel: ACPI: PM: (supports S0 S5) Dec 16 13:04:20.900178 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 13:04:20.900186 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 13:04:20.900195 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 16 13:04:20.900203 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 16 13:04:20.900212 kernel: iommu: Default domain type: Translated Dec 16 13:04:20.900221 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 13:04:20.900229 kernel: efivars: Registered efivars operations Dec 16 13:04:20.900238 kernel: PCI: Using ACPI for IRQ routing Dec 16 13:04:20.900246 kernel: PCI: System does not support PCI Dec 16 13:04:20.900254 kernel: vgaarb: loaded Dec 16 13:04:20.900264 kernel: clocksource: Switched to clocksource tsc-early Dec 16 13:04:20.900272 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 13:04:20.900280 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 13:04:20.900289 kernel: pnp: PnP ACPI init Dec 16 13:04:20.900298 kernel: pnp: PnP ACPI: found 3 devices Dec 16 13:04:20.900306 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 13:04:20.900315 kernel: NET: Registered PF_INET protocol family Dec 16 13:04:20.900326 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 13:04:20.900334 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 16 13:04:20.900343 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 13:04:20.900352 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 13:04:20.900361 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 16 13:04:20.900370 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 16 13:04:20.900378 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:04:20.900386 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:04:20.900396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 13:04:20.900405 kernel: NET: Registered PF_XDP protocol family Dec 16 13:04:20.900414 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:04:20.900423 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 13:04:20.900432 kernel: software IO TLB: mapped [mem 0x000000003a9b9000-0x000000003e9b9000] (64MB) Dec 16 13:04:20.900441 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Dec 16 13:04:20.900448 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Dec 16 13:04:20.900456 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, 
max_idle_ns: 440795318347 ns Dec 16 13:04:20.900463 kernel: clocksource: Switched to clocksource tsc Dec 16 13:04:20.900472 kernel: Initialise system trusted keyrings Dec 16 13:04:20.900480 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 16 13:04:20.900487 kernel: Key type asymmetric registered Dec 16 13:04:20.900495 kernel: Asymmetric key parser 'x509' registered Dec 16 13:04:20.900502 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:04:20.900510 kernel: io scheduler mq-deadline registered Dec 16 13:04:20.900518 kernel: io scheduler kyber registered Dec 16 13:04:20.900525 kernel: io scheduler bfq registered Dec 16 13:04:20.900533 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:04:20.900542 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:04:20.900550 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:04:20.900557 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 16 13:04:20.900565 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:04:20.900572 kernel: i8042: PNP: No PS/2 controller found. Dec 16 13:04:20.900690 kernel: rtc_cmos 00:02: registered as rtc0 Dec 16 13:04:20.900758 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T13:04:20 UTC (1765890260) Dec 16 13:04:20.900821 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 16 13:04:20.900831 kernel: intel_pstate: Intel P-state driver initializing Dec 16 13:04:20.900839 kernel: efifb: probing for efifb Dec 16 13:04:20.900847 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 16 13:04:20.900854 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 16 13:04:20.900862 kernel: efifb: scrolling: redraw Dec 16 13:04:20.900870 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 13:04:20.900877 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 13:04:20.900885 kernel: fb0: EFI VGA frame buffer device Dec 16 13:04:20.900892 kernel: pstore: Using crash dump compression: deflate Dec 16 13:04:20.900901 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:04:20.900909 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:04:20.900916 kernel: Segment Routing with IPv6 Dec 16 13:04:20.900924 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:04:20.900931 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:04:20.900939 kernel: Key type dns_resolver registered Dec 16 13:04:20.900946 kernel: IPI shorthand broadcast: enabled Dec 16 13:04:20.900954 kernel: sched_clock: Marking stable (2616003961, 79605229)->(2983455922, -287846732) Dec 16 13:04:20.900961 kernel: registered taskstats version 1 Dec 16 13:04:20.900970 kernel: Loading compiled-in X.509 certificates Dec 16 13:04:20.900978 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 13:04:20.900986 kernel: Demotion targets for Node 0: null Dec 16 13:04:20.900993 kernel: Key type .fscrypt registered Dec 16 13:04:20.901001 kernel: Key type fscrypt-provisioning registered Dec 16 13:04:20.901008 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 13:04:20.901016 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:04:20.901024 kernel: ima: No architecture policies found Dec 16 13:04:20.901031 kernel: clk: Disabling unused clocks Dec 16 13:04:20.901040 kernel: Warning: unable to open an initial console. Dec 16 13:04:20.901048 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 13:04:20.901055 kernel: Write protecting the kernel read-only data: 40960k Dec 16 13:04:20.901063 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 13:04:20.901070 kernel: Run /init as init process Dec 16 13:04:20.901078 kernel: with arguments: Dec 16 13:04:20.901085 kernel: /init Dec 16 13:04:20.901093 kernel: with environment: Dec 16 13:04:20.901100 kernel: HOME=/ Dec 16 13:04:20.901109 kernel: TERM=linux Dec 16 13:04:20.901117 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:04:20.901140 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:04:20.901153 systemd[1]: Detected virtualization microsoft. Dec 16 13:04:20.901161 systemd[1]: Detected architecture x86-64. Dec 16 13:04:20.901169 systemd[1]: Running in initrd. Dec 16 13:04:20.901177 systemd[1]: No hostname configured, using default hostname. Dec 16 13:04:20.901187 systemd[1]: Hostname set to . Dec 16 13:04:20.901195 systemd[1]: Initializing machine ID from random generator. Dec 16 13:04:20.901203 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:04:20.901212 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:04:20.901220 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:04:20.901229 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:04:20.901237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:04:20.901245 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:04:20.901255 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:04:20.901264 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:04:20.901272 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:04:20.901281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:04:20.901289 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:04:20.901297 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:04:20.901305 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:04:20.901314 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:04:20.901322 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:04:20.901330 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:04:20.901338 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 16 13:04:20.901347 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:04:20.901355 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:04:20.901363 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:04:20.901371 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:04:20.901379 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:04:20.901388 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:04:20.901397 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:04:20.901405 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:04:20.901413 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:04:20.901421 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:04:20.901429 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:04:20.901437 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:04:20.901446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:04:20.901461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:20.901484 systemd-journald[186]: Collecting audit messages is disabled. Dec 16 13:04:20.901506 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:04:20.901516 systemd-journald[186]: Journal started Dec 16 13:04:20.901537 systemd-journald[186]: Runtime Journal (/run/log/journal/d0a868511c234218a72ee9aede537b1e) is 8M, max 158.6M, 150.6M free. Dec 16 13:04:20.894475 systemd-modules-load[187]: Inserted module 'overlay' Dec 16 13:04:20.911086 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:04:20.911342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:04:20.916472 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:04:20.921553 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:04:20.927398 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:04:20.934144 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:04:20.936541 systemd-modules-load[187]: Inserted module 'br_netfilter' Dec 16 13:04:20.938107 kernel: Bridge firewalling registered Dec 16 13:04:20.936933 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:20.942673 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:04:20.946093 systemd-tmpfiles[200]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:04:20.950256 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:04:20.954630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:04:20.960334 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:04:20.966584 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 16 13:04:20.972223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:04:20.979478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:04:20.982230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:04:20.988670 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:04:20.992554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:04:20.996759 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:04:21.017207 systemd-resolved[219]: Positive Trust Anchors: Dec 16 13:04:21.017495 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:04:21.017526 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:04:21.036649 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:04:21.023762 systemd-resolved[219]: Defaulting to hostname 'linux'. Dec 16 13:04:21.046987 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:04:21.052726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:04:21.081140 kernel: SCSI subsystem initialized Dec 16 13:04:21.088139 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:04:21.096143 kernel: iscsi: registered transport (tcp) Dec 16 13:04:21.111144 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:04:21.111180 kernel: QLogic iSCSI HBA Driver Dec 16 13:04:21.122629 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:04:21.133924 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:04:21.136399 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:04:21.165792 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:04:21.168558 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Dec 16 13:04:21.219143 kernel: raid6: avx512x4 gen() 48383 MB/s Dec 16 13:04:21.236135 kernel: raid6: avx512x2 gen() 46790 MB/s Dec 16 13:04:21.253137 kernel: raid6: avx512x1 gen() 30057 MB/s Dec 16 13:04:21.271134 kernel: raid6: avx2x4 gen() 41809 MB/s Dec 16 13:04:21.289137 kernel: raid6: avx2x2 gen() 43999 MB/s Dec 16 13:04:21.307255 kernel: raid6: avx2x1 gen() 30905 MB/s Dec 16 13:04:21.307268 kernel: raid6: using algorithm avx512x4 gen() 48383 MB/s Dec 16 13:04:21.325515 kernel: raid6: .... xor() 8036 MB/s, rmw enabled Dec 16 13:04:21.325533 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:04:21.342144 kernel: xor: automatically using best checksumming function avx Dec 16 13:04:21.446139 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:04:21.450298 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:04:21.453229 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:04:21.466926 systemd-udevd[436]: Using default interface naming scheme 'v255'. Dec 16 13:04:21.470455 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:04:21.476712 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:04:21.492229 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Dec 16 13:04:21.507903 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:04:21.510222 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:04:21.536599 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:04:21.542235 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:04:21.582143 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:04:21.597531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:21.597995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:21.603947 kernel: AES CTR mode by8 optimization enabled Dec 16 13:04:21.605190 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:21.611382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:21.617203 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 13:04:21.621270 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:21.623300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:21.628262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:21.646637 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 13:04:21.646666 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 13:04:21.648307 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 13:04:21.655057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 13:04:21.667072 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 13:04:21.667087 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 16 13:04:21.677603 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5228f86c (unnamed net_device) (uninitialized): VF slot 1 added Dec 16 13:04:21.677746 kernel: hv_vmbus: registering driver hv_pci Dec 16 13:04:21.680142 kernel: PTP clock support registered Dec 16 13:04:21.683883 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Dec 16 13:04:21.689150 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 13:04:21.698171 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Dec 16 13:04:21.701288 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 13:04:21.701333 kernel: hv_vmbus: registering driver hv_utils Dec 16 13:04:21.702909 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Dec 16 13:04:21.706782 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 13:04:21.706811 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:04:21.709781 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 13:04:21.709811 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 13:04:21.711559 kernel: scsi host0: storvsc_host_t Dec 16 13:04:21.711729 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 13:04:21.677675 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 13:04:21.680334 systemd-journald[186]: Time jumped backwards, rotating. Dec 16 13:04:21.680375 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 13:04:21.676940 systemd-resolved[219]: Clock change detected. Flushing caches. Dec 16 13:04:21.686314 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 16 13:04:21.686339 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 13:04:21.692826 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Dec 16 13:04:21.695939 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Dec 16 13:04:21.703993 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 13:04:21.704116 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:04:21.706793 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 13:04:21.711939 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:04:21.712082 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Dec 16 13:04:21.719817 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#24 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:04:21.727276 kernel: nvme nvme0: pci function c05b:00:00.0 Dec 16 13:04:21.727438 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Dec 16 13:04:21.740952 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#46 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:04:21.886794 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 13:04:21.891793 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:04:22.150795 kernel: nvme nvme0: using unchecked data buffer Dec 16 13:04:22.354738 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 16 13:04:22.365015 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. 
Dec 16 13:04:22.404556 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Dec 16 13:04:22.485689 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Dec 16 13:04:22.486405 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Dec 16 13:04:22.492800 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:04:22.668176 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Dec 16 13:04:22.668327 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Dec 16 13:04:22.670549 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Dec 16 13:04:22.672010 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 13:04:22.676914 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Dec 16 13:04:22.679998 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Dec 16 13:04:22.685250 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Dec 16 13:04:22.685274 kernel: pci 7870:00:00.0: enabling Extended Tags Dec 16 13:04:22.700252 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 13:04:22.700476 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Dec 16 13:04:22.700606 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Dec 16 13:04:22.706301 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Dec 16 13:04:22.716797 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Dec 16 13:04:22.720410 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5228f86c eth0: VF registering: eth1 Dec 16 13:04:22.720569 kernel: mana 7870:00:00.0 eth1: joined to eth0 Dec 16 13:04:22.724797 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Dec 16 13:04:22.800953 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:04:22.801550 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:04:22.807148 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:04:22.807493 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:04:22.813237 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:04:22.828488 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:04:23.514823 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:04:23.515479 disk-uuid[644]: The operation has completed successfully. Dec 16 13:04:23.561092 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:04:23.561158 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:04:23.594004 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:04:23.609466 sh[694]: Success Dec 16 13:04:23.637879 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 13:04:23.637923 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:04:23.639335 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:04:23.647796 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 13:04:23.913403 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:04:23.917871 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:04:23.930411 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 13:04:23.941794 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (707) Dec 16 13:04:23.941822 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:04:23.943804 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:04:24.255448 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:04:24.255488 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:04:24.256961 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:04:24.320514 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:04:24.323004 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:04:24.326671 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:04:24.329433 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:04:24.338385 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:04:24.353894 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (730) Dec 16 13:04:24.357814 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:04:24.357845 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:04:24.378336 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:04:24.378362 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:04:24.378369 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:04:24.382818 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:04:24.384096 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:04:24.387698 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:04:24.417594 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:04:24.419879 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:04:24.450400 systemd-networkd[876]: lo: Link UP Dec 16 13:04:24.450407 systemd-networkd[876]: lo: Gained carrier Dec 16 13:04:24.452469 systemd-networkd[876]: Enumeration completed Dec 16 13:04:24.452529 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 16 13:04:24.464840 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 16 13:04:24.464972 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:04:24.465049 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5228f86c eth0: Data path switched to VF: enP30832s1 Dec 16 13:04:24.452835 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:04:24.452838 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:04:24.458098 systemd[1]: Reached target network.target - Network. Dec 16 13:04:24.461220 systemd-networkd[876]: enP30832s1: Link UP Dec 16 13:04:24.461285 systemd-networkd[876]: eth0: Link UP Dec 16 13:04:24.461364 systemd-networkd[876]: eth0: Gained carrier Dec 16 13:04:24.461374 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:04:24.463919 systemd-networkd[876]: enP30832s1: Gained carrier Dec 16 13:04:24.473811 systemd-networkd[876]: eth0: DHCPv4 address 10.200.0.41/24, gateway 10.200.0.1 acquired from 168.63.129.16 Dec 16 13:04:25.966328 ignition[819]: Ignition 2.22.0 Dec 16 13:04:25.966338 ignition[819]: Stage: fetch-offline Dec 16 13:04:25.968845 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:04:25.966425 ignition[819]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:25.966432 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:25.966513 ignition[819]: parsed url from cmdline: "" Dec 16 13:04:25.975595 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 13:04:25.966515 ignition[819]: no config URL provided Dec 16 13:04:25.966519 ignition[819]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:04:25.966524 ignition[819]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:04:25.966528 ignition[819]: failed to fetch config: resource requires networking Dec 16 13:04:25.967073 ignition[819]: Ignition finished successfully Dec 16 13:04:25.998520 ignition[885]: Ignition 2.22.0 Dec 16 13:04:25.998528 ignition[885]: Stage: fetch Dec 16 13:04:25.998718 ignition[885]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:25.998725 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:25.998811 ignition[885]: parsed url from cmdline: "" Dec 16 13:04:25.998813 ignition[885]: no config URL provided Dec 16 13:04:25.998822 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:04:25.998827 ignition[885]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:04:25.998844 ignition[885]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 13:04:26.101178 ignition[885]: GET result: OK Dec 16 13:04:26.101243 ignition[885]: config has been read from IMDS userdata Dec 16 13:04:26.101263 ignition[885]: parsing config with SHA512: fa8af685a4941477c83211f0890d328a40af4a327f8ca70b625b815b96a41da99e2958157668bd70cbe68319de60bbd9425daa8193c8aa7aaf69bda652c24ec0 Dec 16 13:04:26.106631 unknown[885]: fetched base config from "system" Dec 16 13:04:26.108373 unknown[885]: fetched base config from "system" Dec 16 13:04:26.108387 unknown[885]: fetched user config from "azure" Dec 16 13:04:26.108876 ignition[885]: fetch: fetch complete Dec 16 13:04:26.110660 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:04:26.108880 ignition[885]: fetch: fetch passed Dec 16 13:04:26.114101 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:04:26.108918 ignition[885]: Ignition finished successfully Dec 16 13:04:26.136642 ignition[892]: Ignition 2.22.0 Dec 16 13:04:26.136651 ignition[892]: Stage: kargs Dec 16 13:04:26.136870 ignition[892]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:26.136877 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:26.137609 ignition[892]: kargs: kargs passed Dec 16 13:04:26.141150 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:04:26.137636 ignition[892]: Ignition finished successfully Dec 16 13:04:26.147135 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:04:26.170068 ignition[899]: Ignition 2.22.0 Dec 16 13:04:26.170076 ignition[899]: Stage: disks Dec 16 13:04:26.170244 ignition[899]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:26.171908 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:04:26.170250 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:26.170994 ignition[899]: disks: disks passed Dec 16 13:04:26.171021 ignition[899]: Ignition finished successfully Dec 16 13:04:26.180335 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:04:26.181715 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:04:26.183321 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:04:26.185992 systemd[1]: Reached target sysinit.target - System Initialization. 
Dec 16 13:04:26.188810 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:04:26.194692 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:04:26.255935 systemd-fsck[907]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Dec 16 13:04:26.259879 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:04:26.263672 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:04:26.430860 systemd-networkd[876]: eth0: Gained IPv6LL Dec 16 13:04:26.502586 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:04:26.507001 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:04:26.503239 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:04:26.524030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:04:26.528347 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:04:26.540888 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 13:04:26.542866 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:04:26.542891 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:04:26.548717 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:04:26.558562 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (916) Dec 16 13:04:26.558579 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:04:26.558288 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:04:26.562885 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:04:26.568246 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:04:26.568298 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:04:26.569281 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:04:26.570682 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:04:27.040240 coreos-metadata[918]: Dec 16 13:04:27.040 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:04:27.047827 coreos-metadata[918]: Dec 16 13:04:27.047 INFO Fetch successful Dec 16 13:04:27.048878 coreos-metadata[918]: Dec 16 13:04:27.048 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:04:27.056268 coreos-metadata[918]: Dec 16 13:04:27.056 INFO Fetch successful Dec 16 13:04:27.075346 coreos-metadata[918]: Dec 16 13:04:27.075 INFO wrote hostname ci-4459.2.2-a-371cb0f3db to /sysroot/etc/hostname Dec 16 13:04:27.076865 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 13:04:27.316814 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:04:27.349672 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:04:27.367712 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:04:27.371666 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:04:28.262657 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Dec 16 13:04:28.266851 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:04:28.277117 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:04:28.284556 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:04:28.286622 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:04:28.307296 ignition[1037]: INFO : Ignition 2.22.0 Dec 16 13:04:28.307296 ignition[1037]: INFO : Stage: mount Dec 16 13:04:28.310886 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:28.310886 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:28.310886 ignition[1037]: INFO : mount: mount passed Dec 16 13:04:28.310886 ignition[1037]: INFO : Ignition finished successfully Dec 16 13:04:28.310211 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:04:28.313817 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:04:28.323857 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 13:04:28.328524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:04:28.350790 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1050) Dec 16 13:04:28.352816 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:04:28.352944 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:04:28.358059 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:04:28.358087 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 13:04:28.358097 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:04:28.360514 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:04:28.389476 ignition[1067]: INFO : Ignition 2.22.0 Dec 16 13:04:28.389476 ignition[1067]: INFO : Stage: files Dec 16 13:04:28.394846 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:28.394846 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:28.394846 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:04:28.394846 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:04:28.394846 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:04:28.459061 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:04:28.460855 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:04:28.460855 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:04:28.459342 unknown[1067]: wrote ssh authorized keys file for user: core Dec 16 13:04:28.474972 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:04:28.478856 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 13:04:28.521153 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:04:28.609236 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 13:04:28.612843 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:04:28.612843 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 13:04:28.612843 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:04:28.612843 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:04:28.612843 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:04:28.612843 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:04:28.612843 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:04:28.612843 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:04:28.635961 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:04:28.635961 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:04:28.635961 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:04:28.635961 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:04:28.635961 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:04:28.635961 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 13:04:29.151299 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 13:04:32.698108 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 13:04:32.698108 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 13:04:32.761224 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:04:32.767820 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:04:32.767820 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 13:04:32.775107 ignition[1067]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:04:32.775107 ignition[1067]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:04:32.775107 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:04:32.775107 ignition[1067]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:04:32.775107 ignition[1067]: INFO : files: files passed Dec 16 13:04:32.775107 ignition[1067]: INFO : Ignition finished successfully Dec 16 13:04:32.771654 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:04:32.781191 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:04:32.792304 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:04:32.796808 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:04:32.796886 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:04:32.809246 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:04:32.817876 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:04:32.812701 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:04:32.828267 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:04:32.820985 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:04:32.825886 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:04:32.850878 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:04:32.850949 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
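Everything the files stage logs above is driven by the Ignition config fetched from userData earlier: the SSH key for core, the helm tarball, the kubernetes sysext image plus its /etc/extensions link, and the prepare-helm.service unit with its enablement preset. A heavily trimmed, hypothetical sketch of such a config assembled in Python, using Ignition 3.x field names; the URLs and paths are taken from the log, all other values are placeholders:

    # Sketch of an Ignition-style config matching the operations logged above.
    # Field names follow the Ignition 3.x schema; contents are illustrative.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw"}},
                # /home/core/*.yaml, install.sh and /etc/flatcar/update.conf omitted
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
        ]},
    }

    print(json.dumps(config, indent=2))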
Dec 16 13:04:32.853997 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 13:04:32.854354 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 13:04:32.854446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 13:04:32.855273 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 13:04:32.874960 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:04:32.878168 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 13:04:32.897323 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:04:32.897539 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:04:32.903940 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 13:04:32.907938 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 13:04:32.908059 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:04:32.912884 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 13:04:32.915304 systemd[1]: Stopped target basic.target - Basic System. Dec 16 13:04:32.917819 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 13:04:32.921923 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:04:32.925945 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 13:04:32.928559 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:04:32.932925 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 13:04:32.935169 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:04:32.938940 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 13:04:32.942928 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 13:04:32.946921 systemd[1]: Stopped target swap.target - Swaps. Dec 16 13:04:32.947137 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 13:04:32.947258 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:04:32.952625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:04:32.956258 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:04:32.961494 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 13:04:32.961769 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:04:32.965650 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 13:04:32.965768 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 13:04:32.972425 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 13:04:32.972543 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:04:32.976663 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 13:04:32.976786 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 13:04:32.980308 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 13:04:32.980383 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 16 13:04:32.985371 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 13:04:32.991843 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 13:04:32.991977 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:04:33.004958 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 13:04:33.008585 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 13:04:33.010082 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:04:33.013591 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 13:04:33.013718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:04:33.021990 ignition[1121]: INFO : Ignition 2.22.0 Dec 16 13:04:33.021990 ignition[1121]: INFO : Stage: umount Dec 16 13:04:33.021990 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:04:33.021990 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 13:04:33.021990 ignition[1121]: INFO : umount: umount passed Dec 16 13:04:33.021990 ignition[1121]: INFO : Ignition finished successfully Dec 16 13:04:33.022227 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 13:04:33.027554 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 13:04:33.036520 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 13:04:33.036588 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 13:04:33.043527 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 13:04:33.043572 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 13:04:33.048873 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 13:04:33.048921 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 13:04:33.052232 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 13:04:33.052276 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 13:04:33.055126 systemd[1]: Stopped target network.target - Network. Dec 16 13:04:33.056143 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 13:04:33.056201 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:04:33.058677 systemd[1]: Stopped target paths.target - Path Units. Dec 16 13:04:33.059252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 13:04:33.060228 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:04:33.071104 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 13:04:33.073163 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 13:04:33.073332 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 13:04:33.073365 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:04:33.077841 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 13:04:33.077868 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:04:33.079672 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 13:04:33.079713 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 13:04:33.081835 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 13:04:33.081867 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Dec 16 13:04:33.083963 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 13:04:33.087945 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 13:04:33.092558 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 13:04:33.092974 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 13:04:33.093051 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 13:04:33.096306 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 13:04:33.096478 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 13:04:33.096545 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 13:04:33.099990 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 13:04:33.100080 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 13:04:33.106419 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 13:04:33.107128 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 13:04:33.110875 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 13:04:33.111730 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:04:33.124480 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 13:04:33.124534 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 13:04:33.128601 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 13:04:33.138844 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 13:04:33.139427 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:04:33.140539 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:04:33.140580 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:04:33.141136 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 13:04:33.141346 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 13:04:33.141569 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 13:04:33.141600 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:04:33.144654 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:04:33.159201 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:04:33.159243 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:04:33.161348 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 13:04:33.161867 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:04:33.168025 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 13:04:33.168091 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 13:04:33.173030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 13:04:33.173056 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:04:33.177537 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 16 13:04:33.187127 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5228f86c eth0: Data path switched from VF: enP30832s1 Dec 16 13:04:33.188625 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:04:33.177608 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:04:33.187056 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 13:04:33.187105 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 13:04:33.192820 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 13:04:33.192867 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:04:33.200323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 13:04:33.203393 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 13:04:33.204822 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:04:33.210418 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 13:04:33.210465 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:04:33.216115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:33.216157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:33.223166 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 16 13:04:33.223216 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 16 13:04:33.223248 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:04:33.223513 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 13:04:33.223588 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 13:04:33.224565 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 13:04:33.224630 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 13:04:33.225141 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 13:04:33.226873 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 13:04:33.244602 systemd[1]: Switching root. Dec 16 13:04:33.321021 systemd-journald[186]: Journal stopped Dec 16 13:04:37.234136 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Dec 16 13:04:37.234161 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:04:37.234174 kernel: SELinux: policy capability open_perms=1 Dec 16 13:04:37.234182 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:04:37.234189 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:04:37.234197 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:04:37.234206 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:04:37.234214 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:04:37.234223 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:04:37.234231 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:04:37.234241 kernel: audit: type=1403 audit(1765890274.512:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 13:04:37.234250 systemd[1]: Successfully loaded SELinux policy in 245.577ms. 
Dec 16 13:04:37.234259 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.476ms. Dec 16 13:04:37.234269 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:04:37.234281 systemd[1]: Detected virtualization microsoft. Dec 16 13:04:37.234290 systemd[1]: Detected architecture x86-64. Dec 16 13:04:37.234298 systemd[1]: Detected first boot. Dec 16 13:04:37.234307 systemd[1]: Hostname set to . Dec 16 13:04:37.234316 systemd[1]: Initializing machine ID from random generator. Dec 16 13:04:37.234325 zram_generator::config[1163]: No configuration found. Dec 16 13:04:37.234336 kernel: Guest personality initialized and is inactive Dec 16 13:04:37.234344 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Dec 16 13:04:37.234352 kernel: Initialized host personality Dec 16 13:04:37.234360 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:04:37.234369 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:04:37.234379 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 13:04:37.234388 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:04:37.234397 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:04:37.234407 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:04:37.234416 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:04:37.234426 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 13:04:37.234436 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:04:37.234445 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:04:37.234455 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:04:37.234464 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:04:37.234475 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:04:37.234484 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:04:37.234493 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:04:37.234502 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:04:37.234511 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 13:04:37.234523 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:04:37.234533 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:04:37.234542 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:04:37.234553 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:04:37.234562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:04:37.234571 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Dec 16 13:04:37.234581 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:04:37.234590 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:04:37.234600 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:04:37.234609 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:04:37.234620 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:04:37.234630 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:04:37.234640 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:04:37.234649 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:04:37.234659 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:04:37.234668 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:04:37.234680 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:04:37.234690 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:04:37.234699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:04:37.234708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:04:37.234718 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:04:37.234727 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:04:37.234736 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:04:37.234748 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:04:37.234757 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:37.234766 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:04:37.234787 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:04:37.234795 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:04:37.234804 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 13:04:37.234812 systemd[1]: Reached target machines.target - Containers. Dec 16 13:04:37.234820 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:04:37.234829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:04:37.234837 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:04:37.236592 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:04:37.236608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:04:37.236617 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:04:37.236626 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:04:37.236761 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:04:37.236771 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 13:04:37.236892 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:04:37.236906 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:04:37.236915 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:04:37.236923 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:04:37.236978 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:04:37.236988 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:04:37.237046 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:04:37.237100 kernel: loop: module loaded Dec 16 13:04:37.237110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:04:37.237171 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:04:37.237217 kernel: fuse: init (API version 7.41) Dec 16 13:04:37.237224 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:04:37.237233 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:04:37.237266 systemd-journald[1249]: Collecting audit messages is disabled. Dec 16 13:04:37.237289 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:04:37.237296 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 13:04:37.237305 systemd-journald[1249]: Journal started Dec 16 13:04:37.237325 systemd-journald[1249]: Runtime Journal (/run/log/journal/e46f177b1f65425fb489e162b34f1776) is 8M, max 158.6M, 150.6M free. Dec 16 13:04:36.883468 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:04:36.891190 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 13:04:36.891527 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:04:37.240412 systemd[1]: Stopped verity-setup.service. Dec 16 13:04:37.244803 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:37.249311 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:04:37.249619 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:04:37.252000 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:04:37.253888 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:04:37.256932 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:04:37.258264 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:04:37.260943 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:04:37.262226 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:04:37.263998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:04:37.265619 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:04:37.265736 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Dec 16 13:04:37.268984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:04:37.269111 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:04:37.271974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:04:37.272094 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:04:37.273479 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:04:37.273602 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:04:37.277131 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:04:37.277280 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:04:37.279476 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:04:37.282068 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:04:37.283462 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:04:37.289700 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:04:37.293470 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:04:37.298246 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:04:37.299813 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:04:37.299839 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:04:37.305560 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:04:37.307863 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 13:04:37.309552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:04:37.310930 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:04:37.313539 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:04:37.315386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:04:37.317864 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:04:37.319618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:04:37.320879 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:04:37.325897 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:04:37.329877 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:04:37.335212 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:04:37.338120 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 13:04:37.340079 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:04:37.360816 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:04:37.362752 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Dec 16 13:04:37.370233 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:04:37.387035 kernel: loop0: detected capacity change from 0 to 229808 Dec 16 13:04:37.387657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:04:37.399988 kernel: ACPI: bus type drm_connector registered Dec 16 13:04:37.399845 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:04:37.399994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:04:37.402914 systemd-journald[1249]: Time spent on flushing to /var/log/journal/e46f177b1f65425fb489e162b34f1776 is 35.314ms for 994 entries. Dec 16 13:04:37.402914 systemd-journald[1249]: System Journal (/var/log/journal/e46f177b1f65425fb489e162b34f1776) is 11.8M, max 2.6G, 2.6G free. Dec 16 13:04:37.486265 systemd-journald[1249]: Received client request to flush runtime journal. Dec 16 13:04:37.486295 systemd-journald[1249]: /var/log/journal/e46f177b1f65425fb489e162b34f1776/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Dec 16 13:04:37.486311 systemd-journald[1249]: Rotating system journal. Dec 16 13:04:37.447762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:04:37.458922 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:04:37.487116 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:04:37.496795 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:04:37.502409 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:04:37.504459 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:04:37.530800 kernel: loop1: detected capacity change from 0 to 110984 Dec 16 13:04:37.559654 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Dec 16 13:04:37.559668 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Dec 16 13:04:37.562343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:04:37.892689 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:04:37.903808 kernel: loop2: detected capacity change from 0 to 27936 Dec 16 13:04:37.936980 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:04:37.940699 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:04:37.963343 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Dec 16 13:04:38.140428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:04:38.143836 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:04:38.190051 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:04:38.210886 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 16 13:04:38.289803 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 13:04:38.298836 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 13:04:38.303388 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 13:04:38.303434 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 13:04:38.306895 kernel: Console: switching to colour dummy device 80x25 Dec 16 13:04:38.307933 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:04:38.312820 kernel: hv_vmbus: registering driver hv_balloon Dec 16 13:04:38.315196 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 13:04:38.318797 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:04:38.320834 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 13:04:38.345956 kernel: loop3: detected capacity change from 0 to 128560 Dec 16 13:04:38.396698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:38.407264 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:38.407400 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:38.414218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:38.456025 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:04:38.456177 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:38.459476 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:04:38.461960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:04:38.553558 systemd-networkd[1333]: lo: Link UP Dec 16 13:04:38.553750 systemd-networkd[1333]: lo: Gained carrier Dec 16 13:04:38.554924 systemd-networkd[1333]: Enumeration completed Dec 16 13:04:38.555050 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:04:38.556907 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:04:38.559850 systemd-networkd[1333]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:04:38.559855 systemd-networkd[1333]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:04:38.559928 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:04:38.564810 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 16 13:04:38.570299 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 13:04:38.570470 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5228f86c eth0: Data path switched to VF: enP30832s1 Dec 16 13:04:38.570841 systemd-networkd[1333]: enP30832s1: Link UP Dec 16 13:04:38.570904 systemd-networkd[1333]: eth0: Link UP Dec 16 13:04:38.570907 systemd-networkd[1333]: eth0: Gained carrier Dec 16 13:04:38.570919 systemd-networkd[1333]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 16 13:04:38.577923 systemd-networkd[1333]: enP30832s1: Gained carrier Dec 16 13:04:38.584819 systemd-networkd[1333]: eth0: DHCPv4 address 10.200.0.41/24, gateway 10.200.0.1 acquired from 168.63.129.16 Dec 16 13:04:38.587530 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 16 13:04:38.590887 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:04:38.607791 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Dec 16 13:04:38.616485 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:04:38.620818 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:04:38.742815 kernel: loop4: detected capacity change from 0 to 229808 Dec 16 13:04:38.752826 kernel: loop5: detected capacity change from 0 to 110984 Dec 16 13:04:38.760823 kernel: loop6: detected capacity change from 0 to 27936 Dec 16 13:04:38.768820 kernel: loop7: detected capacity change from 0 to 128560 Dec 16 13:04:38.791674 (sd-merge)[1427]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 16 13:04:38.791996 (sd-merge)[1427]: Merged extensions into '/usr'. Dec 16 13:04:38.794485 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:04:38.794497 systemd[1]: Reloading... Dec 16 13:04:38.841834 zram_generator::config[1462]: No configuration found. Dec 16 13:04:39.213653 systemd[1]: Reloading finished in 418 ms. Dec 16 13:04:39.239099 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:04:39.254411 systemd[1]: Starting ensure-sysext.service... Dec 16 13:04:39.255532 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:04:39.265615 systemd[1]: Reload requested from client PID 1516 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:04:39.265690 systemd[1]: Reloading... Dec 16 13:04:39.306801 zram_generator::config[1551]: No configuration found. Dec 16 13:04:39.390943 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:04:39.390973 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:04:39.391170 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 13:04:39.391361 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 13:04:39.391960 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 13:04:39.392152 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Dec 16 13:04:39.392191 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Dec 16 13:04:39.411661 systemd-tmpfiles[1517]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:04:39.411668 systemd-tmpfiles[1517]: Skipping /boot Dec 16 13:04:39.417173 systemd-tmpfiles[1517]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:04:39.417181 systemd-tmpfiles[1517]: Skipping /boot Dec 16 13:04:39.466454 systemd[1]: Reloading finished in 200 ms. 
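The 'Using extensions ... Merged extensions into /usr' lines above are systemd-sysext picking up the images staged earlier: the kubernetes.raw link written to /etc/extensions during the files stage sits next to the OEM-shipped containerd-flatcar, docker-flatcar and oem-azure images. A small sketch that lists extension images the way that merge implies, assuming the usual sysext search directories; illustrative, not the systemd implementation:

    # Sketch: enumerate *.raw extension images in the common sysext directories,
    # e.g. the /etc/extensions/kubernetes.raw symlink created by Ignition above.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for image in sorted(d.glob("*.raw")):
            target = image.resolve() if image.is_symlink() else image
            print(f"{image} -> {target}")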
Dec 16 13:04:39.478836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:04:39.494385 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:04:39.502346 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:39.503144 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:04:39.508519 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:04:39.510978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:04:39.512974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:04:39.520681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:04:39.523863 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:04:39.526072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:04:39.526333 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:04:39.528959 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:04:39.532867 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:04:39.537959 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:04:39.541873 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:39.543400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:04:39.547882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:04:39.550242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:04:39.550379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:04:39.552486 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:04:39.552602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:04:39.562442 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:04:39.569587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:39.569946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:04:39.572145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:04:39.580338 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:04:39.584017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:04:39.588046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:04:39.590966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 16 13:04:39.591075 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:04:39.591208 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:04:39.593118 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:04:39.595068 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:04:39.595207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:04:39.600881 systemd[1]: Finished ensure-sysext.service. Dec 16 13:04:39.610227 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:04:39.610649 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:04:39.612282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:04:39.612406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:04:39.615020 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:04:39.615136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:04:39.616943 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:04:39.616990 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:04:39.814032 systemd-resolved[1617]: Positive Trust Anchors: Dec 16 13:04:39.814043 systemd-resolved[1617]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:04:39.814071 systemd-resolved[1617]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:04:39.835610 systemd-resolved[1617]: Using system hostname 'ci-4459.2.2-a-371cb0f3db'. Dec 16 13:04:39.836624 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:04:39.839941 systemd[1]: Reached target network.target - Network. Dec 16 13:04:39.842825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:04:39.995296 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:04:40.318947 systemd-networkd[1333]: eth0: Gained IPv6LL Dec 16 13:04:40.320267 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:04:40.323983 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:04:40.390347 augenrules[1654]: No rules Dec 16 13:04:40.391035 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:04:40.391158 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:04:45.077825 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Dec 16 13:04:45.082053 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:04:53.682018 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:04:53.930243 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:04:53.933953 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:04:53.945668 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:04:53.947127 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:04:53.948441 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:04:53.949687 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:04:53.953883 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:04:53.957964 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:04:53.960880 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:04:53.963844 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:04:53.966838 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:04:53.966872 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:04:53.969820 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:04:53.971761 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:04:53.973980 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:04:53.976564 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:04:53.979983 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:04:53.981595 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:04:53.992207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:04:53.995282 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:04:53.998278 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:04:54.001752 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:04:54.002948 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:04:54.004860 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:04:54.004883 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:04:54.006472 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 13:04:54.009863 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:04:54.014922 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:04:54.018872 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:04:54.023938 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Dec 16 13:04:54.026995 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:04:54.032889 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:04:54.034264 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:04:54.038963 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:04:54.041122 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Dec 16 13:04:54.042880 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 13:04:54.044966 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 13:04:54.047732 jq[1674]: false Dec 16 13:04:54.047897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:54.054098 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:04:54.058242 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:04:54.064083 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:04:54.068095 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:04:54.072150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:04:54.078343 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:04:54.080864 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:04:54.081482 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:04:54.085369 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:04:54.098894 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:04:54.105327 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:04:54.105628 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:04:54.108972 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:04:54.109136 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:04:54.117547 jq[1689]: true Dec 16 13:04:54.131260 jq[1698]: true Dec 16 13:04:54.238026 KVP[1677]: KVP starting; pid is:1677 Dec 16 13:04:54.241223 extend-filesystems[1675]: Found /dev/nvme0n1p6 Dec 16 13:04:54.242819 chronyd[1666]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 13:04:54.243660 KVP[1677]: KVP LIC Version: 3.1 Dec 16 13:04:54.243864 kernel: hv_utils: KVP IC version 4.0 Dec 16 13:04:54.249122 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:04:54.389309 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 16 13:04:54.390017 (ntainerd)[1723]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:04:54.390150 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:04:54.445792 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Refreshing passwd entry cache Dec 16 13:04:54.445637 oslogin_cache_refresh[1676]: Refreshing passwd entry cache Dec 16 13:04:54.462409 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Failure getting users, quitting Dec 16 13:04:54.462409 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:04:54.462409 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Refreshing group entry cache Dec 16 13:04:54.462280 oslogin_cache_refresh[1676]: Failure getting users, quitting Dec 16 13:04:54.462294 oslogin_cache_refresh[1676]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:04:54.462326 oslogin_cache_refresh[1676]: Refreshing group entry cache Dec 16 13:04:54.474612 extend-filesystems[1675]: Found /dev/nvme0n1p9 Dec 16 13:04:54.478817 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Failure getting groups, quitting Dec 16 13:04:54.478817 google_oslogin_nss_cache[1676]: oslogin_cache_refresh[1676]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:04:54.478369 oslogin_cache_refresh[1676]: Failure getting groups, quitting Dec 16 13:04:54.478376 oslogin_cache_refresh[1676]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:04:54.479519 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:04:54.479710 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:04:54.511054 extend-filesystems[1675]: Checking size of /dev/nvme0n1p9 Dec 16 13:04:54.547733 systemd-logind[1687]: New seat seat0. Dec 16 13:04:54.548617 systemd-logind[1687]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:04:54.548735 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:04:54.555810 update_engine[1688]: I20251216 13:04:54.555441 1688 main.cc:92] Flatcar Update Engine starting Dec 16 13:04:54.679842 chronyd[1666]: Timezone right/UTC failed leap second check, ignoring Dec 16 13:04:54.680125 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 13:04:54.679990 chronyd[1666]: Loaded seccomp filter (level 2) Dec 16 13:04:54.799326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:54.812946 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:54.826346 extend-filesystems[1675]: Old size kept for /dev/nvme0n1p9 Dec 16 13:04:54.828402 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:04:54.828526 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:04:54.849956 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:04:55.023335 tar[1692]: linux-amd64/LICENSE Dec 16 13:04:55.079471 tar[1692]: linux-amd64/helm Dec 16 13:04:55.458120 tar[1692]: linux-amd64/README.md Dec 16 13:04:55.474486 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 16 13:04:55.491947 kubelet[1750]: E1216 13:04:55.491921 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:55.495799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:55.495908 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:55.496227 systemd[1]: kubelet.service: Consumed 835ms CPU time, 269.4M memory peak. Dec 16 13:04:55.503639 sshd_keygen[1743]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:04:55.517809 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:04:55.519907 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:04:55.528937 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 13:04:55.535851 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:04:55.536032 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:04:55.542933 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:04:55.549253 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 13:04:55.824254 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:04:56.425737 update_engine[1688]: I20251216 13:04:55.840020 1688 update_check_scheduler.cc:74] Next update check in 4m43s Dec 16 13:04:56.425951 coreos-metadata[1668]: Dec 16 13:04:56.139 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 13:04:56.425951 coreos-metadata[1668]: Dec 16 13:04:56.141 INFO Fetch successful Dec 16 13:04:56.425951 coreos-metadata[1668]: Dec 16 13:04:56.141 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 13:04:56.425951 coreos-metadata[1668]: Dec 16 13:04:56.144 INFO Fetch successful Dec 16 13:04:56.425951 coreos-metadata[1668]: Dec 16 13:04:56.144 INFO Fetching http://168.63.129.16/machine/f8194591-fdbe-4b20-ba62-03fd66740c5c/52ff72dc%2D500a%2D4694%2Da2b7%2Dd274df8bb5b8.%5Fci%2D4459.2.2%2Da%2D371cb0f3db?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 13:04:56.425951 coreos-metadata[1668]: Dec 16 13:04:56.194 INFO Fetch successful Dec 16 13:04:56.425951 coreos-metadata[1668]: Dec 16 13:04:56.194 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 13:04:56.425951 coreos-metadata[1668]: Dec 16 13:04:56.201 INFO Fetch successful Dec 16 13:04:55.837486 dbus-daemon[1669]: [system] SELinux support is enabled Dec 16 13:04:55.826485 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:04:55.844901 dbus-daemon[1669]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:04:55.831644 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:04:55.833936 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:04:55.838006 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:04:55.842986 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 16 13:04:55.843005 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:04:55.846887 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:04:55.846899 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:04:55.849896 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:04:55.853857 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:04:56.214921 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:04:56.217099 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:04:56.793284 locksmithd[1805]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:04:57.283209 bash[1715]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:04:57.284086 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:04:57.287391 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 13:04:58.188530 containerd[1723]: time="2025-12-16T13:04:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:04:58.189160 containerd[1723]: time="2025-12-16T13:04:58.189134231Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:04:58.195198 containerd[1723]: time="2025-12-16T13:04:58.195169686Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.624µs" Dec 16 13:04:58.195198 containerd[1723]: time="2025-12-16T13:04:58.195189802Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:04:58.195277 containerd[1723]: time="2025-12-16T13:04:58.195204615Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:04:58.195342 containerd[1723]: time="2025-12-16T13:04:58.195326297Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:04:58.195342 containerd[1723]: time="2025-12-16T13:04:58.195339232Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:04:58.195378 containerd[1723]: time="2025-12-16T13:04:58.195355929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195407 containerd[1723]: time="2025-12-16T13:04:58.195393535Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195407 containerd[1723]: time="2025-12-16T13:04:58.195403879Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195577 containerd[1723]: time="2025-12-16T13:04:58.195559300Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195577 containerd[1723]: time="2025-12-16T13:04:58.195570950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195618 containerd[1723]: time="2025-12-16T13:04:58.195579606Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195618 containerd[1723]: time="2025-12-16T13:04:58.195585963Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195654 containerd[1723]: time="2025-12-16T13:04:58.195638860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195781 containerd[1723]: time="2025-12-16T13:04:58.195764718Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195813 containerd[1723]: time="2025-12-16T13:04:58.195798444Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:04:58.195813 containerd[1723]: time="2025-12-16T13:04:58.195809265Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:04:58.195852 containerd[1723]: time="2025-12-16T13:04:58.195834692Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:04:58.196063 containerd[1723]: time="2025-12-16T13:04:58.196049297Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:04:58.196119 containerd[1723]: time="2025-12-16T13:04:58.196091110Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:04:58.835979 containerd[1723]: time="2025-12-16T13:04:58.835939937Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:04:58.836055 containerd[1723]: time="2025-12-16T13:04:58.835993948Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:04:58.836055 containerd[1723]: time="2025-12-16T13:04:58.836008180Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:04:58.836055 containerd[1723]: time="2025-12-16T13:04:58.836018970Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:04:58.836055 containerd[1723]: time="2025-12-16T13:04:58.836028743Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:04:58.836055 containerd[1723]: time="2025-12-16T13:04:58.836037456Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:04:58.836055 containerd[1723]: time="2025-12-16T13:04:58.836047316Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:04:58.836216 containerd[1723]: time="2025-12-16T13:04:58.836056842Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:04:58.836216 containerd[1723]: time="2025-12-16T13:04:58.836066919Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:04:58.836216 containerd[1723]: time="2025-12-16T13:04:58.836094731Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:04:58.836216 containerd[1723]: time="2025-12-16T13:04:58.836104299Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:04:58.836216 containerd[1723]: time="2025-12-16T13:04:58.836116242Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:04:58.836216 containerd[1723]: time="2025-12-16T13:04:58.836203978Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836219569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836231960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836242123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836251472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836264581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836274307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836283181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836292268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836301109Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:04:58.836335 containerd[1723]: time="2025-12-16T13:04:58.836309628Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:04:58.836711 containerd[1723]: time="2025-12-16T13:04:58.836346665Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:04:58.836711 containerd[1723]: time="2025-12-16T13:04:58.836357145Z" level=info msg="Start snapshots syncer" Dec 16 13:04:58.836711 containerd[1723]: time="2025-12-16T13:04:58.836379356Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:04:58.836828 containerd[1723]: time="2025-12-16T13:04:58.836768342Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:04:58.837024 containerd[1723]: time="2025-12-16T13:04:58.836848778Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:04:58.837024 containerd[1723]: time="2025-12-16T13:04:58.836894611Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:04:58.837024 containerd[1723]: time="2025-12-16T13:04:58.836980641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:04:58.837024 containerd[1723]: time="2025-12-16T13:04:58.837011970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:04:58.837091 containerd[1723]: time="2025-12-16T13:04:58.837023141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:04:58.837091 containerd[1723]: time="2025-12-16T13:04:58.837035132Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:04:58.837091 containerd[1723]: time="2025-12-16T13:04:58.837052380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:04:58.837091 containerd[1723]: time="2025-12-16T13:04:58.837062987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:04:58.837091 containerd[1723]: time="2025-12-16T13:04:58.837075234Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:04:58.837173 containerd[1723]: time="2025-12-16T13:04:58.837100604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:04:58.837173 containerd[1723]: 
time="2025-12-16T13:04:58.837114492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:04:58.837173 containerd[1723]: time="2025-12-16T13:04:58.837127459Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:04:58.837173 containerd[1723]: time="2025-12-16T13:04:58.837151820Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:04:58.837240 containerd[1723]: time="2025-12-16T13:04:58.837168534Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:04:58.837240 containerd[1723]: time="2025-12-16T13:04:58.837180221Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:04:58.837240 containerd[1723]: time="2025-12-16T13:04:58.837192109Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:04:58.837240 containerd[1723]: time="2025-12-16T13:04:58.837200985Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:04:58.837240 containerd[1723]: time="2025-12-16T13:04:58.837212922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:04:58.837240 containerd[1723]: time="2025-12-16T13:04:58.837229851Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:04:58.837331 containerd[1723]: time="2025-12-16T13:04:58.837243178Z" level=info msg="runtime interface created" Dec 16 13:04:58.837331 containerd[1723]: time="2025-12-16T13:04:58.837251164Z" level=info msg="created NRI interface" Dec 16 13:04:58.837331 containerd[1723]: time="2025-12-16T13:04:58.837259440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:04:58.837331 containerd[1723]: time="2025-12-16T13:04:58.837273632Z" level=info msg="Connect containerd service" Dec 16 13:04:58.837331 containerd[1723]: time="2025-12-16T13:04:58.837294741Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:04:58.838546 containerd[1723]: time="2025-12-16T13:04:58.838427265Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:04:59.265293 containerd[1723]: time="2025-12-16T13:04:59.265241104Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:04:59.265293 containerd[1723]: time="2025-12-16T13:04:59.265278356Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 13:04:59.265509 containerd[1723]: time="2025-12-16T13:04:59.265297164Z" level=info msg="Start subscribing containerd event" Dec 16 13:04:59.265509 containerd[1723]: time="2025-12-16T13:04:59.265317639Z" level=info msg="Start recovering state" Dec 16 13:04:59.265509 containerd[1723]: time="2025-12-16T13:04:59.265388344Z" level=info msg="Start event monitor" Dec 16 13:04:59.265509 containerd[1723]: time="2025-12-16T13:04:59.265396766Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:04:59.265509 containerd[1723]: time="2025-12-16T13:04:59.265407537Z" level=info msg="Start streaming server" Dec 16 13:04:59.265509 containerd[1723]: time="2025-12-16T13:04:59.265415525Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:04:59.265509 containerd[1723]: time="2025-12-16T13:04:59.265421452Z" level=info msg="runtime interface starting up..." Dec 16 13:04:59.265509 containerd[1723]: time="2025-12-16T13:04:59.265425733Z" level=info msg="starting plugins..." Dec 16 13:04:59.265509 containerd[1723]: time="2025-12-16T13:04:59.265435282Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:04:59.267383 containerd[1723]: time="2025-12-16T13:04:59.267004795Z" level=info msg="containerd successfully booted in 1.078850s" Dec 16 13:04:59.265565 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:04:59.267998 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:04:59.269914 systemd[1]: Startup finished in 2.710s (kernel) + 13.640s (initrd) + 24.999s (userspace) = 41.350s. Dec 16 13:04:59.712543 login[1802]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 16 13:04:59.713058 login[1801]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:04:59.721565 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:04:59.722574 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:04:59.728440 systemd-logind[1687]: New session 2 of user core. Dec 16 13:04:59.775215 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:04:59.777964 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:04:59.791102 (systemd)[1840]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:04:59.793431 systemd-logind[1687]: New session c1 of user core. Dec 16 13:05:00.004430 systemd[1840]: Queued start job for default target default.target. 
Dec 16 13:05:00.005675 waagent[1799]: 2025-12-16T13:05:00.005624Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.006228Z INFO Daemon Daemon OS: flatcar 4459.2.2 Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.006490Z INFO Daemon Daemon Python: 3.11.13 Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.006879Z INFO Daemon Daemon Run daemon Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.007114Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.007319Z INFO Daemon Daemon Using waagent for provisioning Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.007455Z INFO Daemon Daemon Activate resource disk Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.007793Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.009316Z INFO Daemon Daemon Found device: None Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.009426Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.009476Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.010082Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:05:00.010668 waagent[1799]: 2025-12-16T13:05:00.010314Z INFO Daemon Daemon Running default provisioning handler Dec 16 13:05:00.013194 systemd[1840]: Created slice app.slice - User Application Slice. Dec 16 13:05:00.013217 systemd[1840]: Reached target paths.target - Paths. Dec 16 13:05:00.013243 systemd[1840]: Reached target timers.target - Timers. Dec 16 13:05:00.017870 systemd[1840]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:05:00.024597 systemd[1840]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:05:00.024713 systemd[1840]: Reached target sockets.target - Sockets. Dec 16 13:05:00.024742 systemd[1840]: Reached target basic.target - Basic System. Dec 16 13:05:00.024902 systemd[1840]: Reached target default.target - Main User Target. Dec 16 13:05:00.024925 systemd[1840]: Startup finished in 226ms. Dec 16 13:05:00.025930 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:05:00.029469 waagent[1799]: 2025-12-16T13:05:00.029433Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 13:05:00.031704 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:05:00.036609 waagent[1799]: 2025-12-16T13:05:00.035093Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 13:05:00.036609 waagent[1799]: 2025-12-16T13:05:00.035550Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 13:05:00.036609 waagent[1799]: 2025-12-16T13:05:00.035818Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 13:05:00.088399 waagent[1799]: 2025-12-16T13:05:00.088364Z INFO Daemon Daemon Successfully mounted dvd Dec 16 13:05:00.115043 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Dec 16 13:05:00.116669 waagent[1799]: 2025-12-16T13:05:00.116642Z INFO Daemon Daemon Detect protocol endpoint Dec 16 13:05:00.118579 waagent[1799]: 2025-12-16T13:05:00.117195Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 13:05:00.118579 waagent[1799]: 2025-12-16T13:05:00.117475Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 13:05:00.118579 waagent[1799]: 2025-12-16T13:05:00.117966Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 13:05:00.118579 waagent[1799]: 2025-12-16T13:05:00.118094Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 13:05:00.118579 waagent[1799]: 2025-12-16T13:05:00.118280Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 13:05:00.127350 waagent[1799]: 2025-12-16T13:05:00.127320Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 13:05:00.128072 waagent[1799]: 2025-12-16T13:05:00.127684Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 13:05:00.128072 waagent[1799]: 2025-12-16T13:05:00.127847Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 13:05:00.197368 waagent[1799]: 2025-12-16T13:05:00.197322Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 13:05:00.198699 waagent[1799]: 2025-12-16T13:05:00.198655Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 13:05:00.201406 waagent[1799]: 2025-12-16T13:05:00.201373Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:05:00.220989 waagent[1799]: 2025-12-16T13:05:00.220961Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 13:05:00.223391 waagent[1799]: 2025-12-16T13:05:00.221700Z INFO Daemon Dec 16 13:05:00.223391 waagent[1799]: 2025-12-16T13:05:00.222094Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6d499cfd-2f46-44eb-90e2-fd5e32f5b247 eTag: 6928215333695227979 source: Fabric] Dec 16 13:05:00.223391 waagent[1799]: 2025-12-16T13:05:00.222337Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 13:05:00.223391 waagent[1799]: 2025-12-16T13:05:00.222641Z INFO Daemon Dec 16 13:05:00.223391 waagent[1799]: 2025-12-16T13:05:00.222829Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:05:00.230874 waagent[1799]: 2025-12-16T13:05:00.230848Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 13:05:00.390294 waagent[1799]: 2025-12-16T13:05:00.390252Z INFO Daemon Downloaded certificate {'thumbprint': 'C636EE4AF2EE60B8FE71D23A6AC7166718C7360B', 'hasPrivateKey': True} Dec 16 13:05:00.392567 waagent[1799]: 2025-12-16T13:05:00.392536Z INFO Daemon Fetch goal state completed Dec 16 13:05:00.435485 waagent[1799]: 2025-12-16T13:05:00.435455Z INFO Daemon Daemon Starting provisioning Dec 16 13:05:00.436356 waagent[1799]: 2025-12-16T13:05:00.436290Z INFO Daemon Daemon Handle ovf-env.xml. Dec 16 13:05:00.436684 waagent[1799]: 2025-12-16T13:05:00.436660Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-371cb0f3db] Dec 16 13:05:00.539304 waagent[1799]: 2025-12-16T13:05:00.539266Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-371cb0f3db] Dec 16 13:05:00.540359 waagent[1799]: 2025-12-16T13:05:00.539877Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 13:05:00.540359 waagent[1799]: 2025-12-16T13:05:00.540181Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 13:05:00.547000 systemd-networkd[1333]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 16 13:05:00.547005 systemd-networkd[1333]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:05:00.547027 systemd-networkd[1333]: eth0: DHCP lease lost Dec 16 13:05:00.547668 waagent[1799]: 2025-12-16T13:05:00.547629Z INFO Daemon Daemon Create user account if not exists Dec 16 13:05:00.550766 waagent[1799]: 2025-12-16T13:05:00.548198Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 13:05:00.550766 waagent[1799]: 2025-12-16T13:05:00.548442Z INFO Daemon Daemon Configure sudoer Dec 16 13:05:00.570814 systemd-networkd[1333]: eth0: DHCPv4 address 10.200.0.41/24, gateway 10.200.0.1 acquired from 168.63.129.16 Dec 16 13:05:00.577349 waagent[1799]: 2025-12-16T13:05:00.577301Z INFO Daemon Daemon Configure sshd Dec 16 13:05:00.584113 waagent[1799]: 2025-12-16T13:05:00.584071Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 13:05:00.586692 waagent[1799]: 2025-12-16T13:05:00.586623Z INFO Daemon Daemon Deploy ssh public key. Dec 16 13:05:00.713825 login[1802]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 13:05:00.717205 systemd-logind[1687]: New session 1 of user core. Dec 16 13:05:00.721918 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:05:01.686622 waagent[1799]: 2025-12-16T13:05:01.686576Z INFO Daemon Daemon Provisioning complete Dec 16 13:05:01.695598 waagent[1799]: 2025-12-16T13:05:01.695569Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 13:05:01.696963 waagent[1799]: 2025-12-16T13:05:01.696932Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 16 13:05:01.698706 waagent[1799]: 2025-12-16T13:05:01.698681Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 13:05:01.786555 waagent[1891]: 2025-12-16T13:05:01.786495Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 13:05:01.786730 waagent[1891]: 2025-12-16T13:05:01.786576Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Dec 16 13:05:01.786730 waagent[1891]: 2025-12-16T13:05:01.786614Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 13:05:01.786730 waagent[1891]: 2025-12-16T13:05:01.786650Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 16 13:05:01.829723 waagent[1891]: 2025-12-16T13:05:01.829677Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 13:05:01.829880 waagent[1891]: 2025-12-16T13:05:01.829852Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:05:01.829946 waagent[1891]: 2025-12-16T13:05:01.829907Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:05:01.836819 waagent[1891]: 2025-12-16T13:05:01.836758Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 13:05:01.844068 waagent[1891]: 2025-12-16T13:05:01.844041Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 13:05:01.844362 waagent[1891]: 2025-12-16T13:05:01.844336Z INFO ExtHandler Dec 16 13:05:01.844397 waagent[1891]: 2025-12-16T13:05:01.844381Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f88bfa66-4274-46af-a129-b43295e05320 eTag: 6928215333695227979 source: Fabric] Dec 16 13:05:01.844583 waagent[1891]: 2025-12-16T13:05:01.844560Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 13:05:01.844877 waagent[1891]: 2025-12-16T13:05:01.844853Z INFO ExtHandler Dec 16 13:05:01.844907 waagent[1891]: 2025-12-16T13:05:01.844890Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 13:05:01.848540 waagent[1891]: 2025-12-16T13:05:01.848514Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:05:01.910871 waagent[1891]: 2025-12-16T13:05:01.910829Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C636EE4AF2EE60B8FE71D23A6AC7166718C7360B', 'hasPrivateKey': True} Dec 16 13:05:01.911156 waagent[1891]: 2025-12-16T13:05:01.911132Z INFO ExtHandler Fetch goal state completed Dec 16 13:05:01.924105 waagent[1891]: 2025-12-16T13:05:01.924067Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Dec 16 13:05:01.927690 waagent[1891]: 2025-12-16T13:05:01.927646Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1891 Dec 16 13:05:01.927798 waagent[1891]: 2025-12-16T13:05:01.927746Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 13:05:01.928016 waagent[1891]: 2025-12-16T13:05:01.927995Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 13:05:01.928889 waagent[1891]: 2025-12-16T13:05:01.928864Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 13:05:01.929130 waagent[1891]: 2025-12-16T13:05:01.929109Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 13:05:01.929209 waagent[1891]: 2025-12-16T13:05:01.929192Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 13:05:01.929542 waagent[1891]: 2025-12-16T13:05:01.929522Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 13:05:02.067116 waagent[1891]: 2025-12-16T13:05:02.067092Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 13:05:02.067238 waagent[1891]: 2025-12-16T13:05:02.067218Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 13:05:02.071615 waagent[1891]: 2025-12-16T13:05:02.071305Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 13:05:02.075366 systemd[1]: Reload requested from client PID 1906 ('systemctl') (unit waagent.service)... Dec 16 13:05:02.075377 systemd[1]: Reloading... Dec 16 13:05:02.142809 zram_generator::config[1945]: No configuration found. Dec 16 13:05:02.145795 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#290 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 13:05:02.294256 systemd[1]: Reloading finished in 218 ms. Dec 16 13:05:02.320832 waagent[1891]: 2025-12-16T13:05:02.318649Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 13:05:02.320832 waagent[1891]: 2025-12-16T13:05:02.318857Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 13:05:02.627286 waagent[1891]: 2025-12-16T13:05:02.627219Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Dec 16 13:05:02.627449 waagent[1891]: 2025-12-16T13:05:02.627426Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 13:05:02.627930 waagent[1891]: 2025-12-16T13:05:02.627905Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 13:05:02.628248 waagent[1891]: 2025-12-16T13:05:02.628211Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 13:05:02.628395 waagent[1891]: 2025-12-16T13:05:02.628376Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:05:02.628456 waagent[1891]: 2025-12-16T13:05:02.628439Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:05:02.628567 waagent[1891]: 2025-12-16T13:05:02.628519Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 13:05:02.628567 waagent[1891]: 2025-12-16T13:05:02.628549Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 13:05:02.628842 waagent[1891]: 2025-12-16T13:05:02.628820Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 13:05:02.628989 waagent[1891]: 2025-12-16T13:05:02.628970Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 13:05:02.629202 waagent[1891]: 2025-12-16T13:05:02.629152Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 13:05:02.629240 waagent[1891]: 2025-12-16T13:05:02.629219Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 13:05:02.629329 waagent[1891]: 2025-12-16T13:05:02.629313Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 13:05:02.629678 waagent[1891]: 2025-12-16T13:05:02.629662Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 13:05:02.629892 waagent[1891]: 2025-12-16T13:05:02.629874Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 13:05:02.629892 waagent[1891]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 13:05:02.629892 waagent[1891]: eth0 00000000 0100C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 13:05:02.629892 waagent[1891]: eth0 0000C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 13:05:02.629892 waagent[1891]: eth0 0100C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:05:02.629892 waagent[1891]: eth0 10813FA8 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:05:02.629892 waagent[1891]: eth0 FEA9FEA9 0100C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 13:05:02.630354 waagent[1891]: 2025-12-16T13:05:02.630325Z INFO EnvHandler ExtHandler Configure routes Dec 16 13:05:02.631201 waagent[1891]: 2025-12-16T13:05:02.631145Z INFO EnvHandler ExtHandler Gateway:None Dec 16 13:05:02.631249 waagent[1891]: 2025-12-16T13:05:02.631222Z INFO EnvHandler ExtHandler Routes:None Dec 16 13:05:02.635375 waagent[1891]: 2025-12-16T13:05:02.635188Z INFO ExtHandler ExtHandler Dec 16 13:05:02.635424 waagent[1891]: 2025-12-16T13:05:02.635401Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ecb4896c-2ab4-44fa-928c-4b3904ca1a1c correlation fda29dd0-ffe6-4e36-acdd-223fe263c130 created: 2025-12-16T13:03:49.484016Z] Dec 16 13:05:02.635904 
waagent[1891]: 2025-12-16T13:05:02.635883Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 13:05:02.636215 waagent[1891]: 2025-12-16T13:05:02.636195Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 16 13:05:02.664298 waagent[1891]: 2025-12-16T13:05:02.663946Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 13:05:02.664298 waagent[1891]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 13:05:02.664298 waagent[1891]: 2025-12-16T13:05:02.664248Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 72C50675-35EE-46D7-BEAE-F2B159BF787C;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 13:05:02.706604 waagent[1891]: 2025-12-16T13:05:02.706561Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 13:05:02.706604 waagent[1891]: Executing ['ip', '-a', '-o', 'link']: Dec 16 13:05:02.706604 waagent[1891]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 13:05:02.706604 waagent[1891]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:28:f8:6c brd ff:ff:ff:ff:ff:ff\ alias Network Device Dec 16 13:05:02.706604 waagent[1891]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:28:f8:6c brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 13:05:02.706604 waagent[1891]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 13:05:02.706604 waagent[1891]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 13:05:02.706604 waagent[1891]: 2: eth0 inet 10.200.0.41/24 metric 1024 brd 10.200.0.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 13:05:02.706604 waagent[1891]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 13:05:02.706604 waagent[1891]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 13:05:02.706604 waagent[1891]: 2: eth0 inet6 fe80::7e1e:52ff:fe28:f86c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 13:05:02.735516 waagent[1891]: 2025-12-16T13:05:02.735478Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 13:05:02.735516 waagent[1891]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:02.735516 waagent[1891]: pkts bytes target prot opt in out source destination Dec 16 13:05:02.735516 waagent[1891]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:02.735516 waagent[1891]: pkts bytes target prot opt in out source destination Dec 16 13:05:02.735516 waagent[1891]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:02.735516 waagent[1891]: pkts bytes target prot opt in out source destination Dec 16 13:05:02.735516 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:05:02.735516 waagent[1891]: 2 294 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:05:02.735516 waagent[1891]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:05:02.737875 waagent[1891]: 2025-12-16T13:05:02.737840Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 13:05:02.737875 
waagent[1891]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:02.737875 waagent[1891]: pkts bytes target prot opt in out source destination Dec 16 13:05:02.737875 waagent[1891]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:02.737875 waagent[1891]: pkts bytes target prot opt in out source destination Dec 16 13:05:02.737875 waagent[1891]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 13:05:02.737875 waagent[1891]: pkts bytes target prot opt in out source destination Dec 16 13:05:02.737875 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 13:05:02.737875 waagent[1891]: 4 406 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 13:05:02.737875 waagent[1891]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 13:05:05.549307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:05:05.550351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:05.886395 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:05:05.887131 systemd[1]: Started sshd@0-10.200.0.41:22-10.200.16.10:48358.service - OpenSSH per-connection server daemon (10.200.16.10:48358). Dec 16 13:05:07.031254 sshd[2040]: Accepted publickey for core from 10.200.16.10 port 48358 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:07.032024 sshd-session[2040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:07.034963 systemd-logind[1687]: New session 3 of user core. Dec 16 13:05:07.040889 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:05:07.511538 systemd[1]: Started sshd@1-10.200.0.41:22-10.200.16.10:48366.service - OpenSSH per-connection server daemon (10.200.16.10:48366). Dec 16 13:05:08.062070 sshd[2046]: Accepted publickey for core from 10.200.16.10 port 48366 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:08.062806 sshd-session[2046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:08.066159 systemd-logind[1687]: New session 4 of user core. Dec 16 13:05:08.071916 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:05:08.451057 sshd[2049]: Connection closed by 10.200.16.10 port 48366 Dec 16 13:05:08.451514 sshd-session[2046]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:08.453377 systemd[1]: sshd@1-10.200.0.41:22-10.200.16.10:48366.service: Deactivated successfully. Dec 16 13:05:08.454433 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:05:08.455368 systemd-logind[1687]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:05:08.456200 systemd-logind[1687]: Removed session 4. Dec 16 13:05:08.547494 systemd[1]: Started sshd@2-10.200.0.41:22-10.200.16.10:48382.service - OpenSSH per-connection server daemon (10.200.16.10:48382). Dec 16 13:05:09.101605 sshd[2055]: Accepted publickey for core from 10.200.16.10 port 48382 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:09.102289 sshd-session[2055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:09.105003 systemd-logind[1687]: New session 5 of user core. Dec 16 13:05:09.110908 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 13:05:09.491226 sshd[2058]: Connection closed by 10.200.16.10 port 48382 Dec 16 13:05:09.491633 sshd-session[2055]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:09.493762 systemd[1]: sshd@2-10.200.0.41:22-10.200.16.10:48382.service: Deactivated successfully. Dec 16 13:05:09.494875 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:05:09.495420 systemd-logind[1687]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:05:09.496271 systemd-logind[1687]: Removed session 5. Dec 16 13:05:09.586474 systemd[1]: Started sshd@3-10.200.0.41:22-10.200.16.10:48396.service - OpenSSH per-connection server daemon (10.200.16.10:48396). Dec 16 13:05:10.132127 sshd[2064]: Accepted publickey for core from 10.200.16.10 port 48396 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:10.132838 sshd-session[2064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:10.136056 systemd-logind[1687]: New session 6 of user core. Dec 16 13:05:10.141893 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:05:10.521237 sshd[2067]: Connection closed by 10.200.16.10 port 48396 Dec 16 13:05:10.521691 sshd-session[2064]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:10.523551 systemd[1]: sshd@3-10.200.0.41:22-10.200.16.10:48396.service: Deactivated successfully. Dec 16 13:05:10.524998 systemd-logind[1687]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:05:10.525089 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:05:10.526256 systemd-logind[1687]: Removed session 6. Dec 16 13:05:10.616482 systemd[1]: Started sshd@4-10.200.0.41:22-10.200.16.10:38548.service - OpenSSH per-connection server daemon (10.200.16.10:38548). Dec 16 13:05:10.837308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:10.846007 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:10.877412 kubelet[2081]: E1216 13:05:10.877369 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:10.880049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:10.880179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:10.880459 systemd[1]: kubelet.service: Consumed 114ms CPU time, 109.1M memory peak. Dec 16 13:05:11.162397 sshd[2073]: Accepted publickey for core from 10.200.16.10 port 38548 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:05:11.163153 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:05:11.166406 systemd-logind[1687]: New session 7 of user core. Dec 16 13:05:11.170888 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:05:11.604585 sudo[2089]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:05:11.604792 sudo[2089]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:05:13.240051 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 13:05:13.246149 (dockerd)[2108]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:05:14.951943 dockerd[2108]: time="2025-12-16T13:05:14.951908380Z" level=info msg="Starting up" Dec 16 13:05:14.952493 dockerd[2108]: time="2025-12-16T13:05:14.952475205Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:05:14.962611 dockerd[2108]: time="2025-12-16T13:05:14.962577885Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:05:15.625935 dockerd[2108]: time="2025-12-16T13:05:15.625914804Z" level=info msg="Loading containers: start." Dec 16 13:05:15.706797 kernel: Initializing XFRM netlink socket Dec 16 13:05:16.025064 systemd-networkd[1333]: docker0: Link UP Dec 16 13:05:16.045788 dockerd[2108]: time="2025-12-16T13:05:16.045755478Z" level=info msg="Loading containers: done." Dec 16 13:05:16.056143 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2421656388-merged.mount: Deactivated successfully. Dec 16 13:05:16.484763 dockerd[2108]: time="2025-12-16T13:05:16.484737821Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:05:16.484844 dockerd[2108]: time="2025-12-16T13:05:16.484802634Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:05:16.484867 dockerd[2108]: time="2025-12-16T13:05:16.484859750Z" level=info msg="Initializing buildkit" Dec 16 13:05:16.630328 dockerd[2108]: time="2025-12-16T13:05:16.630294889Z" level=info msg="Completed buildkit initialization" Dec 16 13:05:16.636324 dockerd[2108]: time="2025-12-16T13:05:16.636295678Z" level=info msg="Daemon has completed initialization" Dec 16 13:05:16.636569 dockerd[2108]: time="2025-12-16T13:05:16.636344877Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:05:16.636498 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:05:17.707405 containerd[1723]: time="2025-12-16T13:05:17.707379944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 13:05:18.371824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount136338150.mount: Deactivated successfully. 
Dec 16 13:05:18.459052 chronyd[1666]: Selected source PHC0 Dec 16 13:05:19.564436 containerd[1723]: time="2025-12-16T13:05:19.564407915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:19.566885 containerd[1723]: time="2025-12-16T13:05:19.566864831Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114150" Dec 16 13:05:19.569524 containerd[1723]: time="2025-12-16T13:05:19.569490228Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:19.573587 containerd[1723]: time="2025-12-16T13:05:19.573553937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:19.574410 containerd[1723]: time="2025-12-16T13:05:19.574049929Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.86664005s" Dec 16 13:05:19.574410 containerd[1723]: time="2025-12-16T13:05:19.574076371Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 13:05:19.574640 containerd[1723]: time="2025-12-16T13:05:19.574616405Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 13:05:20.810630 containerd[1723]: time="2025-12-16T13:05:20.810603340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:20.813019 containerd[1723]: time="2025-12-16T13:05:20.812990473Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016713" Dec 16 13:05:20.815956 containerd[1723]: time="2025-12-16T13:05:20.815905599Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:20.824099 containerd[1723]: time="2025-12-16T13:05:20.824053785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:20.824742 containerd[1723]: time="2025-12-16T13:05:20.824720964Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.250076996s" Dec 16 13:05:20.824794 containerd[1723]: time="2025-12-16T13:05:20.824743223Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference 
\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 13:05:20.825143 containerd[1723]: time="2025-12-16T13:05:20.825052787Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 13:05:21.049350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:05:21.050624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:21.539517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:21.545041 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:05:21.579391 kubelet[2384]: E1216 13:05:21.579346 2384 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:05:21.580924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:05:21.581103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:05:21.581414 systemd[1]: kubelet.service: Consumed 112ms CPU time, 108.7M memory peak. Dec 16 13:05:22.290735 containerd[1723]: time="2025-12-16T13:05:22.290708338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:22.294824 containerd[1723]: time="2025-12-16T13:05:22.294803554Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158034" Dec 16 13:05:22.297864 containerd[1723]: time="2025-12-16T13:05:22.297828098Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:22.302030 containerd[1723]: time="2025-12-16T13:05:22.301991999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:22.302730 containerd[1723]: time="2025-12-16T13:05:22.302528281Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.4774536s" Dec 16 13:05:22.302730 containerd[1723]: time="2025-12-16T13:05:22.302551041Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 13:05:22.303137 containerd[1723]: time="2025-12-16T13:05:22.303119166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 13:05:23.155404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1832033844.mount: Deactivated successfully. 
Dec 16 13:05:23.503213 containerd[1723]: time="2025-12-16T13:05:23.503155138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:23.509765 containerd[1723]: time="2025-12-16T13:05:23.509745101Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31929990" Dec 16 13:05:23.512460 containerd[1723]: time="2025-12-16T13:05:23.512427296Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:23.516492 containerd[1723]: time="2025-12-16T13:05:23.516143069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:23.516492 containerd[1723]: time="2025-12-16T13:05:23.516414119Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.213272561s" Dec 16 13:05:23.516492 containerd[1723]: time="2025-12-16T13:05:23.516435481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 13:05:23.516821 containerd[1723]: time="2025-12-16T13:05:23.516804096Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 13:05:24.037078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975578810.mount: Deactivated successfully. 
Dec 16 13:05:24.955117 containerd[1723]: time="2025-12-16T13:05:24.955086099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:24.963949 containerd[1723]: time="2025-12-16T13:05:24.963910339Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Dec 16 13:05:24.966531 containerd[1723]: time="2025-12-16T13:05:24.966499979Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:24.970741 containerd[1723]: time="2025-12-16T13:05:24.970588838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:24.971236 containerd[1723]: time="2025-12-16T13:05:24.971208979Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.454386158s" Dec 16 13:05:24.971236 containerd[1723]: time="2025-12-16T13:05:24.971233272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 13:05:24.971759 containerd[1723]: time="2025-12-16T13:05:24.971742981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:05:25.398278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236661523.mount: Deactivated successfully. 
Dec 16 13:05:25.420791 containerd[1723]: time="2025-12-16T13:05:25.420762545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:25.424343 containerd[1723]: time="2025-12-16T13:05:25.424318578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Dec 16 13:05:25.427014 containerd[1723]: time="2025-12-16T13:05:25.426982781Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:25.434431 containerd[1723]: time="2025-12-16T13:05:25.434397139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:05:25.434796 containerd[1723]: time="2025-12-16T13:05:25.434708482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 462.945901ms" Dec 16 13:05:25.434796 containerd[1723]: time="2025-12-16T13:05:25.434729395Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:05:25.435177 containerd[1723]: time="2025-12-16T13:05:25.435062561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 13:05:25.947315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872239913.mount: Deactivated successfully. Dec 16 13:05:26.446679 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Dec 16 13:05:27.468994 containerd[1723]: time="2025-12-16T13:05:27.468967545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:27.471570 containerd[1723]: time="2025-12-16T13:05:27.471483891Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58925893" Dec 16 13:05:27.473977 containerd[1723]: time="2025-12-16T13:05:27.473959008Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:27.477530 containerd[1723]: time="2025-12-16T13:05:27.477507701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:27.478209 containerd[1723]: time="2025-12-16T13:05:27.478033832Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.042951442s" Dec 16 13:05:27.478209 containerd[1723]: time="2025-12-16T13:05:27.478055220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 13:05:29.855312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:29.855448 systemd[1]: kubelet.service: Consumed 112ms CPU time, 108.7M memory peak. Dec 16 13:05:29.857302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:29.881842 systemd[1]: Reload requested from client PID 2543 ('systemctl') (unit session-7.scope)... Dec 16 13:05:29.881915 systemd[1]: Reloading... Dec 16 13:05:29.943807 zram_generator::config[2586]: No configuration found. Dec 16 13:05:30.112061 systemd[1]: Reloading finished in 229 ms. Dec 16 13:05:30.130972 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:05:30.131021 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:05:30.131344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:30.132470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:30.598728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:30.606054 (kubelet)[2657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:05:30.636576 kubelet[2657]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:30.636576 kubelet[2657]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:05:30.636576 kubelet[2657]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:30.636820 kubelet[2657]: I1216 13:05:30.636621 2657 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:05:30.975343 kubelet[2657]: I1216 13:05:30.975289 2657 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:05:30.975343 kubelet[2657]: I1216 13:05:30.975304 2657 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:05:30.975595 kubelet[2657]: I1216 13:05:30.975464 2657 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:05:30.997802 kubelet[2657]: E1216 13:05:30.996895 2657 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.0.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:05:30.998055 kubelet[2657]: I1216 13:05:30.998042 2657 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:05:31.003519 kubelet[2657]: I1216 13:05:31.003507 2657 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:05:31.006619 kubelet[2657]: I1216 13:05:31.006608 2657 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 13:05:31.006849 kubelet[2657]: I1216 13:05:31.006837 2657 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:05:31.006988 kubelet[2657]: I1216 13:05:31.006887 2657 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-371cb0f3db","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:05:31.007089 kubelet[2657]: I1216 13:05:31.007084 2657 
topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:05:31.007116 kubelet[2657]: I1216 13:05:31.007112 2657 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:05:31.007204 kubelet[2657]: I1216 13:05:31.007200 2657 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:31.009387 kubelet[2657]: I1216 13:05:31.009375 2657 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:05:31.009455 kubelet[2657]: I1216 13:05:31.009449 2657 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:05:31.009506 kubelet[2657]: I1216 13:05:31.009501 2657 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:05:31.009547 kubelet[2657]: I1216 13:05:31.009543 2657 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:05:31.012892 kubelet[2657]: E1216 13:05:31.012713 2657 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-371cb0f3db&limit=500&resourceVersion=0\": dial tcp 10.200.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:05:31.014684 kubelet[2657]: E1216 13:05:31.014667 2657 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:05:31.014843 kubelet[2657]: I1216 13:05:31.014833 2657 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:05:31.015323 kubelet[2657]: I1216 13:05:31.015315 2657 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:05:31.015960 kubelet[2657]: W1216 13:05:31.015951 2657 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 13:05:31.017978 kubelet[2657]: I1216 13:05:31.017968 2657 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:05:31.018095 kubelet[2657]: I1216 13:05:31.018089 2657 server.go:1289] "Started kubelet" Dec 16 13:05:31.020097 kubelet[2657]: I1216 13:05:31.020073 2657 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:05:31.020873 kubelet[2657]: I1216 13:05:31.020799 2657 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:05:31.022808 kubelet[2657]: I1216 13:05:31.022448 2657 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:05:31.022808 kubelet[2657]: I1216 13:05:31.022700 2657 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:05:31.023986 kubelet[2657]: E1216 13:05:31.022901 2657 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-371cb0f3db.1881b3e664a0fba3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-371cb0f3db,UID:ci-4459.2.2-a-371cb0f3db,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-371cb0f3db,},FirstTimestamp:2025-12-16 13:05:31.018066851 +0000 UTC m=+0.409315615,LastTimestamp:2025-12-16 13:05:31.018066851 +0000 UTC m=+0.409315615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-371cb0f3db,}" Dec 16 13:05:31.026812 kubelet[2657]: I1216 13:05:31.025667 2657 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:05:31.026812 kubelet[2657]: I1216 13:05:31.026453 2657 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:05:31.028545 kubelet[2657]: E1216 13:05:31.028532 2657 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:05:31.029329 kubelet[2657]: E1216 13:05:31.029317 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:31.029410 kubelet[2657]: I1216 13:05:31.029406 2657 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:05:31.029638 kubelet[2657]: I1216 13:05:31.029629 2657 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:05:31.029711 kubelet[2657]: I1216 13:05:31.029707 2657 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:05:31.030045 kubelet[2657]: E1216 13:05:31.030031 2657 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:05:31.030710 kubelet[2657]: I1216 13:05:31.030697 2657 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:05:31.030849 kubelet[2657]: I1216 13:05:31.030837 2657 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:05:31.031857 kubelet[2657]: I1216 13:05:31.031846 2657 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:05:31.046272 kubelet[2657]: I1216 13:05:31.046248 2657 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:05:31.046989 kubelet[2657]: I1216 13:05:31.046968 2657 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:05:31.046989 kubelet[2657]: I1216 13:05:31.046989 2657 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:05:31.047062 kubelet[2657]: I1216 13:05:31.047003 2657 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:05:31.047062 kubelet[2657]: I1216 13:05:31.047010 2657 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:05:31.047062 kubelet[2657]: E1216 13:05:31.047037 2657 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:05:31.051287 kubelet[2657]: E1216 13:05:31.051261 2657 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:05:31.055712 kubelet[2657]: E1216 13:05:31.054916 2657 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-371cb0f3db?timeout=10s\": dial tcp 10.200.0.41:6443: connect: connection refused" interval="200ms" Dec 16 13:05:31.057956 kubelet[2657]: I1216 13:05:31.057941 2657 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:05:31.058021 kubelet[2657]: I1216 13:05:31.057984 2657 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:05:31.058021 kubelet[2657]: I1216 13:05:31.057996 2657 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:31.062826 kubelet[2657]: I1216 13:05:31.062815 2657 policy_none.go:49] "None policy: Start" Dec 16 13:05:31.062826 kubelet[2657]: I1216 13:05:31.062828 2657 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:05:31.062892 kubelet[2657]: I1216 13:05:31.062837 2657 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:05:31.068937 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:05:31.074987 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:05:31.077064 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:05:31.087236 kubelet[2657]: E1216 13:05:31.087217 2657 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:05:31.087350 kubelet[2657]: I1216 13:05:31.087340 2657 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:05:31.087573 kubelet[2657]: I1216 13:05:31.087353 2657 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:05:31.087887 kubelet[2657]: I1216 13:05:31.087873 2657 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:05:31.089042 kubelet[2657]: E1216 13:05:31.088864 2657 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:05:31.089042 kubelet[2657]: E1216 13:05:31.088969 2657 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:31.156890 systemd[1]: Created slice kubepods-burstable-podece36b2c7344bab1676fdd7257a63b3a.slice - libcontainer container kubepods-burstable-podece36b2c7344bab1676fdd7257a63b3a.slice. 
Dec 16 13:05:31.165745 kubelet[2657]: E1216 13:05:31.165723 2657 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.169295 systemd[1]: Created slice kubepods-burstable-pod10e9950f8a2d78927d217b36cab5a2ad.slice - libcontainer container kubepods-burstable-pod10e9950f8a2d78927d217b36cab5a2ad.slice. Dec 16 13:05:31.170660 kubelet[2657]: E1216 13:05:31.170643 2657 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.172479 systemd[1]: Created slice kubepods-burstable-pod9e526fec080023de62bb654218b2bdd7.slice - libcontainer container kubepods-burstable-pod9e526fec080023de62bb654218b2bdd7.slice. Dec 16 13:05:31.173908 kubelet[2657]: E1216 13:05:31.173893 2657 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.189125 kubelet[2657]: I1216 13:05:31.189112 2657 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.189345 kubelet[2657]: E1216 13:05:31.189329 2657 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.41:6443/api/v1/nodes\": dial tcp 10.200.0.41:6443: connect: connection refused" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.256155 kubelet[2657]: E1216 13:05:31.256082 2657 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-371cb0f3db?timeout=10s\": dial tcp 10.200.0.41:6443: connect: connection refused" interval="400ms" Dec 16 13:05:31.331419 kubelet[2657]: I1216 13:05:31.331379 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10e9950f8a2d78927d217b36cab5a2ad-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-371cb0f3db\" (UID: \"10e9950f8a2d78927d217b36cab5a2ad\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.331490 kubelet[2657]: I1216 13:05:31.331422 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e526fec080023de62bb654218b2bdd7-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-371cb0f3db\" (UID: \"9e526fec080023de62bb654218b2bdd7\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.331490 kubelet[2657]: I1216 13:05:31.331439 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e526fec080023de62bb654218b2bdd7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-371cb0f3db\" (UID: \"9e526fec080023de62bb654218b2bdd7\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.331490 kubelet[2657]: I1216 13:05:31.331454 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.331490 kubelet[2657]: I1216 13:05:31.331475 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.331569 kubelet[2657]: I1216 13:05:31.331489 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.331569 kubelet[2657]: I1216 13:05:31.331504 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e526fec080023de62bb654218b2bdd7-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-371cb0f3db\" (UID: \"9e526fec080023de62bb654218b2bdd7\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.331569 kubelet[2657]: I1216 13:05:31.331523 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.331569 kubelet[2657]: I1216 13:05:31.331538 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.390527 kubelet[2657]: I1216 13:05:31.390512 2657 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.390724 kubelet[2657]: E1216 13:05:31.390705 2657 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.41:6443/api/v1/nodes\": dial tcp 10.200.0.41:6443: connect: connection refused" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.466876 containerd[1723]: time="2025-12-16T13:05:31.466848561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-371cb0f3db,Uid:ece36b2c7344bab1676fdd7257a63b3a,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:31.471215 containerd[1723]: time="2025-12-16T13:05:31.471187962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-371cb0f3db,Uid:10e9950f8a2d78927d217b36cab5a2ad,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:31.474960 containerd[1723]: time="2025-12-16T13:05:31.474939385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-371cb0f3db,Uid:9e526fec080023de62bb654218b2bdd7,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:31.543947 containerd[1723]: time="2025-12-16T13:05:31.543859127Z" level=info msg="connecting to shim 
bc4328a38998b5044e52170e7eefcd1e128f59aa904b6df8af9415ac177034f8" address="unix:///run/containerd/s/9be1448d09edec9095322faca7e512e604350fdb58f87446805cf845ea127fee" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:31.547353 containerd[1723]: time="2025-12-16T13:05:31.547328482Z" level=info msg="connecting to shim c9a01ca79f75b222e527ae9552b96c484036caa329f32eba0d0b4a738d9fa248" address="unix:///run/containerd/s/98d52d9f7ae5c0108d7f9edf94dbcbc2060a78d63a86b43f7829c581311ada69" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:31.567967 containerd[1723]: time="2025-12-16T13:05:31.567943845Z" level=info msg="connecting to shim 94c73b5415858552c15ca4038bfb74b00babe4127ddcdfeb673f3d541ba3bc43" address="unix:///run/containerd/s/1ab4896c598d416c88a6bb688b94fa0913a4dd586b8997ca6a522b9d011630f8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:31.591949 systemd[1]: Started cri-containerd-bc4328a38998b5044e52170e7eefcd1e128f59aa904b6df8af9415ac177034f8.scope - libcontainer container bc4328a38998b5044e52170e7eefcd1e128f59aa904b6df8af9415ac177034f8. Dec 16 13:05:31.593163 systemd[1]: Started cri-containerd-c9a01ca79f75b222e527ae9552b96c484036caa329f32eba0d0b4a738d9fa248.scope - libcontainer container c9a01ca79f75b222e527ae9552b96c484036caa329f32eba0d0b4a738d9fa248. Dec 16 13:05:31.597616 systemd[1]: Started cri-containerd-94c73b5415858552c15ca4038bfb74b00babe4127ddcdfeb673f3d541ba3bc43.scope - libcontainer container 94c73b5415858552c15ca4038bfb74b00babe4127ddcdfeb673f3d541ba3bc43. Dec 16 13:05:31.653575 containerd[1723]: time="2025-12-16T13:05:31.653434615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-371cb0f3db,Uid:ece36b2c7344bab1676fdd7257a63b3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc4328a38998b5044e52170e7eefcd1e128f59aa904b6df8af9415ac177034f8\"" Dec 16 13:05:31.657136 kubelet[2657]: E1216 13:05:31.656421 2657 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-371cb0f3db?timeout=10s\": dial tcp 10.200.0.41:6443: connect: connection refused" interval="800ms" Dec 16 13:05:31.664492 containerd[1723]: time="2025-12-16T13:05:31.664471161Z" level=info msg="CreateContainer within sandbox \"bc4328a38998b5044e52170e7eefcd1e128f59aa904b6df8af9415ac177034f8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:05:31.674951 containerd[1723]: time="2025-12-16T13:05:31.674936515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-371cb0f3db,Uid:10e9950f8a2d78927d217b36cab5a2ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9a01ca79f75b222e527ae9552b96c484036caa329f32eba0d0b4a738d9fa248\"" Dec 16 13:05:31.678506 containerd[1723]: time="2025-12-16T13:05:31.678490400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-371cb0f3db,Uid:9e526fec080023de62bb654218b2bdd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"94c73b5415858552c15ca4038bfb74b00babe4127ddcdfeb673f3d541ba3bc43\"" Dec 16 13:05:31.682654 containerd[1723]: time="2025-12-16T13:05:31.682637429Z" level=info msg="CreateContainer within sandbox \"c9a01ca79f75b222e527ae9552b96c484036caa329f32eba0d0b4a738d9fa248\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:05:31.685628 containerd[1723]: time="2025-12-16T13:05:31.685613108Z" level=info msg="Container 
a0d67c0feb98639e3f0867474dc3955a885a38aec3bbddf92fc5c2cca7cc8b45: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:31.686992 containerd[1723]: time="2025-12-16T13:05:31.686978558Z" level=info msg="CreateContainer within sandbox \"94c73b5415858552c15ca4038bfb74b00babe4127ddcdfeb673f3d541ba3bc43\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:05:31.717828 containerd[1723]: time="2025-12-16T13:05:31.717773351Z" level=info msg="CreateContainer within sandbox \"bc4328a38998b5044e52170e7eefcd1e128f59aa904b6df8af9415ac177034f8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a0d67c0feb98639e3f0867474dc3955a885a38aec3bbddf92fc5c2cca7cc8b45\"" Dec 16 13:05:31.718255 containerd[1723]: time="2025-12-16T13:05:31.718237123Z" level=info msg="StartContainer for \"a0d67c0feb98639e3f0867474dc3955a885a38aec3bbddf92fc5c2cca7cc8b45\"" Dec 16 13:05:31.719048 containerd[1723]: time="2025-12-16T13:05:31.719028841Z" level=info msg="connecting to shim a0d67c0feb98639e3f0867474dc3955a885a38aec3bbddf92fc5c2cca7cc8b45" address="unix:///run/containerd/s/9be1448d09edec9095322faca7e512e604350fdb58f87446805cf845ea127fee" protocol=ttrpc version=3 Dec 16 13:05:31.721642 containerd[1723]: time="2025-12-16T13:05:31.721573644Z" level=info msg="Container e681aedac6837eca93ae5569af3c354809afdd8bcbe7979cf68b6e06bb123e21: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:31.734947 systemd[1]: Started cri-containerd-a0d67c0feb98639e3f0867474dc3955a885a38aec3bbddf92fc5c2cca7cc8b45.scope - libcontainer container a0d67c0feb98639e3f0867474dc3955a885a38aec3bbddf92fc5c2cca7cc8b45. Dec 16 13:05:31.741420 containerd[1723]: time="2025-12-16T13:05:31.740923632Z" level=info msg="Container 9720d124dabba7b232ec2f37305a1a75add07e26a5e8a04381db3d2552c6f3ea: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:31.745657 containerd[1723]: time="2025-12-16T13:05:31.745638845Z" level=info msg="CreateContainer within sandbox \"c9a01ca79f75b222e527ae9552b96c484036caa329f32eba0d0b4a738d9fa248\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e681aedac6837eca93ae5569af3c354809afdd8bcbe7979cf68b6e06bb123e21\"" Dec 16 13:05:31.746815 containerd[1723]: time="2025-12-16T13:05:31.746577674Z" level=info msg="StartContainer for \"e681aedac6837eca93ae5569af3c354809afdd8bcbe7979cf68b6e06bb123e21\"" Dec 16 13:05:31.747411 containerd[1723]: time="2025-12-16T13:05:31.747391070Z" level=info msg="connecting to shim e681aedac6837eca93ae5569af3c354809afdd8bcbe7979cf68b6e06bb123e21" address="unix:///run/containerd/s/98d52d9f7ae5c0108d7f9edf94dbcbc2060a78d63a86b43f7829c581311ada69" protocol=ttrpc version=3 Dec 16 13:05:31.765793 containerd[1723]: time="2025-12-16T13:05:31.764647712Z" level=info msg="CreateContainer within sandbox \"94c73b5415858552c15ca4038bfb74b00babe4127ddcdfeb673f3d541ba3bc43\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9720d124dabba7b232ec2f37305a1a75add07e26a5e8a04381db3d2552c6f3ea\"" Dec 16 13:05:31.766174 containerd[1723]: time="2025-12-16T13:05:31.766158249Z" level=info msg="StartContainer for \"9720d124dabba7b232ec2f37305a1a75add07e26a5e8a04381db3d2552c6f3ea\"" Dec 16 13:05:31.766989 systemd[1]: Started cri-containerd-e681aedac6837eca93ae5569af3c354809afdd8bcbe7979cf68b6e06bb123e21.scope - libcontainer container e681aedac6837eca93ae5569af3c354809afdd8bcbe7979cf68b6e06bb123e21. 
Dec 16 13:05:31.768179 containerd[1723]: time="2025-12-16T13:05:31.768156247Z" level=info msg="connecting to shim 9720d124dabba7b232ec2f37305a1a75add07e26a5e8a04381db3d2552c6f3ea" address="unix:///run/containerd/s/1ab4896c598d416c88a6bb688b94fa0913a4dd586b8997ca6a522b9d011630f8" protocol=ttrpc version=3 Dec 16 13:05:31.793047 kubelet[2657]: I1216 13:05:31.793031 2657 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.793923 kubelet[2657]: E1216 13:05:31.793771 2657 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.0.41:6443/api/v1/nodes\": dial tcp 10.200.0.41:6443: connect: connection refused" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:31.795828 containerd[1723]: time="2025-12-16T13:05:31.795762095Z" level=info msg="StartContainer for \"a0d67c0feb98639e3f0867474dc3955a885a38aec3bbddf92fc5c2cca7cc8b45\" returns successfully" Dec 16 13:05:31.796465 systemd[1]: Started cri-containerd-9720d124dabba7b232ec2f37305a1a75add07e26a5e8a04381db3d2552c6f3ea.scope - libcontainer container 9720d124dabba7b232ec2f37305a1a75add07e26a5e8a04381db3d2552c6f3ea. Dec 16 13:05:31.836338 containerd[1723]: time="2025-12-16T13:05:31.836315954Z" level=info msg="StartContainer for \"e681aedac6837eca93ae5569af3c354809afdd8bcbe7979cf68b6e06bb123e21\" returns successfully" Dec 16 13:05:31.882678 containerd[1723]: time="2025-12-16T13:05:31.882650866Z" level=info msg="StartContainer for \"9720d124dabba7b232ec2f37305a1a75add07e26a5e8a04381db3d2552c6f3ea\" returns successfully" Dec 16 13:05:31.887244 kubelet[2657]: E1216 13:05:31.887220 2657 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-371cb0f3db&limit=500&resourceVersion=0\": dial tcp 10.200.0.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:05:32.062931 kubelet[2657]: E1216 13:05:32.062913 2657 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:32.064453 kubelet[2657]: E1216 13:05:32.064437 2657 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:32.067194 kubelet[2657]: E1216 13:05:32.067173 2657 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:32.596500 kubelet[2657]: I1216 13:05:32.596481 2657 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:33.069680 kubelet[2657]: E1216 13:05:33.069370 2657 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:33.069680 kubelet[2657]: E1216 13:05:33.069605 2657 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:33.393965 kubelet[2657]: E1216 13:05:33.393316 2657 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-371cb0f3db\" not 
found" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:33.511741 kubelet[2657]: E1216 13:05:33.511532 2657 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.2-a-371cb0f3db.1881b3e664a0fba3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-371cb0f3db,UID:ci-4459.2.2-a-371cb0f3db,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-371cb0f3db,},FirstTimestamp:2025-12-16 13:05:31.018066851 +0000 UTC m=+0.409315615,LastTimestamp:2025-12-16 13:05:31.018066851 +0000 UTC m=+0.409315615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-371cb0f3db,}" Dec 16 13:05:33.545203 kubelet[2657]: I1216 13:05:33.544828 2657 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:33.545203 kubelet[2657]: E1216 13:05:33.544851 2657 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-a-371cb0f3db\": node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:33.577162 kubelet[2657]: E1216 13:05:33.577142 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:33.678078 kubelet[2657]: E1216 13:05:33.678021 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:33.779124 kubelet[2657]: E1216 13:05:33.779107 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:33.879935 kubelet[2657]: E1216 13:05:33.879918 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:33.980492 kubelet[2657]: E1216 13:05:33.980414 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:34.081296 kubelet[2657]: E1216 13:05:34.081267 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:34.182041 kubelet[2657]: E1216 13:05:34.182017 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-371cb0f3db\" not found" Dec 16 13:05:34.233891 kubelet[2657]: I1216 13:05:34.233845 2657 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:34.238188 kubelet[2657]: E1216 13:05:34.238152 2657 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-371cb0f3db\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:34.238188 kubelet[2657]: I1216 13:05:34.238185 2657 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:34.239481 kubelet[2657]: E1216 13:05:34.239461 2657 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-371cb0f3db\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" 
Dec 16 13:05:34.239481 kubelet[2657]: I1216 13:05:34.239478 2657 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:34.240744 kubelet[2657]: E1216 13:05:34.240723 2657 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:35.015425 kubelet[2657]: I1216 13:05:35.015404 2657 apiserver.go:52] "Watching apiserver" Dec 16 13:05:35.030190 kubelet[2657]: I1216 13:05:35.030173 2657 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:05:35.548670 systemd[1]: Reload requested from client PID 2930 ('systemctl') (unit session-7.scope)... Dec 16 13:05:35.548689 systemd[1]: Reloading... Dec 16 13:05:35.623814 zram_generator::config[2977]: No configuration found. Dec 16 13:05:35.784012 systemd[1]: Reloading finished in 234 ms. Dec 16 13:05:35.812204 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:35.829437 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:05:35.829653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:35.829698 systemd[1]: kubelet.service: Consumed 651ms CPU time, 128.1M memory peak. Dec 16 13:05:35.830988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:05:36.292489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:05:36.301140 (kubelet)[3044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:05:36.332801 kubelet[3044]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:05:36.332801 kubelet[3044]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:05:36.332801 kubelet[3044]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:05:36.332801 kubelet[3044]: I1216 13:05:36.332787 3044 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:05:36.339803 kubelet[3044]: I1216 13:05:36.339767 3044 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:05:36.340964 kubelet[3044]: I1216 13:05:36.340946 3044 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:05:36.341308 kubelet[3044]: I1216 13:05:36.341182 3044 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:05:36.342573 kubelet[3044]: I1216 13:05:36.342556 3044 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:05:36.345904 kubelet[3044]: I1216 13:05:36.345470 3044 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:05:36.348833 kubelet[3044]: I1216 13:05:36.348818 3044 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:05:36.352727 kubelet[3044]: I1216 13:05:36.352715 3044 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 13:05:36.352958 kubelet[3044]: I1216 13:05:36.352940 3044 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:05:36.353094 kubelet[3044]: I1216 13:05:36.352993 3044 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-371cb0f3db","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:05:36.353183 kubelet[3044]: I1216 13:05:36.353178 3044 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:05:36.353206 kubelet[3044]: I1216 13:05:36.353204 3044 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:05:36.353254 kubelet[3044]: I1216 13:05:36.353251 3044 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:36.353388 kubelet[3044]: 
I1216 13:05:36.353383 3044 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:05:36.353426 kubelet[3044]: I1216 13:05:36.353422 3044 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:05:36.353476 kubelet[3044]: I1216 13:05:36.353471 3044 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:05:36.353518 kubelet[3044]: I1216 13:05:36.353513 3044 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:05:36.358278 kubelet[3044]: I1216 13:05:36.357903 3044 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:05:36.358462 kubelet[3044]: I1216 13:05:36.358448 3044 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:05:36.363767 kubelet[3044]: I1216 13:05:36.362765 3044 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:05:36.363767 kubelet[3044]: I1216 13:05:36.362820 3044 server.go:1289] "Started kubelet" Dec 16 13:05:36.365793 kubelet[3044]: I1216 13:05:36.364979 3044 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:05:36.372370 kubelet[3044]: I1216 13:05:36.371940 3044 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:05:36.373413 kubelet[3044]: I1216 13:05:36.372759 3044 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:05:36.377868 kubelet[3044]: I1216 13:05:36.377832 3044 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:05:36.378194 kubelet[3044]: I1216 13:05:36.378178 3044 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:05:36.378632 kubelet[3044]: I1216 13:05:36.378615 3044 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:05:36.381057 kubelet[3044]: I1216 13:05:36.381037 3044 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:05:36.383320 kubelet[3044]: I1216 13:05:36.383303 3044 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:05:36.383382 kubelet[3044]: I1216 13:05:36.383372 3044 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:05:36.384543 kubelet[3044]: I1216 13:05:36.384517 3044 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:05:36.384601 kubelet[3044]: I1216 13:05:36.384594 3044 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:05:36.387665 kubelet[3044]: I1216 13:05:36.387621 3044 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:05:36.387995 kubelet[3044]: I1216 13:05:36.387980 3044 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:05:36.389408 kubelet[3044]: I1216 13:05:36.389303 3044 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:05:36.389408 kubelet[3044]: I1216 13:05:36.389317 3044 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:05:36.389408 kubelet[3044]: I1216 13:05:36.389333 3044 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:05:36.389408 kubelet[3044]: I1216 13:05:36.389342 3044 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:05:36.389408 kubelet[3044]: E1216 13:05:36.389370 3044 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:05:36.391042 kubelet[3044]: E1216 13:05:36.391025 3044 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:05:36.431467 kubelet[3044]: I1216 13:05:36.431456 3044 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:05:36.431570 kubelet[3044]: I1216 13:05:36.431562 3044 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:05:36.431608 kubelet[3044]: I1216 13:05:36.431605 3044 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:05:36.431698 kubelet[3044]: I1216 13:05:36.431694 3044 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:05:36.431731 kubelet[3044]: I1216 13:05:36.431722 3044 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:05:36.431752 kubelet[3044]: I1216 13:05:36.431749 3044 policy_none.go:49] "None policy: Start" Dec 16 13:05:36.431774 kubelet[3044]: I1216 13:05:36.431771 3044 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:05:36.431811 kubelet[3044]: I1216 13:05:36.431806 3044 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:05:36.431885 kubelet[3044]: I1216 13:05:36.431875 3044 state_mem.go:75] "Updated machine memory state" Dec 16 13:05:36.434359 kubelet[3044]: E1216 13:05:36.434348 3044 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:05:36.434772 kubelet[3044]: I1216 13:05:36.434721 3044 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:05:36.434772 kubelet[3044]: I1216 13:05:36.434732 3044 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:05:36.435197 kubelet[3044]: I1216 13:05:36.435188 3044 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:05:36.436720 kubelet[3044]: E1216 13:05:36.436708 3044 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:05:36.490236 kubelet[3044]: I1216 13:05:36.490214 3044 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.490430 kubelet[3044]: I1216 13:05:36.490421 3044 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.491803 kubelet[3044]: I1216 13:05:36.490446 3044 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.499825 kubelet[3044]: I1216 13:05:36.499761 3044 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:36.510391 kubelet[3044]: I1216 13:05:36.510376 3044 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:36.510733 kubelet[3044]: I1216 13:05:36.510704 3044 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:36.541165 kubelet[3044]: I1216 13:05:36.541146 3044 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.556415 kubelet[3044]: I1216 13:05:36.556391 3044 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.556803 kubelet[3044]: I1216 13:05:36.556515 3044 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.685860 kubelet[3044]: I1216 13:05:36.685843 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e526fec080023de62bb654218b2bdd7-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-371cb0f3db\" (UID: \"9e526fec080023de62bb654218b2bdd7\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.685979 kubelet[3044]: I1216 13:05:36.685874 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e526fec080023de62bb654218b2bdd7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-371cb0f3db\" (UID: \"9e526fec080023de62bb654218b2bdd7\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.685979 kubelet[3044]: I1216 13:05:36.685894 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.685979 kubelet[3044]: I1216 13:05:36.685909 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10e9950f8a2d78927d217b36cab5a2ad-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-371cb0f3db\" (UID: \"10e9950f8a2d78927d217b36cab5a2ad\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.685979 kubelet[3044]: I1216 
13:05:36.685923 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e526fec080023de62bb654218b2bdd7-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-371cb0f3db\" (UID: \"9e526fec080023de62bb654218b2bdd7\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.685979 kubelet[3044]: I1216 13:05:36.685936 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.686092 kubelet[3044]: I1216 13:05:36.685951 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.686092 kubelet[3044]: I1216 13:05:36.685966 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:36.686092 kubelet[3044]: I1216 13:05:36.685979 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ece36b2c7344bab1676fdd7257a63b3a-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" (UID: \"ece36b2c7344bab1676fdd7257a63b3a\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:37.364984 kubelet[3044]: I1216 13:05:37.364957 3044 apiserver.go:52] "Watching apiserver" Dec 16 13:05:37.375395 sudo[2089]: pam_unix(sudo:session): session closed for user root Dec 16 13:05:37.385579 kubelet[3044]: I1216 13:05:37.385546 3044 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:05:37.415751 kubelet[3044]: I1216 13:05:37.415526 3044 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:37.415751 kubelet[3044]: I1216 13:05:37.415666 3044 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:37.429102 kubelet[3044]: I1216 13:05:37.428939 3044 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:37.429102 kubelet[3044]: E1216 13:05:37.428976 3044 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-371cb0f3db\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:37.429227 kubelet[3044]: I1216 13:05:37.429219 3044 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 13:05:37.429283 
kubelet[3044]: E1216 13:05:37.429276 3044 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-371cb0f3db\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" Dec 16 13:05:37.461840 sshd[2088]: Connection closed by 10.200.16.10 port 38548 Dec 16 13:05:37.463714 sshd-session[2073]: pam_unix(sshd:session): session closed for user core Dec 16 13:05:37.467747 systemd[1]: sshd@4-10.200.0.41:22-10.200.16.10:38548.service: Deactivated successfully. Dec 16 13:05:37.470179 kubelet[3044]: I1216 13:05:37.470134 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-371cb0f3db" podStartSLOduration=1.470122531 podStartE2EDuration="1.470122531s" podCreationTimestamp="2025-12-16 13:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:37.449425648 +0000 UTC m=+1.145255591" watchObservedRunningTime="2025-12-16 13:05:37.470122531 +0000 UTC m=+1.165952501" Dec 16 13:05:37.470436 kubelet[3044]: I1216 13:05:37.470316 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-371cb0f3db" podStartSLOduration=1.470308888 podStartE2EDuration="1.470308888s" podCreationTimestamp="2025-12-16 13:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:37.470077989 +0000 UTC m=+1.165907915" watchObservedRunningTime="2025-12-16 13:05:37.470308888 +0000 UTC m=+1.166138820" Dec 16 13:05:37.471336 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:05:37.471833 systemd[1]: session-7.scope: Consumed 2.787s CPU time, 232M memory peak. Dec 16 13:05:37.475031 systemd-logind[1687]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:05:37.476208 systemd-logind[1687]: Removed session 7. Dec 16 13:05:37.503555 kubelet[3044]: I1216 13:05:37.503523 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-371cb0f3db" podStartSLOduration=1.503512836 podStartE2EDuration="1.503512836s" podCreationTimestamp="2025-12-16 13:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:37.491262056 +0000 UTC m=+1.187091973" watchObservedRunningTime="2025-12-16 13:05:37.503512836 +0000 UTC m=+1.199342766" Dec 16 13:05:40.848883 kubelet[3044]: I1216 13:05:40.848860 3044 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:05:40.849461 kubelet[3044]: I1216 13:05:40.849309 3044 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:05:40.849490 containerd[1723]: time="2025-12-16T13:05:40.849161559Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:05:40.923143 update_engine[1688]: I20251216 13:05:40.923096 1688 update_attempter.cc:509] Updating boot flags... 
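The 192.168.0.0/24 pod CIDR pushed to the runtime here comes from the node object's spec.podCIDR, typically allocated by the controller-manager's node IPAM once the node registers; containerd keeps waiting because nothing has written a CNI config under /etc/cni/net.d yet, which is the job of the flannel DaemonSet set up below. A quick way to confirm the allocation, assuming an admin kubeconfig is available on the node:

    kubectl get node ci-4459.2.2-a-371cb0f3db -o jsonpath='{.spec.podCIDR}{"\n"}'
    # expected output for this node: 192.168.0.0/24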
Dec 16 13:05:41.819820 kubelet[3044]: I1216 13:05:41.819373 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/cabd1fee-909f-4334-bf00-6d04aa84e48e-flannel-cfg\") pod \"kube-flannel-ds-rn84s\" (UID: \"cabd1fee-909f-4334-bf00-6d04aa84e48e\") " pod="kube-flannel/kube-flannel-ds-rn84s" Dec 16 13:05:41.819820 kubelet[3044]: I1216 13:05:41.819403 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/cabd1fee-909f-4334-bf00-6d04aa84e48e-cni\") pod \"kube-flannel-ds-rn84s\" (UID: \"cabd1fee-909f-4334-bf00-6d04aa84e48e\") " pod="kube-flannel/kube-flannel-ds-rn84s" Dec 16 13:05:41.819820 kubelet[3044]: I1216 13:05:41.819422 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/cabd1fee-909f-4334-bf00-6d04aa84e48e-cni-plugin\") pod \"kube-flannel-ds-rn84s\" (UID: \"cabd1fee-909f-4334-bf00-6d04aa84e48e\") " pod="kube-flannel/kube-flannel-ds-rn84s" Dec 16 13:05:41.819820 kubelet[3044]: I1216 13:05:41.819435 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cabd1fee-909f-4334-bf00-6d04aa84e48e-xtables-lock\") pod \"kube-flannel-ds-rn84s\" (UID: \"cabd1fee-909f-4334-bf00-6d04aa84e48e\") " pod="kube-flannel/kube-flannel-ds-rn84s" Dec 16 13:05:41.819820 kubelet[3044]: I1216 13:05:41.819452 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqgc\" (UniqueName: \"kubernetes.io/projected/cabd1fee-909f-4334-bf00-6d04aa84e48e-kube-api-access-cwqgc\") pod \"kube-flannel-ds-rn84s\" (UID: \"cabd1fee-909f-4334-bf00-6d04aa84e48e\") " pod="kube-flannel/kube-flannel-ds-rn84s" Dec 16 13:05:41.820038 kubelet[3044]: I1216 13:05:41.819467 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cabd1fee-909f-4334-bf00-6d04aa84e48e-run\") pod \"kube-flannel-ds-rn84s\" (UID: \"cabd1fee-909f-4334-bf00-6d04aa84e48e\") " pod="kube-flannel/kube-flannel-ds-rn84s" Dec 16 13:05:41.824877 systemd[1]: Created slice kubepods-burstable-podcabd1fee_909f_4334_bf00_6d04aa84e48e.slice - libcontainer container kubepods-burstable-podcabd1fee_909f_4334_bf00_6d04aa84e48e.slice. Dec 16 13:05:41.837008 systemd[1]: Created slice kubepods-besteffort-pod91169430_4400_4c10_9d3b_6339f54d367a.slice - libcontainer container kubepods-besteffort-pod91169430_4400_4c10_9d3b_6339f54d367a.slice. 
Dec 16 13:05:41.920019 kubelet[3044]: I1216 13:05:41.920000 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91169430-4400-4c10-9d3b-6339f54d367a-xtables-lock\") pod \"kube-proxy-6qw62\" (UID: \"91169430-4400-4c10-9d3b-6339f54d367a\") " pod="kube-system/kube-proxy-6qw62" Dec 16 13:05:41.920019 kubelet[3044]: I1216 13:05:41.920022 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91169430-4400-4c10-9d3b-6339f54d367a-lib-modules\") pod \"kube-proxy-6qw62\" (UID: \"91169430-4400-4c10-9d3b-6339f54d367a\") " pod="kube-system/kube-proxy-6qw62" Dec 16 13:05:41.920304 kubelet[3044]: I1216 13:05:41.920045 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91169430-4400-4c10-9d3b-6339f54d367a-kube-proxy\") pod \"kube-proxy-6qw62\" (UID: \"91169430-4400-4c10-9d3b-6339f54d367a\") " pod="kube-system/kube-proxy-6qw62" Dec 16 13:05:41.920304 kubelet[3044]: I1216 13:05:41.920061 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbztr\" (UniqueName: \"kubernetes.io/projected/91169430-4400-4c10-9d3b-6339f54d367a-kube-api-access-mbztr\") pod \"kube-proxy-6qw62\" (UID: \"91169430-4400-4c10-9d3b-6339f54d367a\") " pod="kube-system/kube-proxy-6qw62" Dec 16 13:05:42.128377 containerd[1723]: time="2025-12-16T13:05:42.128310193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rn84s,Uid:cabd1fee-909f-4334-bf00-6d04aa84e48e,Namespace:kube-flannel,Attempt:0,}" Dec 16 13:05:42.152753 containerd[1723]: time="2025-12-16T13:05:42.152709404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6qw62,Uid:91169430-4400-4c10-9d3b-6339f54d367a,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:42.168364 containerd[1723]: time="2025-12-16T13:05:42.168340380Z" level=info msg="connecting to shim f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694" address="unix:///run/containerd/s/4753dbc9929910f1e8b3d80f578679cb164069bf3454e23b3306528de4b2a74f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:42.194995 systemd[1]: Started cri-containerd-f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694.scope - libcontainer container f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694. Dec 16 13:05:42.200541 containerd[1723]: time="2025-12-16T13:05:42.199685998Z" level=info msg="connecting to shim 07154521d52b79777252b343d2ab79561b4620eed898231e715e2270610f93d0" address="unix:///run/containerd/s/4c4810c61d8c454eef20557ae078fe5ac800802cf5c506ccb2e99fa607581935" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:05:42.222893 systemd[1]: Started cri-containerd-07154521d52b79777252b343d2ab79561b4620eed898231e715e2270610f93d0.scope - libcontainer container 07154521d52b79777252b343d2ab79561b4620eed898231e715e2270610f93d0. 
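Each RunPodSandbox request above produces a sandbox (pause) container run under a containerd shim, visible both as the cri-containerd-*.scope units started here and through the CRI API. With crictl pointed at the containerd socket (typically /run/containerd/containerd.sock), the two sandboxes can be listed and inspected, for example:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl inspectp f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694   # the flannel sandbox id returned below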
Dec 16 13:05:42.256453 containerd[1723]: time="2025-12-16T13:05:42.256438503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rn84s,Uid:cabd1fee-909f-4334-bf00-6d04aa84e48e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694\"" Dec 16 13:05:42.257486 containerd[1723]: time="2025-12-16T13:05:42.257469277Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Dec 16 13:05:42.261310 containerd[1723]: time="2025-12-16T13:05:42.261293396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6qw62,Uid:91169430-4400-4c10-9d3b-6339f54d367a,Namespace:kube-system,Attempt:0,} returns sandbox id \"07154521d52b79777252b343d2ab79561b4620eed898231e715e2270610f93d0\"" Dec 16 13:05:42.275320 containerd[1723]: time="2025-12-16T13:05:42.275298390Z" level=info msg="CreateContainer within sandbox \"07154521d52b79777252b343d2ab79561b4620eed898231e715e2270610f93d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:05:42.292557 containerd[1723]: time="2025-12-16T13:05:42.292536098Z" level=info msg="Container ef47cf91d2385b512fc1468a8f2eb1b31b02943dcc858c4be930b2c9e58f194a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:42.306092 containerd[1723]: time="2025-12-16T13:05:42.306072219Z" level=info msg="CreateContainer within sandbox \"07154521d52b79777252b343d2ab79561b4620eed898231e715e2270610f93d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef47cf91d2385b512fc1468a8f2eb1b31b02943dcc858c4be930b2c9e58f194a\"" Dec 16 13:05:42.307078 containerd[1723]: time="2025-12-16T13:05:42.306592394Z" level=info msg="StartContainer for \"ef47cf91d2385b512fc1468a8f2eb1b31b02943dcc858c4be930b2c9e58f194a\"" Dec 16 13:05:42.307821 containerd[1723]: time="2025-12-16T13:05:42.307799249Z" level=info msg="connecting to shim ef47cf91d2385b512fc1468a8f2eb1b31b02943dcc858c4be930b2c9e58f194a" address="unix:///run/containerd/s/4c4810c61d8c454eef20557ae078fe5ac800802cf5c506ccb2e99fa607581935" protocol=ttrpc version=3 Dec 16 13:05:42.324902 systemd[1]: Started cri-containerd-ef47cf91d2385b512fc1468a8f2eb1b31b02943dcc858c4be930b2c9e58f194a.scope - libcontainer container ef47cf91d2385b512fc1468a8f2eb1b31b02943dcc858c4be930b2c9e58f194a. Dec 16 13:05:42.384833 containerd[1723]: time="2025-12-16T13:05:42.384748594Z" level=info msg="StartContainer for \"ef47cf91d2385b512fc1468a8f2eb1b31b02943dcc858c4be930b2c9e58f194a\" returns successfully" Dec 16 13:05:42.443358 kubelet[3044]: I1216 13:05:42.443073 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6qw62" podStartSLOduration=1.443061122 podStartE2EDuration="1.443061122s" podCreationTimestamp="2025-12-16 13:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:42.442678191 +0000 UTC m=+6.138508117" watchObservedRunningTime="2025-12-16 13:05:42.443061122 +0000 UTC m=+6.138891050" Dec 16 13:05:43.990120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164135154.mount: Deactivated successfully. 
Dec 16 13:05:44.052452 containerd[1723]: time="2025-12-16T13:05:44.052425883Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:44.054747 containerd[1723]: time="2025-12-16T13:05:44.054718026Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Dec 16 13:05:44.057838 containerd[1723]: time="2025-12-16T13:05:44.057806251Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:44.061805 containerd[1723]: time="2025-12-16T13:05:44.061482308Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:44.062309 containerd[1723]: time="2025-12-16T13:05:44.062080149Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.804516009s" Dec 16 13:05:44.062309 containerd[1723]: time="2025-12-16T13:05:44.062104140Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Dec 16 13:05:44.068043 containerd[1723]: time="2025-12-16T13:05:44.068021620Z" level=info msg="CreateContainer within sandbox \"f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 16 13:05:44.086606 containerd[1723]: time="2025-12-16T13:05:44.086175138Z" level=info msg="Container adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:44.101038 containerd[1723]: time="2025-12-16T13:05:44.101016723Z" level=info msg="CreateContainer within sandbox \"f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4\"" Dec 16 13:05:44.101455 containerd[1723]: time="2025-12-16T13:05:44.101355987Z" level=info msg="StartContainer for \"adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4\"" Dec 16 13:05:44.102187 containerd[1723]: time="2025-12-16T13:05:44.102156460Z" level=info msg="connecting to shim adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4" address="unix:///run/containerd/s/4753dbc9929910f1e8b3d80f578679cb164069bf3454e23b3306528de4b2a74f" protocol=ttrpc version=3 Dec 16 13:05:44.119912 systemd[1]: Started cri-containerd-adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4.scope - libcontainer container adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4. Dec 16 13:05:44.139143 systemd[1]: cri-containerd-adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4.scope: Deactivated successfully. 
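adf2b9eb... is the flannel DaemonSet's first init container, install-cni-plugin. In the stock kube-flannel manifest it only copies the bundled CNI binary into the host path mounted at /opt/cni/bin and exits, roughly:

    # paraphrased from the upstream kube-flannel.yml init container
    cp -f /flannel /opt/cni/bin/flannel

So the cri-containerd scope deactivating within milliseconds of starting is the expected exit of a short-lived init container, not a crash; the matching exit event and "returns successfully" message follow just below.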
Dec 16 13:05:44.143240 containerd[1723]: time="2025-12-16T13:05:44.143208197Z" level=info msg="received container exit event container_id:\"adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4\" id:\"adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4\" pid:3406 exited_at:{seconds:1765890344 nanos:140295788}" Dec 16 13:05:44.144459 containerd[1723]: time="2025-12-16T13:05:44.144440027Z" level=info msg="StartContainer for \"adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4\" returns successfully" Dec 16 13:05:44.157963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adf2b9ebf8b2f8c9c1e82564667a640375b04012fb4221e427680b94a37f51e4-rootfs.mount: Deactivated successfully. Dec 16 13:05:46.435180 containerd[1723]: time="2025-12-16T13:05:46.435151886Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Dec 16 13:05:49.000436 containerd[1723]: time="2025-12-16T13:05:49.000402718Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:49.004678 containerd[1723]: time="2025-12-16T13:05:49.004448890Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Dec 16 13:05:49.007387 containerd[1723]: time="2025-12-16T13:05:49.007365971Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:49.011457 containerd[1723]: time="2025-12-16T13:05:49.011433116Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:49.012201 containerd[1723]: time="2025-12-16T13:05:49.012183086Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 2.576990127s" Dec 16 13:05:49.012304 containerd[1723]: time="2025-12-16T13:05:49.012258565Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Dec 16 13:05:49.019114 containerd[1723]: time="2025-12-16T13:05:49.019075913Z" level=info msg="CreateContainer within sandbox \"f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:05:49.040534 containerd[1723]: time="2025-12-16T13:05:49.040079026Z" level=info msg="Container f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:49.053431 containerd[1723]: time="2025-12-16T13:05:49.053407861Z" level=info msg="CreateContainer within sandbox \"f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd\"" Dec 16 13:05:49.053906 containerd[1723]: time="2025-12-16T13:05:49.053881534Z" level=info msg="StartContainer for \"f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd\"" Dec 16 13:05:49.054441 containerd[1723]: 
time="2025-12-16T13:05:49.054422421Z" level=info msg="connecting to shim f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd" address="unix:///run/containerd/s/4753dbc9929910f1e8b3d80f578679cb164069bf3454e23b3306528de4b2a74f" protocol=ttrpc version=3 Dec 16 13:05:49.077911 systemd[1]: Started cri-containerd-f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd.scope - libcontainer container f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd. Dec 16 13:05:49.098622 systemd[1]: cri-containerd-f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd.scope: Deactivated successfully. Dec 16 13:05:49.103771 containerd[1723]: time="2025-12-16T13:05:49.103736633Z" level=info msg="received container exit event container_id:\"f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd\" id:\"f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd\" pid:3483 exited_at:{seconds:1765890349 nanos:99546872}" Dec 16 13:05:49.106711 containerd[1723]: time="2025-12-16T13:05:49.106320284Z" level=info msg="StartContainer for \"f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd\" returns successfully" Dec 16 13:05:49.119488 kubelet[3044]: I1216 13:05:49.119474 3044 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:05:49.125988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f01d3152f29df030b32d6e4aa7ce225251498fddc3e9f289d273d0e717005ecd-rootfs.mount: Deactivated successfully. Dec 16 13:05:49.238355 systemd[1]: Created slice kubepods-burstable-pod9e963575_ea3f_424f_93b9_1f67f3dfc750.slice - libcontainer container kubepods-burstable-pod9e963575_ea3f_424f_93b9_1f67f3dfc750.slice. Dec 16 13:05:49.243940 systemd[1]: Created slice kubepods-burstable-podfd703ac8_9ce5_47d4_8251_45440c7a541b.slice - libcontainer container kubepods-burstable-podfd703ac8_9ce5_47d4_8251_45440c7a541b.slice. 
Dec 16 13:05:49.266231 kubelet[3044]: I1216 13:05:49.266119 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd703ac8-9ce5-47d4-8251-45440c7a541b-config-volume\") pod \"coredns-674b8bbfcf-7j8cx\" (UID: \"fd703ac8-9ce5-47d4-8251-45440c7a541b\") " pod="kube-system/coredns-674b8bbfcf-7j8cx" Dec 16 13:05:49.266231 kubelet[3044]: I1216 13:05:49.266145 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e963575-ea3f-424f-93b9-1f67f3dfc750-config-volume\") pod \"coredns-674b8bbfcf-jfggp\" (UID: \"9e963575-ea3f-424f-93b9-1f67f3dfc750\") " pod="kube-system/coredns-674b8bbfcf-jfggp" Dec 16 13:05:49.266231 kubelet[3044]: I1216 13:05:49.266162 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p7tj\" (UniqueName: \"kubernetes.io/projected/fd703ac8-9ce5-47d4-8251-45440c7a541b-kube-api-access-4p7tj\") pod \"coredns-674b8bbfcf-7j8cx\" (UID: \"fd703ac8-9ce5-47d4-8251-45440c7a541b\") " pod="kube-system/coredns-674b8bbfcf-7j8cx" Dec 16 13:05:49.266231 kubelet[3044]: I1216 13:05:49.266177 3044 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwfh9\" (UniqueName: \"kubernetes.io/projected/9e963575-ea3f-424f-93b9-1f67f3dfc750-kube-api-access-cwfh9\") pod \"coredns-674b8bbfcf-jfggp\" (UID: \"9e963575-ea3f-424f-93b9-1f67f3dfc750\") " pod="kube-system/coredns-674b8bbfcf-jfggp" Dec 16 13:05:49.544328 containerd[1723]: time="2025-12-16T13:05:49.544275628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfggp,Uid:9e963575-ea3f-424f-93b9-1f67f3dfc750,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:49.549034 containerd[1723]: time="2025-12-16T13:05:49.549001880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7j8cx,Uid:fd703ac8-9ce5-47d4-8251-45440c7a541b,Namespace:kube-system,Attempt:0,}" Dec 16 13:05:49.603797 containerd[1723]: time="2025-12-16T13:05:49.603706995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfggp,Uid:9e963575-ea3f-424f-93b9-1f67f3dfc750,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc9078037f7b0a0b4cf317442f48ca630005049a16d36aa819bc85b066aec13e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 13:05:49.603893 kubelet[3044]: E1216 13:05:49.603867 3044 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc9078037f7b0a0b4cf317442f48ca630005049a16d36aa819bc85b066aec13e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 13:05:49.603928 kubelet[3044]: E1216 13:05:49.603911 3044 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc9078037f7b0a0b4cf317442f48ca630005049a16d36aa819bc85b066aec13e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-jfggp" Dec 16 13:05:49.603964 kubelet[3044]: E1216 13:05:49.603927 3044 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc9078037f7b0a0b4cf317442f48ca630005049a16d36aa819bc85b066aec13e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-jfggp" Dec 16 13:05:49.603986 kubelet[3044]: E1216 13:05:49.603959 3044 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jfggp_kube-system(9e963575-ea3f-424f-93b9-1f67f3dfc750)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jfggp_kube-system(9e963575-ea3f-424f-93b9-1f67f3dfc750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc9078037f7b0a0b4cf317442f48ca630005049a16d36aa819bc85b066aec13e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-jfggp" podUID="9e963575-ea3f-424f-93b9-1f67f3dfc750" Dec 16 13:05:49.608148 containerd[1723]: time="2025-12-16T13:05:49.608116158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7j8cx,Uid:fd703ac8-9ce5-47d4-8251-45440c7a541b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8281eb3fccce191c365ae3b87cdf44434180808809709ba2a8e9c389af34698a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 13:05:49.608282 kubelet[3044]: E1216 13:05:49.608256 3044 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8281eb3fccce191c365ae3b87cdf44434180808809709ba2a8e9c389af34698a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 13:05:49.608318 kubelet[3044]: E1216 13:05:49.608292 3044 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8281eb3fccce191c365ae3b87cdf44434180808809709ba2a8e9c389af34698a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-7j8cx" Dec 16 13:05:49.608318 kubelet[3044]: E1216 13:05:49.608310 3044 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8281eb3fccce191c365ae3b87cdf44434180808809709ba2a8e9c389af34698a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-7j8cx" Dec 16 13:05:49.608361 kubelet[3044]: E1216 13:05:49.608341 3044 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7j8cx_kube-system(fd703ac8-9ce5-47d4-8251-45440c7a541b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7j8cx_kube-system(fd703ac8-9ce5-47d4-8251-45440c7a541b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8281eb3fccce191c365ae3b87cdf44434180808809709ba2a8e9c389af34698a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" 
pod="kube-system/coredns-674b8bbfcf-7j8cx" podUID="fd703ac8-9ce5-47d4-8251-45440c7a541b" Dec 16 13:05:50.452116 containerd[1723]: time="2025-12-16T13:05:50.452090282Z" level=info msg="CreateContainer within sandbox \"f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 16 13:05:50.474767 containerd[1723]: time="2025-12-16T13:05:50.474746620Z" level=info msg="Container 6bb56579a8a87a539f42e0a07bac139d5a0096a3952b16ef2ad50390f8db6998: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:50.487515 containerd[1723]: time="2025-12-16T13:05:50.487494848Z" level=info msg="CreateContainer within sandbox \"f398afa36d5952939f675b690a079299b5a2fee3dd2b88784ff6260ea9a0a694\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"6bb56579a8a87a539f42e0a07bac139d5a0096a3952b16ef2ad50390f8db6998\"" Dec 16 13:05:50.488379 containerd[1723]: time="2025-12-16T13:05:50.487844839Z" level=info msg="StartContainer for \"6bb56579a8a87a539f42e0a07bac139d5a0096a3952b16ef2ad50390f8db6998\"" Dec 16 13:05:50.489096 containerd[1723]: time="2025-12-16T13:05:50.489062961Z" level=info msg="connecting to shim 6bb56579a8a87a539f42e0a07bac139d5a0096a3952b16ef2ad50390f8db6998" address="unix:///run/containerd/s/4753dbc9929910f1e8b3d80f578679cb164069bf3454e23b3306528de4b2a74f" protocol=ttrpc version=3 Dec 16 13:05:50.503910 systemd[1]: Started cri-containerd-6bb56579a8a87a539f42e0a07bac139d5a0096a3952b16ef2ad50390f8db6998.scope - libcontainer container 6bb56579a8a87a539f42e0a07bac139d5a0096a3952b16ef2ad50390f8db6998. Dec 16 13:05:50.541076 containerd[1723]: time="2025-12-16T13:05:50.541052131Z" level=info msg="StartContainer for \"6bb56579a8a87a539f42e0a07bac139d5a0096a3952b16ef2ad50390f8db6998\" returns successfully" Dec 16 13:05:51.661491 systemd-networkd[1333]: flannel.1: Link UP Dec 16 13:05:51.661496 systemd-networkd[1333]: flannel.1: Gained carrier Dec 16 13:05:52.894904 systemd-networkd[1333]: flannel.1: Gained IPv6LL Dec 16 13:06:02.390757 containerd[1723]: time="2025-12-16T13:06:02.390409719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7j8cx,Uid:fd703ac8-9ce5-47d4-8251-45440c7a541b,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:02.390757 containerd[1723]: time="2025-12-16T13:06:02.390638954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfggp,Uid:9e963575-ea3f-424f-93b9-1f67f3dfc750,Namespace:kube-system,Attempt:0,}" Dec 16 13:06:02.418752 systemd-networkd[1333]: cni0: Link UP Dec 16 13:06:02.418757 systemd-networkd[1333]: cni0: Gained carrier Dec 16 13:06:02.420895 systemd-networkd[1333]: cni0: Lost carrier Dec 16 13:06:02.462744 systemd-networkd[1333]: veth4d179a2c: Link UP Dec 16 13:06:02.465085 kernel: cni0: port 1(veth4d179a2c) entered blocking state Dec 16 13:06:02.465143 kernel: cni0: port 1(veth4d179a2c) entered disabled state Dec 16 13:06:02.466204 kernel: veth4d179a2c: entered allmulticast mode Dec 16 13:06:02.466871 kernel: veth4d179a2c: entered promiscuous mode Dec 16 13:06:02.467262 systemd-networkd[1333]: veth5b53a25e: Link UP Dec 16 13:06:02.469273 kernel: cni0: port 2(veth5b53a25e) entered blocking state Dec 16 13:06:02.469320 kernel: cni0: port 2(veth5b53a25e) entered disabled state Dec 16 13:06:02.469924 kernel: veth5b53a25e: entered allmulticast mode Dec 16 13:06:02.470820 kernel: veth5b53a25e: entered promiscuous mode Dec 16 13:06:02.480689 systemd-networkd[1333]: veth5b53a25e: Gained carrier Dec 16 13:06:02.480914 kernel: 
cni0: port 2(veth5b53a25e) entered blocking state Dec 16 13:06:02.480943 kernel: cni0: port 2(veth5b53a25e) entered forwarding state Dec 16 13:06:02.481392 systemd-networkd[1333]: cni0: Gained carrier Dec 16 13:06:02.483289 containerd[1723]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Dec 16 13:06:02.483289 containerd[1723]: delegateAdd: netconf sent to delegate plugin: Dec 16 13:06:02.484326 kernel: cni0: port 1(veth4d179a2c) entered blocking state Dec 16 13:06:02.484365 kernel: cni0: port 1(veth4d179a2c) entered forwarding state Dec 16 13:06:02.484391 systemd-networkd[1333]: veth4d179a2c: Gained carrier Dec 16 13:06:02.486207 containerd[1723]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Dec 16 13:06:02.486207 containerd[1723]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000082950), "name":"cbr0", "type":"bridge"} Dec 16 13:06:02.486207 containerd[1723]: delegateAdd: netconf sent to delegate plugin: Dec 16 13:06:02.545818 containerd[1723]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-16T13:06:02.545791481Z" level=info msg="connecting to shim 46c3d6b812aad0e10f916e937be9e9a52902639e71380ed22c85ef421b4a70a3" address="unix:///run/containerd/s/5710980fcb2c395c878be5cd715727e82745c282348b2e27ca96eac278abc68c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:02.562803 containerd[1723]: time="2025-12-16T13:06:02.562465001Z" level=info msg="connecting to shim 2438a2d44153d07d78ed05324d7cb4b0500751f0d8d8e84f3f041558eaf4966a" address="unix:///run/containerd/s/c750fe7936ccf6cacb17dd1f3684fa1c5772c4b55b0cda1335d10e6ad4c6b48f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:06:02.576938 systemd[1]: Started cri-containerd-46c3d6b812aad0e10f916e937be9e9a52902639e71380ed22c85ef421b4a70a3.scope - libcontainer container 46c3d6b812aad0e10f916e937be9e9a52902639e71380ed22c85ef421b4a70a3. Dec 16 13:06:02.580426 systemd[1]: Started cri-containerd-2438a2d44153d07d78ed05324d7cb4b0500751f0d8d8e84f3f041558eaf4966a.scope - libcontainer container 2438a2d44153d07d78ed05324d7cb4b0500751f0d8d8e84f3f041558eaf4966a. 
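The delegate config printed above is what the flannel CNI plugin synthesizes from /run/flannel/subnet.env, the file whose absence caused the sandbox failures at 13:05:49 and which flanneld wrote once the kube-flannel container came up. Reconstructed from the values visible in the log (subnet 192.168.0.0/24, a route to the 192.168.0.0/17 overlay network, MTU 1450), the file plausibly reads:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true

FLANNEL_IPMASQ=true is an inference: the delegate shows ipMasq:false, which flannel sets when it handles masquerading itself. The 1450 MTU is the usual 1500 minus VXLAN encapsulation overhead.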
Dec 16 13:06:02.626928 containerd[1723]: time="2025-12-16T13:06:02.626907982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jfggp,Uid:9e963575-ea3f-424f-93b9-1f67f3dfc750,Namespace:kube-system,Attempt:0,} returns sandbox id \"46c3d6b812aad0e10f916e937be9e9a52902639e71380ed22c85ef421b4a70a3\"" Dec 16 13:06:02.629921 containerd[1723]: time="2025-12-16T13:06:02.629872040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7j8cx,Uid:fd703ac8-9ce5-47d4-8251-45440c7a541b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2438a2d44153d07d78ed05324d7cb4b0500751f0d8d8e84f3f041558eaf4966a\"" Dec 16 13:06:02.634969 containerd[1723]: time="2025-12-16T13:06:02.634931826Z" level=info msg="CreateContainer within sandbox \"46c3d6b812aad0e10f916e937be9e9a52902639e71380ed22c85ef421b4a70a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:06:02.642926 containerd[1723]: time="2025-12-16T13:06:02.642865379Z" level=info msg="CreateContainer within sandbox \"2438a2d44153d07d78ed05324d7cb4b0500751f0d8d8e84f3f041558eaf4966a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:06:02.658304 containerd[1723]: time="2025-12-16T13:06:02.658284740Z" level=info msg="Container f6de76c776c00c44df4833a856ebe50da909d83cbbd184a1e097c5497f362e4c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:02.665588 containerd[1723]: time="2025-12-16T13:06:02.665569156Z" level=info msg="Container 193b1a51d95ca65c8e1835e838a53e1c2a8f99cff33895409b7b793ebb41a43c: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:06:02.683558 containerd[1723]: time="2025-12-16T13:06:02.683537129Z" level=info msg="CreateContainer within sandbox \"46c3d6b812aad0e10f916e937be9e9a52902639e71380ed22c85ef421b4a70a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6de76c776c00c44df4833a856ebe50da909d83cbbd184a1e097c5497f362e4c\"" Dec 16 13:06:02.683933 containerd[1723]: time="2025-12-16T13:06:02.683913219Z" level=info msg="StartContainer for \"f6de76c776c00c44df4833a856ebe50da909d83cbbd184a1e097c5497f362e4c\"" Dec 16 13:06:02.684855 containerd[1723]: time="2025-12-16T13:06:02.684831465Z" level=info msg="connecting to shim f6de76c776c00c44df4833a856ebe50da909d83cbbd184a1e097c5497f362e4c" address="unix:///run/containerd/s/5710980fcb2c395c878be5cd715727e82745c282348b2e27ca96eac278abc68c" protocol=ttrpc version=3 Dec 16 13:06:02.687195 containerd[1723]: time="2025-12-16T13:06:02.687100554Z" level=info msg="CreateContainer within sandbox \"2438a2d44153d07d78ed05324d7cb4b0500751f0d8d8e84f3f041558eaf4966a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"193b1a51d95ca65c8e1835e838a53e1c2a8f99cff33895409b7b793ebb41a43c\"" Dec 16 13:06:02.688198 containerd[1723]: time="2025-12-16T13:06:02.688177277Z" level=info msg="StartContainer for \"193b1a51d95ca65c8e1835e838a53e1c2a8f99cff33895409b7b793ebb41a43c\"" Dec 16 13:06:02.690447 containerd[1723]: time="2025-12-16T13:06:02.690421146Z" level=info msg="connecting to shim 193b1a51d95ca65c8e1835e838a53e1c2a8f99cff33895409b7b793ebb41a43c" address="unix:///run/containerd/s/c750fe7936ccf6cacb17dd1f3684fa1c5772c4b55b0cda1335d10e6ad4c6b48f" protocol=ttrpc version=3 Dec 16 13:06:02.703871 systemd[1]: Started cri-containerd-f6de76c776c00c44df4833a856ebe50da909d83cbbd184a1e097c5497f362e4c.scope - libcontainer container f6de76c776c00c44df4833a856ebe50da909d83cbbd184a1e097c5497f362e4c. 
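With the overlay in place, the two coredns sandboxes that failed at 13:05:49 are recreated successfully and the coredns containers start below. Cluster DNS should now be serviceable, which can be sanity-checked from the node, assuming an admin kubeconfig and the usual k8s-app=kube-dns label on the coredns pods:

    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    # both coredns-674b8bbfcf-* pods should report Running with 192.168.0.x addresses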
Dec 16 13:06:02.706753 systemd[1]: Started cri-containerd-193b1a51d95ca65c8e1835e838a53e1c2a8f99cff33895409b7b793ebb41a43c.scope - libcontainer container 193b1a51d95ca65c8e1835e838a53e1c2a8f99cff33895409b7b793ebb41a43c. Dec 16 13:06:02.741351 containerd[1723]: time="2025-12-16T13:06:02.741111695Z" level=info msg="StartContainer for \"193b1a51d95ca65c8e1835e838a53e1c2a8f99cff33895409b7b793ebb41a43c\" returns successfully" Dec 16 13:06:02.741662 containerd[1723]: time="2025-12-16T13:06:02.741618415Z" level=info msg="StartContainer for \"f6de76c776c00c44df4833a856ebe50da909d83cbbd184a1e097c5497f362e4c\" returns successfully" Dec 16 13:06:03.480990 kubelet[3044]: I1216 13:06:03.480755 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7j8cx" podStartSLOduration=22.480729024 podStartE2EDuration="22.480729024s" podCreationTimestamp="2025-12-16 13:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:03.480584342 +0000 UTC m=+27.176414271" watchObservedRunningTime="2025-12-16 13:06:03.480729024 +0000 UTC m=+27.176558955" Dec 16 13:06:03.481259 kubelet[3044]: I1216 13:06:03.481213 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rn84s" podStartSLOduration=15.725049378 podStartE2EDuration="22.481199346s" podCreationTimestamp="2025-12-16 13:05:41 +0000 UTC" firstStartedPulling="2025-12-16 13:05:42.257215851 +0000 UTC m=+5.953045774" lastFinishedPulling="2025-12-16 13:05:49.013365826 +0000 UTC m=+12.709195742" observedRunningTime="2025-12-16 13:05:51.45799547 +0000 UTC m=+15.153825399" watchObservedRunningTime="2025-12-16 13:06:03.481199346 +0000 UTC m=+27.177029275" Dec 16 13:06:03.517727 kubelet[3044]: I1216 13:06:03.517649 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jfggp" podStartSLOduration=22.517636757 podStartE2EDuration="22.517636757s" podCreationTimestamp="2025-12-16 13:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:03.49550477 +0000 UTC m=+27.191334705" watchObservedRunningTime="2025-12-16 13:06:03.517636757 +0000 UTC m=+27.213466687" Dec 16 13:06:03.710890 systemd-networkd[1333]: veth4d179a2c: Gained IPv6LL Dec 16 13:06:04.222918 systemd-networkd[1333]: veth5b53a25e: Gained IPv6LL Dec 16 13:06:04.414886 systemd-networkd[1333]: cni0: Gained IPv6LL Dec 16 13:07:06.820240 systemd[1]: Started sshd@5-10.200.0.41:22-10.200.16.10:42676.service - OpenSSH per-connection server daemon (10.200.16.10:42676). Dec 16 13:07:07.369112 sshd[4185]: Accepted publickey for core from 10.200.16.10 port 42676 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:07.369956 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:07.373558 systemd-logind[1687]: New session 8 of user core. Dec 16 13:07:07.381902 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:07:07.809762 sshd[4202]: Connection closed by 10.200.16.10 port 42676 Dec 16 13:07:07.810091 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:07.812407 systemd[1]: sshd@5-10.200.0.41:22-10.200.16.10:42676.service: Deactivated successfully. Dec 16 13:07:07.813829 systemd[1]: session-8.scope: Deactivated successfully. 
Dec 16 13:07:07.814442 systemd-logind[1687]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:07:07.815643 systemd-logind[1687]: Removed session 8. Dec 16 13:07:09.684850 waagent[1891]: 2025-12-16T13:07:09.684773Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Dec 16 13:07:09.697476 waagent[1891]: 2025-12-16T13:07:09.697440Z INFO ExtHandler Dec 16 13:07:09.697551 waagent[1891]: 2025-12-16T13:07:09.697493Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Dec 16 13:07:09.741023 waagent[1891]: 2025-12-16T13:07:09.740998Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 13:07:09.795248 waagent[1891]: 2025-12-16T13:07:09.795204Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C636EE4AF2EE60B8FE71D23A6AC7166718C7360B', 'hasPrivateKey': True} Dec 16 13:07:09.795539 waagent[1891]: 2025-12-16T13:07:09.795515Z INFO ExtHandler Fetch goal state completed Dec 16 13:07:09.795764 waagent[1891]: 2025-12-16T13:07:09.795745Z INFO ExtHandler ExtHandler Dec 16 13:07:09.795814 waagent[1891]: 2025-12-16T13:07:09.795799Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: d2c3242d-ab74-4fd8-90ac-3368adafae5d correlation fda29dd0-ffe6-4e36-acdd-223fe263c130 created: 2025-12-16T13:07:07.132912Z] Dec 16 13:07:09.795991 waagent[1891]: 2025-12-16T13:07:09.795973Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 13:07:09.796281 waagent[1891]: 2025-12-16T13:07:09.796262Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Dec 16 13:07:12.906251 systemd[1]: Started sshd@6-10.200.0.41:22-10.200.16.10:56284.service - OpenSSH per-connection server daemon (10.200.16.10:56284). Dec 16 13:07:13.453301 sshd[4243]: Accepted publickey for core from 10.200.16.10 port 56284 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY Dec 16 13:07:13.454151 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:07:13.457699 systemd-logind[1687]: New session 9 of user core. Dec 16 13:07:13.464908 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:07:13.879383 sshd[4246]: Connection closed by 10.200.16.10 port 56284 Dec 16 13:07:13.879719 sshd-session[4243]: pam_unix(sshd:session): session closed for user core Dec 16 13:07:13.882202 systemd[1]: sshd@6-10.200.0.41:22-10.200.16.10:56284.service: Deactivated successfully. Dec 16 13:07:13.883508 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:07:13.884186 systemd-logind[1687]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:07:13.885267 systemd-logind[1687]: Removed session 9. Dec 16 13:07:15.823811 waagent[1891]: 2025-12-16T13:07:15.823081Z INFO ExtHandler Dec 16 13:07:15.823811 waagent[1891]: 2025-12-16T13:07:15.823194Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a91de1df-7ca9-4511-84e6-2b926e441280 eTag: 10403645345337068068 source: Fabric] Dec 16 13:07:15.823811 waagent[1891]: 2025-12-16T13:07:15.823513Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 16 13:07:18.978096 systemd[1]: Started sshd@7-10.200.0.41:22-10.200.16.10:56292.service - OpenSSH per-connection server daemon (10.200.16.10:56292). 
Dec 16 13:07:19.523444 sshd[4279]: Accepted publickey for core from 10.200.16.10 port 56292 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:19.524213 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:19.527735 systemd-logind[1687]: New session 10 of user core.
Dec 16 13:07:19.533895 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 13:07:19.954384 sshd[4282]: Connection closed by 10.200.16.10 port 56292
Dec 16 13:07:19.954930 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:19.957845 systemd-logind[1687]: Session 10 logged out. Waiting for processes to exit.
Dec 16 13:07:19.958009 systemd[1]: sshd@7-10.200.0.41:22-10.200.16.10:56292.service: Deactivated successfully.
Dec 16 13:07:19.959618 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 13:07:19.961107 systemd-logind[1687]: Removed session 10.
Dec 16 13:07:20.050862 systemd[1]: Started sshd@8-10.200.0.41:22-10.200.16.10:51972.service - OpenSSH per-connection server daemon (10.200.16.10:51972).
Dec 16 13:07:20.596855 sshd[4296]: Accepted publickey for core from 10.200.16.10 port 51972 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:20.597536 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:20.600828 systemd-logind[1687]: New session 11 of user core.
Dec 16 13:07:20.605015 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 13:07:21.046549 sshd[4299]: Connection closed by 10.200.16.10 port 51972
Dec 16 13:07:21.047026 sshd-session[4296]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:21.049385 systemd[1]: sshd@8-10.200.0.41:22-10.200.16.10:51972.service: Deactivated successfully.
Dec 16 13:07:21.050653 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 13:07:21.051286 systemd-logind[1687]: Session 11 logged out. Waiting for processes to exit.
Dec 16 13:07:21.052414 systemd-logind[1687]: Removed session 11.
Dec 16 13:07:21.141834 systemd[1]: Started sshd@9-10.200.0.41:22-10.200.16.10:51986.service - OpenSSH per-connection server daemon (10.200.16.10:51986).
Dec 16 13:07:21.687932 sshd[4309]: Accepted publickey for core from 10.200.16.10 port 51986 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:21.688700 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:21.692054 systemd-logind[1687]: New session 12 of user core.
Dec 16 13:07:21.697900 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 13:07:22.115311 sshd[4312]: Connection closed by 10.200.16.10 port 51986
Dec 16 13:07:22.115616 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:22.117797 systemd-logind[1687]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:07:22.118337 systemd[1]: sshd@9-10.200.0.41:22-10.200.16.10:51986.service: Deactivated successfully.
Dec 16 13:07:22.119497 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:07:22.120761 systemd-logind[1687]: Removed session 12.
Dec 16 13:07:27.216128 systemd[1]: Started sshd@10-10.200.0.41:22-10.200.16.10:51990.service - OpenSSH per-connection server daemon (10.200.16.10:51990).
Dec 16 13:07:27.759439 sshd[4364]: Accepted publickey for core from 10.200.16.10 port 51990 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:27.760266 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:27.764293 systemd-logind[1687]: New session 13 of user core.
Dec 16 13:07:27.772911 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:07:28.187579 sshd[4367]: Connection closed by 10.200.16.10 port 51990
Dec 16 13:07:28.187916 sshd-session[4364]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:28.190535 systemd-logind[1687]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:07:28.190711 systemd[1]: sshd@10-10.200.0.41:22-10.200.16.10:51990.service: Deactivated successfully.
Dec 16 13:07:28.192209 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 13:07:28.193508 systemd-logind[1687]: Removed session 13.
Dec 16 13:07:28.294827 systemd[1]: Started sshd@11-10.200.0.41:22-10.200.16.10:52000.service - OpenSSH per-connection server daemon (10.200.16.10:52000).
Dec 16 13:07:28.837421 sshd[4379]: Accepted publickey for core from 10.200.16.10 port 52000 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:28.838408 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:28.841825 systemd-logind[1687]: New session 14 of user core.
Dec 16 13:07:28.844873 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 13:07:29.300368 sshd[4382]: Connection closed by 10.200.16.10 port 52000
Dec 16 13:07:29.300660 sshd-session[4379]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:29.302919 systemd[1]: sshd@11-10.200.0.41:22-10.200.16.10:52000.service: Deactivated successfully.
Dec 16 13:07:29.304166 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 13:07:29.305072 systemd-logind[1687]: Session 14 logged out. Waiting for processes to exit.
Dec 16 13:07:29.306084 systemd-logind[1687]: Removed session 14.
Dec 16 13:07:29.396766 systemd[1]: Started sshd@12-10.200.0.41:22-10.200.16.10:52008.service - OpenSSH per-connection server daemon (10.200.16.10:52008).
Dec 16 13:07:29.941427 sshd[4392]: Accepted publickey for core from 10.200.16.10 port 52008 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:29.942228 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:29.945596 systemd-logind[1687]: New session 15 of user core.
Dec 16 13:07:29.950897 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:07:30.935017 sshd[4395]: Connection closed by 10.200.16.10 port 52008
Dec 16 13:07:30.934894 sshd-session[4392]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:30.938607 systemd[1]: sshd@12-10.200.0.41:22-10.200.16.10:52008.service: Deactivated successfully.
Dec 16 13:07:30.939758 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:07:30.942116 systemd-logind[1687]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:07:30.942727 systemd-logind[1687]: Removed session 15.
Dec 16 13:07:31.031959 systemd[1]: Started sshd@13-10.200.0.41:22-10.200.16.10:43090.service - OpenSSH per-connection server daemon (10.200.16.10:43090).
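Each login above is handled by a dedicated per-connection systemd unit whose name encodes the endpoints, e.g. sshd@11-10.200.0.41:22-10.200.16.10:52000.service (instance counter, local address:port, remote address:port). A small sketch that splits such a unit name back into its parts; the naming layout is inferred from the entries in this log rather than taken from systemd documentation.

```python
# Sketch: decompose a per-connection sshd unit name as seen in the journal above.
import re

UNIT = re.compile(
    r"^sshd@(?P<n>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service$"
)

def parse_unit(name: str) -> dict[str, str]:
    m = UNIT.match(name)
    if not m:
        raise ValueError(f"not a per-connection sshd unit: {name}")
    return m.groupdict()

print(parse_unit("sshd@11-10.200.0.41:22-10.200.16.10:52000.service"))
# -> {'n': '11', 'laddr': '10.200.0.41', 'lport': '22', 'raddr': '10.200.16.10', 'rport': '52000'}
```

For the entry above this reads as connection 11 arriving from 10.200.16.10:52000 on the listener at 10.200.0.41:22, which matches the "Accepted publickey ... port 52000" line for session 14.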
Dec 16 13:07:31.577798 sshd[4413]: Accepted publickey for core from 10.200.16.10 port 43090 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:31.578546 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:31.581845 systemd-logind[1687]: New session 16 of user core.
Dec 16 13:07:31.589921 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:07:32.089934 sshd[4418]: Connection closed by 10.200.16.10 port 43090
Dec 16 13:07:32.090239 sshd-session[4413]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:32.092453 systemd[1]: sshd@13-10.200.0.41:22-10.200.16.10:43090.service: Deactivated successfully.
Dec 16 13:07:32.093583 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:07:32.094356 systemd-logind[1687]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:07:32.095277 systemd-logind[1687]: Removed session 16.
Dec 16 13:07:32.192002 systemd[1]: Started sshd@14-10.200.0.41:22-10.200.16.10:43100.service - OpenSSH per-connection server daemon (10.200.16.10:43100).
Dec 16 13:07:32.738140 sshd[4448]: Accepted publickey for core from 10.200.16.10 port 43100 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:32.738919 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:32.742145 systemd-logind[1687]: New session 17 of user core.
Dec 16 13:07:32.745958 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:07:33.163150 sshd[4451]: Connection closed by 10.200.16.10 port 43100
Dec 16 13:07:33.163490 sshd-session[4448]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:33.165541 systemd[1]: sshd@14-10.200.0.41:22-10.200.16.10:43100.service: Deactivated successfully.
Dec 16 13:07:33.166749 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:07:33.167589 systemd-logind[1687]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:07:33.168550 systemd-logind[1687]: Removed session 17.
Dec 16 13:07:38.267268 systemd[1]: Started sshd@15-10.200.0.41:22-10.200.16.10:43114.service - OpenSSH per-connection server daemon (10.200.16.10:43114).
Dec 16 13:07:38.812612 sshd[4487]: Accepted publickey for core from 10.200.16.10 port 43114 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:38.813369 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:38.816388 systemd-logind[1687]: New session 18 of user core.
Dec 16 13:07:38.827908 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:07:39.238624 sshd[4490]: Connection closed by 10.200.16.10 port 43114
Dec 16 13:07:39.238999 sshd-session[4487]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:39.240973 systemd[1]: sshd@15-10.200.0.41:22-10.200.16.10:43114.service: Deactivated successfully.
Dec 16 13:07:39.242762 systemd-logind[1687]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:07:39.242879 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:07:39.244515 systemd-logind[1687]: Removed session 18.
Dec 16 13:07:44.342264 systemd[1]: Started sshd@16-10.200.0.41:22-10.200.16.10:60172.service - OpenSSH per-connection server daemon (10.200.16.10:60172).
Dec 16 13:07:44.889662 sshd[4524]: Accepted publickey for core from 10.200.16.10 port 60172 ssh2: RSA SHA256:72HAH21zS0CsiJVpcb9N8kDUf03FTmXjwPA42xyCzNY
Dec 16 13:07:44.890477 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:07:44.894124 systemd-logind[1687]: New session 19 of user core.
Dec 16 13:07:44.897902 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:07:45.319790 sshd[4527]: Connection closed by 10.200.16.10 port 60172
Dec 16 13:07:45.320120 sshd-session[4524]: pam_unix(sshd:session): session closed for user core
Dec 16 13:07:45.322432 systemd[1]: sshd@16-10.200.0.41:22-10.200.16.10:60172.service: Deactivated successfully.
Dec 16 13:07:45.323830 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:07:45.324711 systemd-logind[1687]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:07:45.325484 systemd-logind[1687]: Removed session 19.
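The repeated pattern above (Accepted publickey, session-N.scope started, connection closed, session removed) makes per-session durations easy to recover by pairing the systemd-logind "New session N" and "Removed session N" entries. A hedged sketch that does this pairing on a journal dump in the format shown; the syslog-style timestamp layout and the hard-coded year are assumptions, since these log lines carry no year.

```python
# Sketch: read journal lines on stdin and report how long each logind session lasted.
import re
import sys
from datetime import datetime

NEW = re.compile(r"^(\w{3} \d{1,2} [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user")
END = re.compile(r"^(\w{3} \d{1,2} [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def ts(stamp: str, year: int = 2025) -> datetime:
    # The journal excerpt omits the year, so one is supplied here.
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

opened: dict[str, datetime] = {}
for line in sys.stdin:
    if m := NEW.match(line):
        opened[m.group(2)] = ts(m.group(1))
    elif (m := END.match(line)) and m.group(2) in opened:
        dur = ts(m.group(1)) - opened.pop(m.group(2))
        print(f"session {m.group(2)}: {dur.total_seconds():.1f}s")
```

Fed the excerpt above on stdin, this would report, for example, session 19 lasting roughly 0.4 s (13:07:44.894124 to 13:07:45.325484).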