Jan 23 01:10:58.990910 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:10:58.990937 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:10:58.990950 kernel: BIOS-provided physical RAM map:
Jan 23 01:10:58.990956 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:10:58.990963 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 23 01:10:58.990969 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jan 23 01:10:58.990977 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jan 23 01:10:58.990984 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jan 23 01:10:58.990991 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jan 23 01:10:58.990999 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 23 01:10:58.991006 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 23 01:10:58.991012 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 23 01:10:58.991019 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 23 01:10:58.991025 kernel: printk: legacy bootconsole [earlyser0] enabled
Jan 23 01:10:58.991034 kernel: NX (Execute Disable) protection: active
Jan 23 01:10:58.991042 kernel: APIC: Static calls initialized
Jan 23 01:10:58.991049 kernel: efi: EFI v2.7 by Microsoft
Jan 23 01:10:58.991057 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eac2298 RNG=0x3ffd2018
Jan 23 01:10:58.991064 kernel: random: crng init done
Jan 23 01:10:58.991071 kernel: secureboot: Secure boot disabled
Jan 23 01:10:58.991078 kernel: SMBIOS 3.1.0 present.
Jan 23 01:10:58.991085 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Jan 23 01:10:58.991113 kernel: DMI: Memory slots populated: 2/2
Jan 23 01:10:58.991120 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 23 01:10:58.991127 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jan 23 01:10:58.991134 kernel: Hyper-V: Nested features: 0x3e0101
Jan 23 01:10:58.991143 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 23 01:10:58.991149 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 23 01:10:58.991157 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 01:10:58.991164 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 01:10:58.991171 kernel: tsc: Detected 2299.999 MHz processor
Jan 23 01:10:58.991178 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:10:58.991186 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:10:58.991193 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jan 23 01:10:58.991201 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 01:10:58.991208 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:10:58.991217 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jan 23 01:10:58.991224 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jan 23 01:10:58.991232 kernel: Using GB pages for direct mapping
Jan 23 01:10:58.991239 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:10:58.991250 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 23 01:10:58.991258 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:10:58.991267 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:10:58.991274 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 01:10:58.991282 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 23 01:10:58.991290 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:10:58.991297 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:10:58.991305 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:10:58.991312 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 01:10:58.991321 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 01:10:58.991329 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 01:10:58.991337 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 23 01:10:58.991344 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Jan 23 01:10:58.991352 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 23 01:10:58.991359 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 23 01:10:58.991367 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 23 01:10:58.991375 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 23 01:10:58.991382 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 23 01:10:58.991391 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jan 23 01:10:58.991399 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 23 01:10:58.991407 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 01:10:58.991414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jan 23 01:10:58.991422 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jan 23 01:10:58.991430 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jan 23 01:10:58.991437 kernel: Zone ranges:
Jan 23 01:10:58.991445 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:10:58.991453 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 01:10:58.991462 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 01:10:58.991469 kernel: Device empty
Jan 23 01:10:58.991477 kernel: Movable zone start for each node
Jan 23 01:10:58.991485 kernel: Early memory node ranges
Jan 23 01:10:58.991492 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 01:10:58.991500 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jan 23 01:10:58.991507 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jan 23 01:10:58.991515 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 23 01:10:58.991522 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 01:10:58.991531 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 23 01:10:58.991539 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:10:58.991547 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 01:10:58.991554 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 23 01:10:58.991562 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jan 23 01:10:58.991570 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 23 01:10:58.991577 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 23 01:10:58.991585 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:10:58.991592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:10:58.991601 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:10:58.991609 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 23 01:10:58.991617 kernel: TSC deadline timer available
Jan 23 01:10:58.991624 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:10:58.991632 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:10:58.991639 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:10:58.991647 kernel: CPU topo: Max. threads per core: 2
Jan 23 01:10:58.991654 kernel: CPU topo: Num. cores per package: 1
Jan 23 01:10:58.991662 kernel: CPU topo: Num. threads per package: 2
Jan 23 01:10:58.991669 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 01:10:58.991678 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 23 01:10:58.991686 kernel: Booting paravirtualized kernel on Hyper-V
Jan 23 01:10:58.991694 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:10:58.991702 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 01:10:58.991709 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 01:10:58.991717 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 01:10:58.991725 kernel: pcpu-alloc: [0] 0 1
Jan 23 01:10:58.991732 kernel: Hyper-V: PV spinlocks enabled
Jan 23 01:10:58.991742 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:10:58.991750 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:10:58.991759 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 01:10:58.991766 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 01:10:58.991774 kernel: Fallback order for Node 0: 0
Jan 23 01:10:58.991781 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jan 23 01:10:58.991789 kernel: Policy zone: Normal
Jan 23 01:10:58.991796 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:10:58.991804 kernel: software IO TLB: area num 2.
Jan 23 01:10:58.991813 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 01:10:58.991821 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:10:58.991828 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:10:58.991836 kernel: Dynamic Preempt: voluntary
Jan 23 01:10:58.991844 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:10:58.991852 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:10:58.991866 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 01:10:58.991876 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:10:58.991884 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:10:58.991892 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:10:58.991901 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:10:58.991910 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 01:10:58.991919 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:10:58.991927 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:10:58.991935 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:10:58.991944 kernel: Using NULL legacy PIC
Jan 23 01:10:58.991953 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 23 01:10:58.991962 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:10:58.991970 kernel: Console: colour dummy device 80x25
Jan 23 01:10:58.991979 kernel: printk: legacy console [tty1] enabled
Jan 23 01:10:58.991987 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:10:58.991995 kernel: printk: legacy bootconsole [earlyser0] disabled
Jan 23 01:10:58.992003 kernel: ACPI: Core revision 20240827
Jan 23 01:10:58.992011 kernel: Failed to register legacy timer interrupt
Jan 23 01:10:58.992020 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:10:58.992029 kernel: x2apic enabled
Jan 23 01:10:58.992038 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:10:58.992046 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 01:10:58.992054 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 01:10:58.992062 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jan 23 01:10:58.992071 kernel: Hyper-V: Using IPI hypercalls
Jan 23 01:10:58.992079 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 23 01:10:58.992096 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 23 01:10:58.992113 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 23 01:10:58.992123 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 23 01:10:58.992131 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 23 01:10:58.992140 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 23 01:10:58.992148 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Jan 23 01:10:58.992157 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Jan 23 01:10:58.992165 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 01:10:58.992173 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 01:10:58.992181 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 01:10:58.992189 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:10:58.992199 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:10:58.992207 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:10:58.992215 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 23 01:10:58.992224 kernel: RETBleed: Vulnerable
Jan 23 01:10:58.992232 kernel: Speculative Store Bypass: Vulnerable
Jan 23 01:10:58.992240 kernel: active return thunk: its_return_thunk
Jan 23 01:10:58.992248 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 01:10:58.992256 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:10:58.992264 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:10:58.992272 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:10:58.992281 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 01:10:58.992290 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 01:10:58.992298 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 01:10:58.992306 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jan 23 01:10:58.992314 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jan 23 01:10:58.992323 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jan 23 01:10:58.992331 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:10:58.992339 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 01:10:58.992347 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 01:10:58.992355 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 01:10:58.992363 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jan 23 01:10:58.992371 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jan 23 01:10:58.992381 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jan 23 01:10:58.992389 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jan 23 01:10:58.992398 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:10:58.992406 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:10:58.992414 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:10:58.992422 kernel: landlock: Up and running.
Jan 23 01:10:58.992430 kernel: SELinux: Initializing.
Jan 23 01:10:58.992438 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:10:58.992446 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:10:58.992454 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jan 23 01:10:58.992462 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jan 23 01:10:58.992471 kernel: signal: max sigframe size: 11952
Jan 23 01:10:58.992481 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:10:58.992489 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:10:58.992498 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:10:58.992506 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:10:58.992514 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:10:58.992522 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:10:58.992531 kernel: .... node #0, CPUs: #1
Jan 23 01:10:58.992539 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:10:58.992547 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 23 01:10:58.992557 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 308180K reserved, 0K cma-reserved)
Jan 23 01:10:58.992566 kernel: devtmpfs: initialized
Jan 23 01:10:58.992574 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:10:58.992583 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 23 01:10:58.992591 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:10:58.992599 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 01:10:58.992608 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:10:58.992616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:10:58.992624 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:10:58.992634 kernel: audit: type=2000 audit(1769130655.080:1): state=initialized audit_enabled=0 res=1
Jan 23 01:10:58.992642 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:10:58.992651 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:10:58.992659 kernel: cpuidle: using governor menu
Jan 23 01:10:58.992667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:10:58.992675 kernel: dca service started, version 1.12.1
Jan 23 01:10:58.992684 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jan 23 01:10:58.992692 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jan 23 01:10:58.992702 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:10:58.992710 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:10:58.992718 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:10:58.992726 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:10:58.992735 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:10:58.992743 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:10:58.992752 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:10:58.992760 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:10:58.992769 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:10:58.992779 kernel: ACPI: Interpreter enabled
Jan 23 01:10:58.992788 kernel: ACPI: PM: (supports S0 S5)
Jan 23 01:10:58.992796 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:10:58.992805 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:10:58.992814 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 23 01:10:58.992822 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 23 01:10:58.992831 kernel: iommu: Default domain type: Translated
Jan 23 01:10:58.992839 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:10:58.992847 kernel: efivars: Registered efivars operations
Jan 23 01:10:58.992857 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:10:58.992865 kernel: PCI: System does not support PCI
Jan 23 01:10:58.992873 kernel: vgaarb: loaded
Jan 23 01:10:58.992881 kernel: clocksource: Switched to clocksource tsc-early
Jan 23 01:10:58.992890 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:10:58.992898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:10:58.992906 kernel: pnp: PnP ACPI init
Jan 23 01:10:58.992914 kernel: pnp: PnP ACPI: found 3 devices
Jan 23 01:10:58.992922 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:10:58.992930 kernel: NET: Registered PF_INET protocol family
Jan 23 01:10:58.992940 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 01:10:58.992949 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 01:10:58.992957 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:10:58.992965 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 01:10:58.992974 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 23 01:10:58.992982 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 01:10:58.992990 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 01:10:58.992999 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 01:10:58.993008 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:10:58.993016 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:10:58.993024 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:10:58.993033 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 01:10:58.993041 kernel: software IO TLB: mapped [mem 0x000000003a9a9000-0x000000003e9a9000] (64MB)
Jan 23 01:10:58.993049 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jan 23 01:10:58.993058 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jan 23 01:10:58.993066 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Jan 23 01:10:58.993074 kernel: clocksource: Switched to clocksource tsc
Jan 23 01:10:58.993084 kernel: Initialise system trusted keyrings
Jan 23 01:10:58.993105 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 23 01:10:58.993113 kernel: Key type asymmetric registered
Jan 23 01:10:58.993121 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:10:58.993129 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:10:58.993138 kernel: io scheduler mq-deadline registered
Jan 23 01:10:58.993146 kernel: io scheduler kyber registered
Jan 23 01:10:58.993154 kernel: io scheduler bfq registered
Jan 23 01:10:58.993162 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:10:58.993172 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:10:58.993180 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:10:58.993189 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 23 01:10:58.993197 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:10:58.993205 kernel: i8042: PNP: No PS/2 controller found.
Jan 23 01:10:58.993332 kernel: rtc_cmos 00:02: registered as rtc0
Jan 23 01:10:58.993405 kernel: rtc_cmos 00:02: setting system clock to 2026-01-23T01:10:58 UTC (1769130658)
Jan 23 01:10:58.993472 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 23 01:10:58.993484 kernel: intel_pstate: Intel P-state driver initializing
Jan 23 01:10:58.993492 kernel: efifb: probing for efifb
Jan 23 01:10:58.993501 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 01:10:58.993509 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 01:10:58.993521 kernel: efifb: scrolling: redraw
Jan 23 01:10:58.993529 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 01:10:58.993537 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 01:10:58.993544 kernel: fb0: EFI VGA frame buffer device
Jan 23 01:10:58.993551 kernel: pstore: Using crash dump compression: deflate
Jan 23 01:10:58.993560 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 01:10:58.993568 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:10:58.993576 kernel: Segment Routing with IPv6
Jan 23 01:10:58.993583 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:10:58.993591 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:10:58.993599 kernel: Key type dns_resolver registered
Jan 23 01:10:58.993606 kernel: IPI shorthand broadcast: enabled
Jan 23 01:10:58.993614 kernel: sched_clock: Marking stable (3057004803, 102003367)->(3477817735, -318809565)
Jan 23 01:10:58.993621 kernel: registered taskstats version 1
Jan 23 01:10:58.993631 kernel: Loading compiled-in X.509 certificates
Jan 23 01:10:58.993639 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:10:58.993648 kernel: Demotion targets for Node 0: null
Jan 23 01:10:58.993656 kernel: Key type .fscrypt registered
Jan 23 01:10:58.993665 kernel: Key type fscrypt-provisioning registered
Jan 23 01:10:58.993673 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:10:58.993682 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:10:58.993690 kernel: ima: No architecture policies found
Jan 23 01:10:58.993699 kernel: clk: Disabling unused clocks
Jan 23 01:10:58.993708 kernel: Warning: unable to open an initial console.
Jan 23 01:10:58.993716 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:10:58.993725 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:10:58.993733 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:10:58.993742 kernel: Run /init as init process
Jan 23 01:10:58.993751 kernel: with arguments:
Jan 23 01:10:58.993759 kernel: /init
Jan 23 01:10:58.993768 kernel: with environment:
Jan 23 01:10:58.993776 kernel: HOME=/
Jan 23 01:10:58.993786 kernel: TERM=linux
Jan 23 01:10:58.993796 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:10:58.993806 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:10:58.993816 systemd[1]: Detected virtualization microsoft.
Jan 23 01:10:58.993824 systemd[1]: Detected architecture x86-64.
Jan 23 01:10:58.993832 systemd[1]: Running in initrd.
Jan 23 01:10:58.993840 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:10:58.993849 systemd[1]: Hostname set to .
Jan 23 01:10:58.993857 systemd[1]: Initializing machine ID from random generator.
Jan 23 01:10:58.993865 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:10:58.993873 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:10:58.993881 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:10:58.993889 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:10:58.993897 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:10:58.995744 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:10:58.995803 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:10:58.995813 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:10:58.995821 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:10:58.995829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:10:58.995837 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:10:58.995845 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:10:58.995853 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:10:58.995862 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:10:58.995870 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:10:58.995878 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:10:58.995885 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:10:58.995893 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:10:58.995901 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:10:58.995909 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:10:58.995917 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:10:58.995924 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:10:58.995933 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:10:58.995941 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:10:58.995949 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:10:58.995957 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:10:58.995965 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:10:58.995973 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:10:58.995981 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:10:58.995988 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:10:58.996004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:10:58.996014 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:10:58.996044 systemd-journald[186]: Collecting audit messages is disabled.
Jan 23 01:10:58.996069 systemd-journald[186]: Journal started
Jan 23 01:10:58.996106 systemd-journald[186]: Runtime Journal (/run/log/journal/499b6152783440f58907b9226480b2d7) is 8M, max 158.6M, 150.6M free.
Jan 23 01:10:58.997326 systemd-modules-load[187]: Inserted module 'overlay'
Jan 23 01:10:59.009190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:10:59.009213 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:10:59.011895 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:10:59.020397 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:10:59.025049 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:10:59.038107 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:10:59.042126 kernel: Bridge firewalling registered
Jan 23 01:10:59.042176 systemd-modules-load[187]: Inserted module 'br_netfilter'
Jan 23 01:10:59.043030 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:10:59.044024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:10:59.051907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:10:59.055361 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:10:59.060538 systemd-tmpfiles[198]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 01:10:59.069705 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:10:59.078195 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:10:59.078388 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:10:59.081209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:10:59.083205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:10:59.103581 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:10:59.109301 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:10:59.116197 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 01:10:59.130528 systemd-resolved[212]: Positive Trust Anchors: Jan 23 01:10:59.130736 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:10:59.130775 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:10:59.133928 systemd-resolved[212]: Defaulting to hostname 'linux'. 
Jan 23 01:10:59.163507 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:10:59.134754 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:10:59.138292 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:10:59.218112 kernel: SCSI subsystem initialized Jan 23 01:10:59.226105 kernel: Loading iSCSI transport class v2.0-870. Jan 23 01:10:59.235111 kernel: iscsi: registered transport (tcp) Jan 23 01:10:59.252273 kernel: iscsi: registered transport (qla4xxx) Jan 23 01:10:59.252320 kernel: QLogic iSCSI HBA Driver Jan 23 01:10:59.266243 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:10:59.283574 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:10:59.289938 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:10:59.321013 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 01:10:59.323772 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 01:10:59.367108 kernel: raid6: avx512x4 gen() 43734 MB/s Jan 23 01:10:59.385105 kernel: raid6: avx512x2 gen() 42492 MB/s Jan 23 01:10:59.403099 kernel: raid6: avx512x1 gen() 27260 MB/s Jan 23 01:10:59.421103 kernel: raid6: avx2x4 gen() 34899 MB/s Jan 23 01:10:59.439103 kernel: raid6: avx2x2 gen() 38459 MB/s Jan 23 01:10:59.456881 kernel: raid6: avx2x1 gen() 30049 MB/s Jan 23 01:10:59.456904 kernel: raid6: using algorithm avx512x4 gen() 43734 MB/s Jan 23 01:10:59.476618 kernel: raid6: .... xor() 7291 MB/s, rmw enabled Jan 23 01:10:59.476649 kernel: raid6: using avx512x2 recovery algorithm Jan 23 01:10:59.495107 kernel: xor: automatically using best checksumming function avx Jan 23 01:10:59.615115 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 01:10:59.620600 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:10:59.628315 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:10:59.647867 systemd-udevd[435]: Using default interface naming scheme 'v255'. Jan 23 01:10:59.651762 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:10:59.657322 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 01:10:59.672720 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation Jan 23 01:10:59.692443 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:10:59.695570 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:10:59.726316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:10:59.732368 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 01:10:59.788112 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 01:10:59.793118 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 01:10:59.801593 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 01:10:59.801628 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 01:10:59.806103 kernel: PTP clock support registered Jan 23 01:10:59.813524 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 01:10:59.813561 kernel: hv_vmbus: registering driver hv_utils Jan 23 01:10:59.817477 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 01:10:59.817511 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 01:10:59.819884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:10:59.366470 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 01:10:59.375944 systemd-journald[186]: Time jumped backwards, rotating. Jan 23 01:10:59.362701 systemd-resolved[212]: Clock change detected. Flushing caches. Jan 23 01:10:59.363098 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:59.370002 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:10:59.385583 kernel: AES CTR mode by8 optimization enabled Jan 23 01:10:59.373238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:10:59.394843 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 01:10:59.399651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:10:59.399722 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:59.410318 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 23 01:10:59.411919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:10:59.430873 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 01:10:59.435628 kernel: hv_vmbus: registering driver hv_pci Jan 23 01:10:59.435657 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 01:10:59.440885 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 01:10:59.444904 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 23 01:10:59.446862 kernel: scsi host0: storvsc_host_t Jan 23 01:10:59.450851 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 23 01:10:59.451029 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 01:10:59.457025 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 23 01:10:59.465417 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 01:10:59.465570 kernel: hv_netvsc f8615163-0000-1000-2000-002248a0ff9e (unnamed net_device) (uninitialized): VF slot 1 added Jan 23 01:10:59.465684 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 23 01:10:59.474170 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 23 01:10:59.480695 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 01:10:59.495561 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 01:10:59.495577 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 23 01:10:59.500930 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 01:10:59.510304 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 01:10:59.510455 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 23 01:10:59.515293 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 01:10:59.515511 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 01:10:59.518364 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 01:10:59.525202 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 23 01:10:59.525387 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 23 01:10:59.537950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 01:10:59.554944 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#44 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 01:10:59.690009 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 01:10:59.695845 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:11:00.008857 kernel: nvme nvme0: using unchecked data buffer Jan 23 01:11:00.262956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jan 23 01:11:00.280118 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 23 01:11:00.280449 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 23 01:11:00.295610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 23 01:11:00.303796 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 23 01:11:00.324402 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jan 23 01:11:00.330212 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:11:00.330920 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 01:11:00.337072 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:11:00.341906 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:11:00.345036 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:11:00.354643 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 01:11:00.371440 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:11:00.503889 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jan 23 01:11:00.504101 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jan 23 01:11:00.506695 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jan 23 01:11:00.508344 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 01:11:00.513879 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jan 23 01:11:00.517872 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jan 23 01:11:00.523059 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jan 23 01:11:00.523081 kernel: pci 7870:00:00.0: enabling Extended Tags Jan 23 01:11:00.539305 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 01:11:00.539503 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jan 23 01:11:00.544102 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jan 23 01:11:00.548454 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jan 23 01:11:00.556859 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Jan 23 01:11:00.560266 kernel: hv_netvsc f8615163-0000-1000-2000-002248a0ff9e eth0: VF registering: eth1 Jan 23 01:11:00.560444 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jan 23 01:11:00.563842 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jan 23 01:11:01.341519 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:11:01.341887 disk-uuid[654]: The operation has completed successfully. Jan 23 01:11:01.404897 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 01:11:01.404996 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 01:11:01.438965 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 01:11:01.450073 sh[701]: Success Jan 23 01:11:01.481093 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 01:11:01.481145 kernel: device-mapper: uevent: version 1.0.3 Jan 23 01:11:01.482757 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 01:11:01.492064 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 01:11:01.730711 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 01:11:01.735948 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 01:11:01.750014 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 01:11:01.763852 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (714) Jan 23 01:11:01.763901 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 01:11:01.765119 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:11:02.111465 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 01:11:02.111564 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 01:11:02.113315 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 01:11:02.144780 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 01:11:02.147263 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:11:02.148803 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 01:11:02.149571 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 01:11:02.155936 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 01:11:02.188860 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:4) scanned by mount (746) Jan 23 01:11:02.192731 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:11:02.192771 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:11:02.215068 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:11:02.215108 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 01:11:02.215119 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:11:02.219872 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:11:02.220617 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 01:11:02.228842 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 01:11:02.253792 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:11:02.256944 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:11:02.291865 systemd-networkd[883]: lo: Link UP Jan 23 01:11:02.291874 systemd-networkd[883]: lo: Gained carrier Jan 23 01:11:02.293105 systemd-networkd[883]: Enumeration completed Jan 23 01:11:02.298708 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 01:11:02.293493 systemd-networkd[883]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:11:02.314344 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 01:11:02.314493 kernel: hv_netvsc f8615163-0000-1000-2000-002248a0ff9e eth0: Data path switched to VF: enP30832s1 Jan 23 01:11:02.293496 systemd-networkd[883]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 01:11:02.294015 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:11:02.297508 systemd[1]: Reached target network.target - Network. Jan 23 01:11:02.304551 systemd-networkd[883]: enP30832s1: Link UP Jan 23 01:11:02.304613 systemd-networkd[883]: eth0: Link UP Jan 23 01:11:02.304703 systemd-networkd[883]: eth0: Gained carrier Jan 23 01:11:02.304715 systemd-networkd[883]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:11:02.308933 systemd-networkd[883]: enP30832s1: Gained carrier Jan 23 01:11:02.326864 systemd-networkd[883]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 01:11:03.472698 ignition[846]: Ignition 2.22.0 Jan 23 01:11:03.472715 ignition[846]: Stage: fetch-offline Jan 23 01:11:03.475906 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:11:03.473681 ignition[846]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:11:03.479584 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 01:11:03.473696 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:11:03.473799 ignition[846]: parsed url from cmdline: "" Jan 23 01:11:03.473802 ignition[846]: no config URL provided Jan 23 01:11:03.473810 ignition[846]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:11:03.473816 ignition[846]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:11:03.473838 ignition[846]: failed to fetch config: resource requires networking Jan 23 01:11:03.474523 ignition[846]: Ignition finished successfully Jan 23 01:11:03.512071 ignition[894]: Ignition 2.22.0 Jan 23 01:11:03.512082 ignition[894]: Stage: fetch Jan 23 01:11:03.512276 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:11:03.512283 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:11:03.512354 ignition[894]: parsed url from cmdline: "" Jan 23 01:11:03.512357 ignition[894]: no config URL provided Jan 23 01:11:03.512362 ignition[894]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:11:03.512367 ignition[894]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:11:03.512387 ignition[894]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 01:11:03.580985 ignition[894]: GET result: OK Jan 23 01:11:03.581032 ignition[894]: config has been read from IMDS userdata Jan 23 01:11:03.581048 ignition[894]: parsing config with SHA512: eeec159c1a3b336afbb5e426962f43439485ac9cd444bea84490bdc071d02439e7e0148df4d3aa2bd4c5204d35497ffa04cfe8ef3aff179670e50fb1f2eb329b Jan 23 01:11:03.586211 unknown[894]: fetched base config from "system" Jan 23 01:11:03.586999 unknown[894]: fetched base config from "system" Jan 23 01:11:03.587249 ignition[894]: fetch: fetch complete Jan 23 01:11:03.587006 unknown[894]: fetched user config from "azure" Jan 23 01:11:03.587253 ignition[894]: fetch: fetch passed Jan 23 01:11:03.591096 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 01:11:03.587298 ignition[894]: Ignition finished successfully Jan 23 01:11:03.593964 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 01:11:03.615958 ignition[901]: Ignition 2.22.0 Jan 23 01:11:03.615968 ignition[901]: Stage: kargs Jan 23 01:11:03.618481 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 01:11:03.616156 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:11:03.622732 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 01:11:03.616163 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:11:03.616744 ignition[901]: kargs: kargs passed Jan 23 01:11:03.616776 ignition[901]: Ignition finished successfully Jan 23 01:11:03.653479 ignition[907]: Ignition 2.22.0 Jan 23 01:11:03.653491 ignition[907]: Stage: disks Jan 23 01:11:03.655847 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 01:11:03.653698 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:11:03.659520 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 01:11:03.653706 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:11:03.662077 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 01:11:03.654335 ignition[907]: disks: disks passed Jan 23 01:11:03.668159 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:11:03.654369 ignition[907]: Ignition finished successfully Jan 23 01:11:03.672073 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:11:03.682872 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:11:03.685040 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 01:11:03.759848 systemd-fsck[916]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jan 23 01:11:03.764899 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:11:03.770858 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:11:04.043118 systemd-networkd[883]: eth0: Gained IPv6LL Jan 23 01:11:04.086841 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:11:04.087150 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:11:04.091261 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:11:04.111432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:11:04.132909 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:11:04.137927 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 01:11:04.151907 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:4) scanned by mount (925) Jan 23 01:11:04.151929 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:11:04.151940 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:11:04.141175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:11:04.157472 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:11:04.157494 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 01:11:04.157510 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:11:04.141207 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:11:04.159981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 01:11:04.163955 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:11:04.167521 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 01:11:04.742009 coreos-metadata[927]: Jan 23 01:11:04.741 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 01:11:04.755651 coreos-metadata[927]: Jan 23 01:11:04.755 INFO Fetch successful Jan 23 01:11:04.757894 coreos-metadata[927]: Jan 23 01:11:04.756 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 01:11:04.766326 coreos-metadata[927]: Jan 23 01:11:04.766 INFO Fetch successful Jan 23 01:11:04.783633 coreos-metadata[927]: Jan 23 01:11:04.783 INFO wrote hostname ci-4459.2.2-n-0e89f49345 to /sysroot/etc/hostname Jan 23 01:11:04.787124 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 01:11:05.041166 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:11:05.097367 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:11:05.133989 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:11:05.139319 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:11:06.316568 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:11:06.319911 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:11:06.320608 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:11:06.339785 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 01:11:06.342428 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:11:06.362158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 01:11:06.367900 ignition[1043]: INFO : Ignition 2.22.0 Jan 23 01:11:06.367900 ignition[1043]: INFO : Stage: mount Jan 23 01:11:06.374936 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:11:06.374936 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:11:06.374936 ignition[1043]: INFO : mount: mount passed Jan 23 01:11:06.374936 ignition[1043]: INFO : Ignition finished successfully Jan 23 01:11:06.370449 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:11:06.372786 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:11:06.386817 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:11:06.411923 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:4) scanned by mount (1056) Jan 23 01:11:06.411957 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:11:06.414859 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:11:06.420663 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:11:06.420697 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 23 01:11:06.422103 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:11:06.424091 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 01:11:06.455253 ignition[1073]: INFO : Ignition 2.22.0 Jan 23 01:11:06.455253 ignition[1073]: INFO : Stage: files Jan 23 01:11:06.458866 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:11:06.458866 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:11:06.458866 ignition[1073]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:11:06.471650 ignition[1073]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:11:06.471650 ignition[1073]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:11:06.516750 ignition[1073]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:11:06.520907 ignition[1073]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:11:06.520907 ignition[1073]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:11:06.517167 unknown[1073]: wrote ssh authorized keys file for user: core Jan 23 01:11:06.535667 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:11:06.540892 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 01:11:06.544921 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:11:06.544921 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:11:06.544921 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:11:06.555872 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 23 01:11:06.555872 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:11:06.555872 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 01:11:06.975759 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 23 01:11:07.573277 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:11:07.577187 ignition[1073]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:11:07.577187 ignition[1073]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:11:07.577187 ignition[1073]: INFO : files: files passed Jan 23 01:11:07.577187 ignition[1073]: INFO : Ignition finished successfully Jan 23 01:11:07.575463 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:11:07.586763 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 01:11:07.596947 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 01:11:07.601034 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 01:11:07.602608 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 01:11:07.614556 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:11:07.614556 initrd-setup-root-after-ignition[1104]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:11:07.620919 initrd-setup-root-after-ignition[1108]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:11:07.620807 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:11:07.625100 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 01:11:07.630743 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:11:07.677381 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:11:07.677467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:11:07.684931 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:11:07.687701 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:11:07.692025 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:11:07.692582 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:11:07.709021 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:11:07.710497 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 01:11:07.727310 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:11:07.731100 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:11:07.737220 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:11:07.741197 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 23 01:11:07.741297 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:11:07.748381 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:11:07.751386 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:11:07.753799 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 01:11:07.756604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:11:07.760582 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:11:07.764122 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:11:07.769932 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:11:07.772219 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:11:07.775459 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:11:07.778341 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:11:07.782001 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:11:07.785913 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:11:07.786029 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:11:07.786539 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:11:07.786782 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:11:07.787011 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:11:07.787468 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:11:07.787553 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:11:07.787648 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:11:07.788083 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 23 01:11:07.788187 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:11:07.788596 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:11:07.788671 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:11:07.799864 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 01:11:07.799962 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 01:11:07.807735 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:11:07.814272 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:11:07.814427 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:11:07.820256 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:11:07.823518 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:11:07.824029 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:11:07.824872 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:11:07.824987 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:11:07.864371 ignition[1128]: INFO : Ignition 2.22.0 Jan 23 01:11:07.864371 ignition[1128]: INFO : Stage: umount Jan 23 01:11:07.864371 ignition[1128]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:11:07.864371 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 01:11:07.864371 ignition[1128]: INFO : umount: umount passed Jan 23 01:11:07.864371 ignition[1128]: INFO : Ignition finished successfully Jan 23 01:11:07.848125 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:11:07.848224 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 01:11:07.868948 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 23 01:11:07.869042 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:11:07.889261 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:11:07.889682 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:11:07.889717 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:11:07.892036 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:11:07.892076 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:11:07.895910 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 01:11:07.895951 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 01:11:07.899892 systemd[1]: Stopped target network.target - Network. Jan 23 01:11:07.903875 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:11:07.903919 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:11:07.908892 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:11:07.912860 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:11:07.913850 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:11:07.916863 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:11:07.920859 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:11:07.925470 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:11:07.925509 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:11:07.930691 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:11:07.930715 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:11:07.934258 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:11:07.934315 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 23 01:11:07.937767 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:11:07.937803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:11:07.941030 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:11:07.944939 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:11:07.954038 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:11:07.954134 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:11:07.963321 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:11:07.963474 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:11:07.963600 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:11:07.967687 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:11:07.968684 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:11:07.973936 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:11:07.973974 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:11:07.977235 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:11:07.981944 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:11:07.982002 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:11:07.986935 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:11:07.986985 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:11:07.989328 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:11:07.989363 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 23 01:11:08.033758 kernel: hv_netvsc f8615163-0000-1000-2000-002248a0ff9e eth0: Data path switched from VF: enP30832s1 Jan 23 01:11:08.033945 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 01:11:07.992922 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:11:07.992969 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:11:07.996298 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:11:08.009086 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:11:08.009139 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:11:08.013277 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:11:08.013456 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:11:08.022055 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:11:08.022115 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:11:08.026916 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:11:08.026944 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:11:08.032404 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:11:08.032568 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:11:08.038294 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:11:08.038349 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:11:08.057872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:11:08.057929 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 23 01:11:08.063796 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:11:08.067112 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:11:08.067172 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:11:08.068675 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:11:08.068711 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:11:08.079509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:11:08.079549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:11:08.088479 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:11:08.088537 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 01:11:08.089896 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:11:08.090238 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:11:08.090318 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:11:08.097093 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:11:08.097166 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:11:08.214359 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:11:08.214459 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:11:08.219129 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:11:08.223888 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:11:08.223940 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 23 01:11:08.229539 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:11:08.251154 systemd[1]: Switching root. Jan 23 01:11:08.339732 systemd-journald[186]: Journal stopped Jan 23 01:11:12.831761 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Jan 23 01:11:12.831789 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:11:12.831803 kernel: SELinux: policy capability open_perms=1 Jan 23 01:11:12.831812 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:11:12.831820 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:11:12.831842 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:11:12.831852 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:11:12.831861 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:11:12.831871 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:11:12.831879 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:11:12.831887 kernel: audit: type=1403 audit(1769130669.374:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:11:12.831896 systemd[1]: Successfully loaded SELinux policy in 202.056ms. Jan 23 01:11:12.831908 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.227ms. Jan 23 01:11:12.831919 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:11:12.831932 systemd[1]: Detected virtualization microsoft. Jan 23 01:11:12.831941 systemd[1]: Detected architecture x86-64. Jan 23 01:11:12.831950 systemd[1]: Detected first boot. Jan 23 01:11:12.831959 systemd[1]: Hostname set to . 
Jan 23 01:11:12.831968 systemd[1]: Initializing machine ID from random generator. Jan 23 01:11:12.831978 zram_generator::config[1172]: No configuration found. Jan 23 01:11:12.831991 kernel: Guest personality initialized and is inactive Jan 23 01:11:12.832000 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 23 01:11:12.832009 kernel: Initialized host personality Jan 23 01:11:12.832019 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:11:12.832028 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:11:12.832038 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:11:12.832048 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:11:12.832057 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:11:12.832069 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:11:12.832078 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:11:12.832088 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:11:12.832097 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:11:12.832106 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:11:12.832115 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:11:12.832125 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 01:11:12.832136 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:11:12.832146 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:11:12.832156 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 23 01:11:12.832165 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:11:12.832174 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:11:12.832186 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:11:12.832196 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:11:12.832207 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:11:12.832219 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:11:12.832230 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:11:12.832240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:11:12.832249 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:11:12.832258 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:11:12.832268 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:11:12.832279 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:11:12.832291 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:11:12.832301 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:11:12.832310 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:11:12.832320 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:11:12.832329 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 01:11:12.832339 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:11:12.832352 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Jan 23 01:11:12.832362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:11:12.832372 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:11:12.832382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:11:12.832391 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:11:12.832401 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:11:12.832410 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:11:12.832421 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:11:12.832431 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:11:12.832442 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:11:12.832453 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:11:12.832463 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 01:11:12.832473 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:11:12.832483 systemd[1]: Reached target machines.target - Containers. Jan 23 01:11:12.832492 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 01:11:12.832502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:11:12.832514 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:11:12.832524 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:11:12.832534 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 23 01:11:12.832543 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:11:12.832553 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:11:12.832563 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:11:12.832572 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:11:12.832582 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:11:12.832594 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:11:12.832605 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:11:12.832615 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:11:12.832624 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:11:12.832634 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:11:12.832644 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:11:12.832654 kernel: loop: module loaded Jan 23 01:11:12.832662 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:11:12.832673 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:11:12.832682 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:11:12.832692 kernel: fuse: init (API version 7.41) Jan 23 01:11:12.832702 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:11:12.832712 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 23 01:11:12.832722 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:11:12.832731 systemd[1]: Stopped verity-setup.service. Jan 23 01:11:12.832741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:11:12.832767 systemd-journald[1258]: Collecting audit messages is disabled. Jan 23 01:11:12.832793 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:11:12.832803 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:11:12.832812 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:11:12.838110 systemd-journald[1258]: Journal started Jan 23 01:11:12.838158 systemd-journald[1258]: Runtime Journal (/run/log/journal/86cffe07ec9c426e9c46a2623a6dfcb7) is 8M, max 158.6M, 150.6M free. Jan 23 01:11:12.838205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:11:12.412932 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:11:12.422368 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 01:11:12.422725 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:11:12.840859 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:11:12.843170 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:11:12.845570 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:11:12.849188 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:11:12.851406 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:11:12.854430 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:11:12.854582 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jan 23 01:11:12.857314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:11:12.857471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:11:12.860165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:11:12.860301 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:11:12.863305 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:11:12.863447 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:11:12.871856 kernel: ACPI: bus type drm_connector registered Jan 23 01:11:12.866373 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:11:12.866522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:11:12.868958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:11:12.871511 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:11:12.873472 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:11:12.873618 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:11:12.877121 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 01:11:12.889606 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:11:12.893719 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:11:12.897910 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:11:12.899305 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:11:12.899903 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:11:12.903248 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jan 23 01:11:12.908729 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:11:12.912040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:11:12.923966 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:11:12.927645 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:11:12.930023 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:11:12.931956 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:11:12.935948 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:11:12.936991 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:11:12.938221 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:11:12.943422 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:11:12.962960 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:11:12.965985 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:11:12.968748 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:11:12.981510 systemd-journald[1258]: Time spent on flushing to /var/log/journal/86cffe07ec9c426e9c46a2623a6dfcb7 is 76.131ms for 974 entries. Jan 23 01:11:12.981510 systemd-journald[1258]: System Journal (/var/log/journal/86cffe07ec9c426e9c46a2623a6dfcb7) is 11.8M, max 2.6G, 2.6G free. Jan 23 01:11:13.169174 systemd-journald[1258]: Received client request to flush runtime journal. 
Jan 23 01:11:13.169221 systemd-journald[1258]: /var/log/journal/86cffe07ec9c426e9c46a2623a6dfcb7/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 23 01:11:13.169240 systemd-journald[1258]: Rotating system journal. Jan 23 01:11:13.169257 kernel: loop0: detected capacity change from 0 to 224512 Jan 23 01:11:12.991523 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:11:12.995248 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:11:12.997740 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:11:13.004985 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:11:13.082376 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:11:13.113218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:11:13.170404 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:11:13.188979 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:11:13.218843 kernel: loop1: detected capacity change from 0 to 27936 Jan 23 01:11:13.259993 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:11:13.264131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:11:13.319906 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Jan 23 01:11:13.319922 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Jan 23 01:11:13.322571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:11:13.424275 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:11:13.637878 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 01:11:13.729517 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 23 01:11:13.735191 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:11:13.766687 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Jan 23 01:11:14.008175 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:11:14.022028 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:11:14.069857 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 01:11:14.107360 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:11:14.144226 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:11:14.182855 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:11:14.183771 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:11:14.205986 kernel: hv_vmbus: registering driver hv_balloon Jan 23 01:11:14.208980 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 01:11:14.219354 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#96 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 01:11:14.269914 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 01:11:14.290313 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 01:11:14.290390 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 01:11:14.293935 kernel: Console: switching to colour dummy device 80x25 Jan 23 01:11:14.301459 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 01:11:14.414154 systemd-networkd[1364]: lo: Link UP Jan 23 01:11:14.414468 systemd-networkd[1364]: lo: Gained carrier Jan 23 01:11:14.416929 systemd-networkd[1364]: Enumeration completed Jan 23 01:11:14.417606 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 01:11:14.417610 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:11:14.417883 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:11:14.421280 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:11:14.427755 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 01:11:14.425942 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:11:14.437093 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 01:11:14.437308 kernel: hv_netvsc f8615163-0000-1000-2000-002248a0ff9e eth0: Data path switched to VF: enP30832s1 Jan 23 01:11:14.437627 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:11:14.437664 systemd-networkd[1364]: enP30832s1: Link UP Jan 23 01:11:14.437740 systemd-networkd[1364]: eth0: Link UP Jan 23 01:11:14.437744 systemd-networkd[1364]: eth0: Gained carrier Jan 23 01:11:14.437755 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:11:14.448060 systemd-networkd[1364]: enP30832s1: Gained carrier Jan 23 01:11:14.464868 systemd-networkd[1364]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 01:11:14.491700 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:11:14.510228 kernel: loop4: detected capacity change from 0 to 224512 Jan 23 01:11:14.510639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:11:14.522566 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. 
Jan 23 01:11:14.526272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:11:14.526689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:11:14.529654 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:11:14.533984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:11:14.539842 kernel: loop5: detected capacity change from 0 to 27936 Jan 23 01:11:14.551848 kernel: loop6: detected capacity change from 0 to 110984 Jan 23 01:11:14.560839 kernel: loop7: detected capacity change from 0 to 128560 Jan 23 01:11:14.566342 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:11:14.593854 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 23 01:11:14.637542 (sd-merge)[1428]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 01:11:14.637943 (sd-merge)[1428]: Merged extensions into '/usr'. Jan 23 01:11:14.641254 systemd[1]: Reload requested from client PID 1312 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:11:14.641266 systemd[1]: Reloading... Jan 23 01:11:14.686851 zram_generator::config[1461]: No configuration found. Jan 23 01:11:14.922322 systemd[1]: Reloading finished in 280 ms. Jan 23 01:11:14.939765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:11:14.942054 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:11:14.951653 systemd[1]: Starting ensure-sysext.service... Jan 23 01:11:14.953567 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:11:14.972177 systemd[1]: Reload requested from client PID 1526 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:11:14.972270 systemd[1]: Reloading... 
Jan 23 01:11:14.994623 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:11:15.008427 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:11:15.008682 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:11:15.008933 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:11:15.009634 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:11:15.010996 systemd-tmpfiles[1527]: ACLs are not supported, ignoring. Jan 23 01:11:15.011056 systemd-tmpfiles[1527]: ACLs are not supported, ignoring. Jan 23 01:11:15.016751 systemd-tmpfiles[1527]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:11:15.016768 systemd-tmpfiles[1527]: Skipping /boot Jan 23 01:11:15.026870 zram_generator::config[1558]: No configuration found. Jan 23 01:11:15.033974 systemd-tmpfiles[1527]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:11:15.033987 systemd-tmpfiles[1527]: Skipping /boot Jan 23 01:11:15.218374 systemd[1]: Reloading finished in 245 ms. Jan 23 01:11:15.236854 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:11:15.245059 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:11:15.262979 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:11:15.274919 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:11:15.280060 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:11:15.285044 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 23 01:11:15.290585 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:11:15.290761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:11:15.293880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:11:15.303045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:11:15.308908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:11:15.310711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:11:15.310859 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:11:15.310966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:11:15.317132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:11:15.317307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:11:15.331957 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:11:15.332135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:11:15.336369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:11:15.336512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:11:15.342575 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:11:15.352021 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 23 01:11:15.352258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:11:15.355443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:11:15.360582 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:11:15.366119 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:11:15.374433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:11:15.376551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:11:15.376678 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:11:15.376959 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:11:15.380033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:11:15.381335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:11:15.381901 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:11:15.384812 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:11:15.386162 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:11:15.392899 systemd[1]: Finished ensure-sysext.service. Jan 23 01:11:15.395232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:11:15.395384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 23 01:11:15.401228 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:11:15.403467 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:11:15.403660 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:11:15.406107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:11:15.407539 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:11:15.457656 systemd-resolved[1621]: Positive Trust Anchors: Jan 23 01:11:15.457673 systemd-resolved[1621]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:11:15.457702 systemd-resolved[1621]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:11:15.475121 systemd-resolved[1621]: Using system hostname 'ci-4459.2.2-n-0e89f49345'. Jan 23 01:11:15.477692 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:11:15.480256 systemd[1]: Reached target network.target - Network. Jan 23 01:11:15.482012 augenrules[1660]: No rules Jan 23 01:11:15.482443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:11:15.484506 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:11:15.484711 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 23 01:11:16.212997 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:11:16.215309 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:11:16.395181 systemd-networkd[1364]: eth0: Gained IPv6LL Jan 23 01:11:16.397602 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:11:16.399562 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:11:18.738418 ldconfig[1307]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:11:18.748393 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:11:18.753044 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:11:18.772093 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:11:18.773914 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:11:18.777046 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:11:18.779912 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:11:18.783913 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:11:18.787000 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:11:18.789941 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:11:18.791687 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 23 01:11:18.795908 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:11:18.795945 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:11:18.798888 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:11:18.800582 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:11:18.803006 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:11:18.807398 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:11:18.809592 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:11:18.812877 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:11:18.822286 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:11:18.825146 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:11:18.828431 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:11:18.831561 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:11:18.832952 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:11:18.834204 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:11:18.834220 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:11:18.851568 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 01:11:18.855921 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:11:18.861941 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:11:18.867185 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 23 01:11:18.873338 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:11:18.876983 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:11:18.882989 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:11:18.885035 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:11:18.887118 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:11:18.889193 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 23 01:11:18.891945 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 01:11:18.894746 jq[1678]: false Jan 23 01:11:18.893897 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 01:11:18.898946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:11:18.903564 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:11:18.910008 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:11:18.916793 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:11:18.921640 KVP[1684]: KVP starting; pid is:1684 Jan 23 01:11:18.925493 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:11:18.930347 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 23 01:11:18.934301 KVP[1684]: KVP LIC Version: 3.1 Jan 23 01:11:18.934839 kernel: hv_utils: KVP IC version 4.0 Jan 23 01:11:18.934952 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:11:18.935384 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:11:18.941548 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:11:18.946973 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:11:18.955028 extend-filesystems[1680]: Found /dev/nvme0n1p6 Jan 23 01:11:18.961070 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:11:18.966376 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:11:18.966603 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:11:18.969128 google_oslogin_nss_cache[1683]: oslogin_cache_refresh[1683]: Refreshing passwd entry cache Jan 23 01:11:18.969137 oslogin_cache_refresh[1683]: Refreshing passwd entry cache Jan 23 01:11:18.974153 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:11:18.975973 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:11:18.979571 extend-filesystems[1680]: Found /dev/nvme0n1p9 Jan 23 01:11:18.987785 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:11:18.989430 extend-filesystems[1680]: Checking size of /dev/nvme0n1p9 Jan 23 01:11:18.994400 jq[1697]: true Jan 23 01:11:18.992053 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 23 01:11:19.004082 google_oslogin_nss_cache[1683]: oslogin_cache_refresh[1683]: Failure getting users, quitting Jan 23 01:11:19.004082 google_oslogin_nss_cache[1683]: oslogin_cache_refresh[1683]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:11:19.001764 oslogin_cache_refresh[1683]: Failure getting users, quitting Jan 23 01:11:19.001782 oslogin_cache_refresh[1683]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:11:19.009139 (ntainerd)[1715]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:11:19.019850 jq[1714]: true Jan 23 01:11:19.020180 chronyd[1673]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 01:11:19.022076 google_oslogin_nss_cache[1683]: oslogin_cache_refresh[1683]: Refreshing group entry cache Jan 23 01:11:19.020461 oslogin_cache_refresh[1683]: Refreshing group entry cache Jan 23 01:11:19.039071 extend-filesystems[1680]: Old size kept for /dev/nvme0n1p9 Jan 23 01:11:19.043158 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:11:19.045724 update_engine[1694]: I20260123 01:11:19.045564 1694 main.cc:92] Flatcar Update Engine starting Jan 23 01:11:19.043360 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:11:19.049223 google_oslogin_nss_cache[1683]: oslogin_cache_refresh[1683]: Failure getting groups, quitting Jan 23 01:11:19.049220 oslogin_cache_refresh[1683]: Failure getting groups, quitting Jan 23 01:11:19.049309 google_oslogin_nss_cache[1683]: oslogin_cache_refresh[1683]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:11:19.049229 oslogin_cache_refresh[1683]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:11:19.050658 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Jan 23 01:11:19.052198 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:11:19.060500 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:11:19.061058 chronyd[1673]: Timezone right/UTC failed leap second check, ignoring Jan 23 01:11:19.061209 chronyd[1673]: Loaded seccomp filter (level 2) Jan 23 01:11:19.062482 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 01:11:19.089530 systemd-logind[1693]: New seat seat0. Jan 23 01:11:19.465220 systemd-logind[1693]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:11:19.465805 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:11:19.527903 bash[1748]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:11:19.529525 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:11:19.533412 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 01:11:19.580308 dbus-daemon[1676]: [system] SELinux support is enabled Jan 23 01:11:19.586512 update_engine[1694]: I20260123 01:11:19.584390 1694 update_check_scheduler.cc:74] Next update check in 6m28s Jan 23 01:11:19.580445 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:11:19.593177 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:11:19.596339 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:11:19.596504 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:11:19.599506 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 23 01:11:19.599613 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:11:19.605674 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:11:19.663883 coreos-metadata[1675]: Jan 23 01:11:19.663 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 01:11:19.666926 coreos-metadata[1675]: Jan 23 01:11:19.666 INFO Fetch successful Jan 23 01:11:19.666926 coreos-metadata[1675]: Jan 23 01:11:19.666 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 01:11:19.674337 coreos-metadata[1675]: Jan 23 01:11:19.672 INFO Fetch successful Jan 23 01:11:19.674337 coreos-metadata[1675]: Jan 23 01:11:19.674 INFO Fetching http://168.63.129.16/machine/d8b7004a-1b79-440e-bebf-a164a8795685/b9079f24%2D6c8e%2D4807%2D944a%2D811f1fa63b1b.%5Fci%2D4459.2.2%2Dn%2D0e89f49345?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 01:11:19.675954 coreos-metadata[1675]: Jan 23 01:11:19.675 INFO Fetch successful Jan 23 01:11:19.676036 coreos-metadata[1675]: Jan 23 01:11:19.675 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 01:11:19.683717 coreos-metadata[1675]: Jan 23 01:11:19.683 INFO Fetch successful Jan 23 01:11:19.709659 sshd_keygen[1727]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:11:19.732974 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:11:19.739403 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:11:19.748719 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:11:19.755209 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:11:19.760736 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 01:11:19.784383 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 23 01:11:19.788992 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:11:19.793015 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:11:19.801024 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 01:11:19.815102 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:11:19.821165 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:11:19.829956 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:11:19.832996 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:11:19.844622 locksmithd[1769]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:11:20.491770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:11:20.496421 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:11:20.662716 containerd[1715]: time="2026-01-23T01:11:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:11:20.664482 containerd[1715]: time="2026-01-23T01:11:20.663740524Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:11:20.675538 containerd[1715]: time="2026-01-23T01:11:20.675496672Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.239µs" Jan 23 01:11:20.675538 containerd[1715]: time="2026-01-23T01:11:20.675535037Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 
Jan 23 01:11:20.675678 containerd[1715]: time="2026-01-23T01:11:20.675557876Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:11:20.675712 containerd[1715]: time="2026-01-23T01:11:20.675698857Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:11:20.675748 containerd[1715]: time="2026-01-23T01:11:20.675723180Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:11:20.675772 containerd[1715]: time="2026-01-23T01:11:20.675758225Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:11:20.675841 containerd[1715]: time="2026-01-23T01:11:20.675811001Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:11:20.675967 containerd[1715]: time="2026-01-23T01:11:20.675952903Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:11:20.676241 containerd[1715]: time="2026-01-23T01:11:20.676217668Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:11:20.676281 containerd[1715]: time="2026-01-23T01:11:20.676241304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:11:20.676281 containerd[1715]: time="2026-01-23T01:11:20.676258550Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:11:20.676281 containerd[1715]: time="2026-01-23T01:11:20.676271186Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 
Jan 23 01:11:20.676364 containerd[1715]: time="2026-01-23T01:11:20.676351826Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:11:20.676565 containerd[1715]: time="2026-01-23T01:11:20.676549442Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:11:20.676598 containerd[1715]: time="2026-01-23T01:11:20.676588917Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:11:20.676617 containerd[1715]: time="2026-01-23T01:11:20.676600299Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:11:20.676638 containerd[1715]: time="2026-01-23T01:11:20.676628928Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:11:20.677944 containerd[1715]: time="2026-01-23T01:11:20.677716573Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:11:20.677944 containerd[1715]: time="2026-01-23T01:11:20.677820400Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:11:20.715467 containerd[1715]: time="2026-01-23T01:11:20.715437153Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:11:20.715593 containerd[1715]: time="2026-01-23T01:11:20.715579862Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:11:20.715747 containerd[1715]: time="2026-01-23T01:11:20.715731710Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 
Jan 23 01:11:20.715838 containerd[1715]: time="2026-01-23T01:11:20.715809725Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:11:20.715919 containerd[1715]: time="2026-01-23T01:11:20.715905542Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:11:20.716047 containerd[1715]: time="2026-01-23T01:11:20.715955423Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:11:20.716047 containerd[1715]: time="2026-01-23T01:11:20.715980822Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:11:20.716047 containerd[1715]: time="2026-01-23T01:11:20.716002037Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:11:20.716047 containerd[1715]: time="2026-01-23T01:11:20.716016067Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:11:20.716047 containerd[1715]: time="2026-01-23T01:11:20.716029937Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:11:20.716146 containerd[1715]: time="2026-01-23T01:11:20.716139792Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:11:20.716181 containerd[1715]: time="2026-01-23T01:11:20.716175325Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:11:20.716366 containerd[1715]: time="2026-01-23T01:11:20.716324224Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:11:20.716366 containerd[1715]: time="2026-01-23T01:11:20.716349051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 
Jan 23 01:11:20.716436 containerd[1715]: time="2026-01-23T01:11:20.716428221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:11:20.716473 containerd[1715]: time="2026-01-23T01:11:20.716466010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:11:20.716579 containerd[1715]: time="2026-01-23T01:11:20.716506096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:11:20.716579 containerd[1715]: time="2026-01-23T01:11:20.716517131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:11:20.716579 containerd[1715]: time="2026-01-23T01:11:20.716541518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:11:20.716579 containerd[1715]: time="2026-01-23T01:11:20.716555994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:11:20.716684 containerd[1715]: time="2026-01-23T01:11:20.716675943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:11:20.716729 containerd[1715]: time="2026-01-23T01:11:20.716721186Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:11:20.716849 containerd[1715]: time="2026-01-23T01:11:20.716756831Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:11:20.716849 containerd[1715]: time="2026-01-23T01:11:20.716805258Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:11:20.716905 containerd[1715]: time="2026-01-23T01:11:20.716898746Z" level=info msg="Start snapshots syncer" Jan 23 01:11:20.717028 containerd[1715]: time="2026-01-23T01:11:20.716954093Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 
Jan 23 01:11:20.717381 containerd[1715]: time="2026-01-23T01:11:20.717310578Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:11:20.717381 containerd[1715]: time="2026-01-23T01:11:20.717354939Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 
Jan 23 01:11:20.717625 containerd[1715]: time="2026-01-23T01:11:20.717534758Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:11:20.717759 containerd[1715]: time="2026-01-23T01:11:20.717696038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:11:20.717759 containerd[1715]: time="2026-01-23T01:11:20.717715482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:11:20.717759 containerd[1715]: time="2026-01-23T01:11:20.717727022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:11:20.717759 containerd[1715]: time="2026-01-23T01:11:20.717737554Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:11:20.717940 containerd[1715]: time="2026-01-23T01:11:20.717748886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:11:20.717940 containerd[1715]: time="2026-01-23T01:11:20.717875896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:11:20.717940 containerd[1715]: time="2026-01-23T01:11:20.717886809Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:11:20.717940 containerd[1715]: time="2026-01-23T01:11:20.717910704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:11:20.717940 containerd[1715]: time="2026-01-23T01:11:20.717921532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:11:20.718117 containerd[1715]: time="2026-01-23T01:11:20.717931171Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:11:20.718117 containerd[1715]: 
time="2026-01-23T01:11:20.718067168Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:11:20.718117 containerd[1715]: time="2026-01-23T01:11:20.718079977Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:11:20.718117 containerd[1715]: time="2026-01-23T01:11:20.718088242Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:11:20.718117 containerd[1715]: time="2026-01-23T01:11:20.718097391Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:11:20.718317 containerd[1715]: time="2026-01-23T01:11:20.718104677Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:11:20.718317 containerd[1715]: time="2026-01-23T01:11:20.718252018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:11:20.718317 containerd[1715]: time="2026-01-23T01:11:20.718265694Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:11:20.718317 containerd[1715]: time="2026-01-23T01:11:20.718277526Z" level=info msg="runtime interface created" Jan 23 01:11:20.718317 containerd[1715]: time="2026-01-23T01:11:20.718281042Z" level=info msg="created NRI interface" Jan 23 01:11:20.718317 containerd[1715]: time="2026-01-23T01:11:20.718286210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:11:20.718317 containerd[1715]: time="2026-01-23T01:11:20.718296935Z" level=info msg="Connect containerd service" Jan 23 01:11:20.718528 containerd[1715]: time="2026-01-23T01:11:20.718455929Z" level=info msg="using experimental NRI integration - 
disable nri plugin to prevent this" Jan 23 01:11:20.719329 containerd[1715]: time="2026-01-23T01:11:20.719287239Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:11:20.985030 kubelet[1813]: E0123 01:11:20.984987 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:11:20.986997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:11:20.987135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:11:20.987579 systemd[1]: kubelet.service: Consumed 901ms CPU time, 264.7M memory peak. Jan 23 01:11:21.528977 containerd[1715]: time="2026-01-23T01:11:21.528909773Z" level=info msg="Start subscribing containerd event" Jan 23 01:11:21.529279 containerd[1715]: time="2026-01-23T01:11:21.529119995Z" level=info msg="Start recovering state" Jan 23 01:11:21.529322 containerd[1715]: time="2026-01-23T01:11:21.529123155Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:11:21.529436 containerd[1715]: time="2026-01-23T01:11:21.529359509Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 01:11:21.529436 containerd[1715]: time="2026-01-23T01:11:21.529386398Z" level=info msg="Start event monitor" Jan 23 01:11:21.529436 containerd[1715]: time="2026-01-23T01:11:21.529400069Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:11:21.529436 containerd[1715]: time="2026-01-23T01:11:21.529411482Z" level=info msg="Start streaming server" Jan 23 01:11:21.529700 containerd[1715]: time="2026-01-23T01:11:21.529420637Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:11:21.529700 containerd[1715]: time="2026-01-23T01:11:21.529555210Z" level=info msg="runtime interface starting up..." Jan 23 01:11:21.529700 containerd[1715]: time="2026-01-23T01:11:21.529562837Z" level=info msg="starting plugins..." Jan 23 01:11:21.529700 containerd[1715]: time="2026-01-23T01:11:21.529576423Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:11:21.529961 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:11:21.530372 containerd[1715]: time="2026-01-23T01:11:21.530287087Z" level=info msg="containerd successfully booted in 0.868669s" Jan 23 01:11:21.536046 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:11:21.538448 systemd[1]: Startup finished in 3.182s (kernel) + 10.922s (initrd) + 12.364s (userspace) = 26.468s. 
Jan 23 01:11:21.831489 waagent[1794]: 2026-01-23T01:11:21.831362Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.831704Z INFO Daemon Daemon OS: flatcar 4459.2.2 Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.831962Z INFO Daemon Daemon Python: 3.11.13 Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.832193Z INFO Daemon Daemon Run daemon Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.832398Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.832666Z INFO Daemon Daemon Using waagent for provisioning Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.832899Z INFO Daemon Daemon Activate resource disk Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.832974Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.834546Z INFO Daemon Daemon Found device: None Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.834696Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.834757Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.835486Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 01:11:21.839901 waagent[1794]: 2026-01-23T01:11:21.836003Z INFO Daemon Daemon Running default provisioning handler Jan 23 01:11:21.844547 login[1798]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 01:11:21.847978 login[1799]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 01:11:21.855789 waagent[1794]: 
2026-01-23T01:11:21.855359Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 01:11:21.859750 waagent[1794]: 2026-01-23T01:11:21.859709Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 01:11:21.863689 waagent[1794]: 2026-01-23T01:11:21.863583Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 01:11:21.864073 waagent[1794]: 2026-01-23T01:11:21.863986Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 01:11:21.873032 systemd-logind[1693]: New session 1 of user core. Jan 23 01:11:21.873950 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:11:21.875908 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:11:21.880024 systemd-logind[1693]: New session 2 of user core. Jan 23 01:11:21.895384 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:11:21.897292 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:11:21.908564 (systemd)[1849]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:11:21.910428 systemd-logind[1693]: New session c1 of user core. Jan 23 01:11:21.978085 waagent[1794]: 2026-01-23T01:11:21.978012Z INFO Daemon Daemon Successfully mounted dvd Jan 23 01:11:22.007979 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 23 01:11:22.011530 waagent[1794]: 2026-01-23T01:11:22.011462Z INFO Daemon Daemon Detect protocol endpoint Jan 23 01:11:22.023115 waagent[1794]: 2026-01-23T01:11:22.011689Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 01:11:22.023115 waagent[1794]: 2026-01-23T01:11:22.012177Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 23 01:11:22.023115 waagent[1794]: 2026-01-23T01:11:22.012471Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 01:11:22.023115 waagent[1794]: 2026-01-23T01:11:22.012631Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 01:11:22.023115 waagent[1794]: 2026-01-23T01:11:22.012741Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 01:11:22.040036 waagent[1794]: 2026-01-23T01:11:22.037663Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 01:11:22.040307 waagent[1794]: 2026-01-23T01:11:22.040289Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 01:11:22.042443 waagent[1794]: 2026-01-23T01:11:22.042418Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 01:11:22.083959 systemd[1849]: Queued start job for default target default.target. Jan 23 01:11:22.090230 systemd[1849]: Created slice app.slice - User Application Slice. Jan 23 01:11:22.090548 systemd[1849]: Reached target paths.target - Paths. Jan 23 01:11:22.090639 systemd[1849]: Reached target timers.target - Timers. Jan 23 01:11:22.092938 systemd[1849]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:11:22.112059 systemd[1849]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:11:22.112245 systemd[1849]: Reached target sockets.target - Sockets. Jan 23 01:11:22.112373 systemd[1849]: Reached target basic.target - Basic System. Jan 23 01:11:22.112479 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:11:22.112599 systemd[1849]: Reached target default.target - Main User Target. Jan 23 01:11:22.112939 systemd[1849]: Startup finished in 197ms. Jan 23 01:11:22.117807 waagent[1794]: 2026-01-23T01:11:22.117740Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 01:11:22.118300 waagent[1794]: 2026-01-23T01:11:22.118258Z INFO Daemon Daemon Forcing an update of the goal state. 
Jan 23 01:11:22.118788 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:11:22.119567 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:11:22.130724 waagent[1794]: 2026-01-23T01:11:22.130679Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 01:11:22.147848 waagent[1794]: 2026-01-23T01:11:22.146583Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 01:11:22.147848 waagent[1794]: 2026-01-23T01:11:22.147088Z INFO Daemon Jan 23 01:11:22.147848 waagent[1794]: 2026-01-23T01:11:22.147328Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d3d27013-7037-46ae-aaf8-027583e9a939 eTag: 9552570303498292105 source: Fabric] Jan 23 01:11:22.147848 waagent[1794]: 2026-01-23T01:11:22.147558Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 01:11:22.147988 waagent[1794]: 2026-01-23T01:11:22.147965Z INFO Daemon Jan 23 01:11:22.148082 waagent[1794]: 2026-01-23T01:11:22.148067Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 01:11:22.156842 waagent[1794]: 2026-01-23T01:11:22.156581Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 01:11:22.232745 waagent[1794]: 2026-01-23T01:11:22.232691Z INFO Daemon Downloaded certificate {'thumbprint': '0C352C2EFBA09B05AD980DEBD8506C40F8267E21', 'hasPrivateKey': True} Jan 23 01:11:22.234857 waagent[1794]: 2026-01-23T01:11:22.233349Z INFO Daemon Fetch goal state completed Jan 23 01:11:22.242688 waagent[1794]: 2026-01-23T01:11:22.242633Z INFO Daemon Daemon Starting provisioning Jan 23 01:11:22.243082 waagent[1794]: 2026-01-23T01:11:22.242803Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 23 01:11:22.243130 waagent[1794]: 2026-01-23T01:11:22.243090Z INFO Daemon Daemon Set hostname [ci-4459.2.2-n-0e89f49345] Jan 23 01:11:22.263658 waagent[1794]: 2026-01-23T01:11:22.263618Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-n-0e89f49345] Jan 23 01:11:22.268277 waagent[1794]: 2026-01-23T01:11:22.263925Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 01:11:22.268277 waagent[1794]: 2026-01-23T01:11:22.264191Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 01:11:22.271676 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:11:22.271682 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:11:22.271704 systemd-networkd[1364]: eth0: DHCP lease lost Jan 23 01:11:22.272541 waagent[1794]: 2026-01-23T01:11:22.272498Z INFO Daemon Daemon Create user account if not exists Jan 23 01:11:22.274728 waagent[1794]: 2026-01-23T01:11:22.272692Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 01:11:22.274728 waagent[1794]: 2026-01-23T01:11:22.272982Z INFO Daemon Daemon Configure sudoer Jan 23 01:11:22.277460 waagent[1794]: 2026-01-23T01:11:22.277407Z INFO Daemon Daemon Configure sshd Jan 23 01:11:22.281804 waagent[1794]: 2026-01-23T01:11:22.281444Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 01:11:22.284293 waagent[1794]: 2026-01-23T01:11:22.284217Z INFO Daemon Daemon Deploy ssh public key. 
Jan 23 01:11:22.286894 systemd-networkd[1364]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 23 01:11:23.354068 waagent[1794]: 2026-01-23T01:11:23.354025Z INFO Daemon Daemon Provisioning complete Jan 23 01:11:23.367781 waagent[1794]: 2026-01-23T01:11:23.367747Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 01:11:23.368597 waagent[1794]: 2026-01-23T01:11:23.367983Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 01:11:23.368597 waagent[1794]: 2026-01-23T01:11:23.368247Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 01:11:23.473554 waagent[1892]: 2026-01-23T01:11:23.473466Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 01:11:23.473916 waagent[1892]: 2026-01-23T01:11:23.473583Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Jan 23 01:11:23.473916 waagent[1892]: 2026-01-23T01:11:23.473623Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 01:11:23.473916 waagent[1892]: 2026-01-23T01:11:23.473660Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 23 01:11:23.511089 waagent[1892]: 2026-01-23T01:11:23.511027Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 01:11:23.511225 waagent[1892]: 2026-01-23T01:11:23.511198Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 01:11:23.511294 waagent[1892]: 2026-01-23T01:11:23.511257Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 01:11:23.527222 waagent[1892]: 2026-01-23T01:11:23.527163Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 01:11:23.546232 waagent[1892]: 2026-01-23T01:11:23.546199Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 
01:11:23.546568 waagent[1892]: 2026-01-23T01:11:23.546540Z INFO ExtHandler Jan 23 01:11:23.546613 waagent[1892]: 2026-01-23T01:11:23.546595Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: abba404b-feab-4b8a-9ba8-2a0f77d18df6 eTag: 9552570303498292105 source: Fabric] Jan 23 01:11:23.546840 waagent[1892]: 2026-01-23T01:11:23.546799Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 01:11:23.547193 waagent[1892]: 2026-01-23T01:11:23.547165Z INFO ExtHandler Jan 23 01:11:23.547237 waagent[1892]: 2026-01-23T01:11:23.547208Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 01:11:23.562544 waagent[1892]: 2026-01-23T01:11:23.562514Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 01:11:23.625447 waagent[1892]: 2026-01-23T01:11:23.625352Z INFO ExtHandler Downloaded certificate {'thumbprint': '0C352C2EFBA09B05AD980DEBD8506C40F8267E21', 'hasPrivateKey': True} Jan 23 01:11:23.625785 waagent[1892]: 2026-01-23T01:11:23.625752Z INFO ExtHandler Fetch goal state completed Jan 23 01:11:23.644836 waagent[1892]: 2026-01-23T01:11:23.644780Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 23 01:11:23.648713 waagent[1892]: 2026-01-23T01:11:23.648663Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1892 Jan 23 01:11:23.648809 waagent[1892]: 2026-01-23T01:11:23.648783Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 01:11:23.649073 waagent[1892]: 2026-01-23T01:11:23.649048Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 01:11:23.650149 waagent[1892]: 2026-01-23T01:11:23.650111Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 01:11:23.650443 waagent[1892]: 
2026-01-23T01:11:23.650412Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 01:11:23.650552 waagent[1892]: 2026-01-23T01:11:23.650527Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 23 01:11:23.650985 waagent[1892]: 2026-01-23T01:11:23.650959Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 01:11:23.734740 waagent[1892]: 2026-01-23T01:11:23.734707Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 01:11:23.734932 waagent[1892]: 2026-01-23T01:11:23.734906Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 01:11:23.740360 waagent[1892]: 2026-01-23T01:11:23.740294Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 01:11:23.745632 systemd[1]: Reload requested from client PID 1907 ('systemctl') (unit waagent.service)... Jan 23 01:11:23.745646 systemd[1]: Reloading... Jan 23 01:11:23.818851 zram_generator::config[1949]: No configuration found. Jan 23 01:11:23.993077 systemd[1]: Reloading finished in 247 ms. Jan 23 01:11:24.006231 waagent[1892]: 2026-01-23T01:11:24.005486Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 01:11:24.006231 waagent[1892]: 2026-01-23T01:11:24.005624Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 01:11:24.013844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#96 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jan 23 01:11:24.251810 waagent[1892]: 2026-01-23T01:11:24.251695Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Jan 23 01:11:24.252048 waagent[1892]: 2026-01-23T01:11:24.252018Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 23 01:11:24.252681 waagent[1892]: 2026-01-23T01:11:24.252624Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 01:11:24.252962 waagent[1892]: 2026-01-23T01:11:24.252927Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 01:11:24.253288 waagent[1892]: 2026-01-23T01:11:24.253253Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 01:11:24.253351 waagent[1892]: 2026-01-23T01:11:24.253329Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 01:11:24.253351 waagent[1892]: 2026-01-23T01:11:24.253379Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 01:11:24.253591 waagent[1892]: 2026-01-23T01:11:24.253571Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 23 01:11:24.253690 waagent[1892]: 2026-01-23T01:11:24.253666Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 01:11:24.253979 waagent[1892]: 2026-01-23T01:11:24.253954Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 01:11:24.253979 waagent[1892]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 01:11:24.253979 waagent[1892]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 01:11:24.253979 waagent[1892]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 01:11:24.253979 waagent[1892]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 01:11:24.253979 waagent[1892]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 01:11:24.253979 waagent[1892]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 01:11:24.254133 waagent[1892]: 2026-01-23T01:11:24.254100Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 01:11:24.254163 waagent[1892]: 2026-01-23T01:11:24.254142Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 01:11:24.254519 waagent[1892]: 2026-01-23T01:11:24.254496Z INFO EnvHandler ExtHandler Configure routes Jan 23 01:11:24.254605 waagent[1892]: 2026-01-23T01:11:24.254562Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 01:11:24.254643 waagent[1892]: 2026-01-23T01:11:24.254623Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 23 01:11:24.254859 waagent[1892]: 2026-01-23T01:11:24.254816Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 01:11:24.255076 waagent[1892]: 2026-01-23T01:11:24.255058Z INFO EnvHandler ExtHandler Gateway:None Jan 23 01:11:24.255538 waagent[1892]: 2026-01-23T01:11:24.255507Z INFO EnvHandler ExtHandler Routes:None Jan 23 01:11:24.262476 waagent[1892]: 2026-01-23T01:11:24.262436Z INFO ExtHandler ExtHandler Jan 23 01:11:24.262544 waagent[1892]: 2026-01-23T01:11:24.262500Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e6a5762f-0e65-42f3-9113-49e97b6df96e correlation 84b4aa88-03f0-4df4-bda1-48c7d9c44ad7 created: 2026-01-23T01:10:25.593651Z] Jan 23 01:11:24.262779 waagent[1892]: 2026-01-23T01:11:24.262751Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 01:11:24.263176 waagent[1892]: 2026-01-23T01:11:24.263150Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 23 01:11:24.294787 waagent[1892]: 2026-01-23T01:11:24.293708Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 01:11:24.294787 waagent[1892]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 23 01:11:24.294787 waagent[1892]: 2026-01-23T01:11:24.294121Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E3483EFE-856C-48C5-B804-CD30C0D92C41;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 23 01:11:24.313207 waagent[1892]: 2026-01-23T01:11:24.313156Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 01:11:24.313207 waagent[1892]: Executing ['ip', '-a', '-o', 'link']: Jan 23 01:11:24.313207 waagent[1892]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 01:11:24.313207 waagent[1892]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:ff:9e brd ff:ff:ff:ff:ff:ff\ alias Network Device Jan 23 01:11:24.313207 waagent[1892]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:ff:9e brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 23 01:11:24.313207 waagent[1892]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 01:11:24.313207 waagent[1892]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 01:11:24.313207 waagent[1892]: 2: eth0 inet 10.200.8.14/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 01:11:24.313207 waagent[1892]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 01:11:24.313207 waagent[1892]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 01:11:24.313207 waagent[1892]: 2: eth0 inet6 fe80::222:48ff:fea0:ff9e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 01:11:24.348781 waagent[1892]: 2026-01-23T01:11:24.348731Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 23 01:11:24.348781 waagent[1892]: Chain INPUT (policy ACCEPT 0 
packets, 0 bytes) Jan 23 01:11:24.348781 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 01:11:24.348781 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:11:24.348781 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 01:11:24.348781 waagent[1892]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:11:24.348781 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 01:11:24.348781 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 01:11:24.348781 waagent[1892]: 4 348 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 01:11:24.348781 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 01:11:24.351467 waagent[1892]: 2026-01-23T01:11:24.351420Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 01:11:24.351467 waagent[1892]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:11:24.351467 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 01:11:24.351467 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:11:24.351467 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 01:11:24.351467 waagent[1892]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 01:11:24.351467 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 01:11:24.351467 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 01:11:24.351467 waagent[1892]: 7 883 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 01:11:24.351467 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 01:11:31.164508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:11:31.166259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 01:11:31.692891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:11:31.697155 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:11:31.736662 kubelet[2044]: E0123 01:11:31.736624 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:11:31.739480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:11:31.739613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:11:31.739928 systemd[1]: kubelet.service: Consumed 137ms CPU time, 109.2M memory peak. Jan 23 01:11:41.598154 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:11:41.599185 systemd[1]: Started sshd@0-10.200.8.14:22-10.200.16.10:55600.service - OpenSSH per-connection server daemon (10.200.16.10:55600). Jan 23 01:11:41.914595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 01:11:41.916134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:11:42.390857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:11:42.401815 sshd[2052]: Accepted publickey for core from 10.200.16.10 port 55600 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:11:42.402537 sshd-session[2052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:42.403184 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:11:42.407479 systemd-logind[1693]: New session 3 of user core. 
Jan 23 01:11:42.410996 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:11:42.440527 kubelet[2063]: E0123 01:11:42.440503 2063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:11:42.442263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:11:42.442381 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:11:42.442660 systemd[1]: kubelet.service: Consumed 127ms CPU time, 107.8M memory peak. Jan 23 01:11:42.849107 chronyd[1673]: Selected source PHC0 Jan 23 01:11:42.993500 systemd[1]: Started sshd@1-10.200.8.14:22-10.200.16.10:55610.service - OpenSSH per-connection server daemon (10.200.16.10:55610). Jan 23 01:11:43.669933 sshd[2073]: Accepted publickey for core from 10.200.16.10 port 55610 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:11:43.671090 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:43.675264 systemd-logind[1693]: New session 4 of user core. Jan 23 01:11:43.683974 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:11:44.143845 sshd[2076]: Connection closed by 10.200.16.10 port 55610 Jan 23 01:11:44.145246 sshd-session[2073]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:44.148383 systemd-logind[1693]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:11:44.148637 systemd[1]: sshd@1-10.200.8.14:22-10.200.16.10:55610.service: Deactivated successfully. Jan 23 01:11:44.150322 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:11:44.151766 systemd-logind[1693]: Removed session 4. 
Jan 23 01:11:44.276505 systemd[1]: Started sshd@2-10.200.8.14:22-10.200.16.10:55626.service - OpenSSH per-connection server daemon (10.200.16.10:55626). Jan 23 01:11:44.957870 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 55626 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:11:44.958646 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:44.962959 systemd-logind[1693]: New session 5 of user core. Jan 23 01:11:44.968974 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:11:45.430249 sshd[2085]: Connection closed by 10.200.16.10 port 55626 Jan 23 01:11:45.431805 sshd-session[2082]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:45.435106 systemd-logind[1693]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:11:45.435283 systemd[1]: sshd@2-10.200.8.14:22-10.200.16.10:55626.service: Deactivated successfully. Jan 23 01:11:45.437133 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:11:45.438529 systemd-logind[1693]: Removed session 5. Jan 23 01:11:45.547550 systemd[1]: Started sshd@3-10.200.8.14:22-10.200.16.10:55628.service - OpenSSH per-connection server daemon (10.200.16.10:55628). Jan 23 01:11:46.232525 sshd[2091]: Accepted publickey for core from 10.200.16.10 port 55628 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:11:46.233699 sshd-session[2091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:46.237948 systemd-logind[1693]: New session 6 of user core. Jan 23 01:11:46.243994 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 01:11:46.705451 sshd[2094]: Connection closed by 10.200.16.10 port 55628 Jan 23 01:11:46.706024 sshd-session[2091]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:46.709281 systemd[1]: sshd@3-10.200.8.14:22-10.200.16.10:55628.service: Deactivated successfully. 
Jan 23 01:11:46.711452 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:11:46.712303 systemd-logind[1693]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:11:46.713421 systemd-logind[1693]: Removed session 6. Jan 23 01:11:46.823421 systemd[1]: Started sshd@4-10.200.8.14:22-10.200.16.10:55630.service - OpenSSH per-connection server daemon (10.200.16.10:55630). Jan 23 01:11:47.501291 sshd[2100]: Accepted publickey for core from 10.200.16.10 port 55630 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:11:47.502295 sshd-session[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:47.506589 systemd-logind[1693]: New session 7 of user core. Jan 23 01:11:47.511978 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:11:48.017632 sudo[2104]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:11:48.017878 sudo[2104]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:11:48.053570 sudo[2104]: pam_unix(sudo:session): session closed for user root Jan 23 01:11:48.161615 sshd[2103]: Connection closed by 10.200.16.10 port 55630 Jan 23 01:11:48.163039 sshd-session[2100]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:48.166632 systemd-logind[1693]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:11:48.166947 systemd[1]: sshd@4-10.200.8.14:22-10.200.16.10:55630.service: Deactivated successfully. Jan 23 01:11:48.168627 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:11:48.170112 systemd-logind[1693]: Removed session 7. Jan 23 01:11:48.282669 systemd[1]: Started sshd@5-10.200.8.14:22-10.200.16.10:55646.service - OpenSSH per-connection server daemon (10.200.16.10:55646). 
Jan 23 01:11:48.961860 sshd[2110]: Accepted publickey for core from 10.200.16.10 port 55646 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:11:48.962805 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:48.967400 systemd-logind[1693]: New session 8 of user core. Jan 23 01:11:48.972950 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:11:49.329733 sudo[2115]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:11:49.329988 sudo[2115]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:11:49.336404 sudo[2115]: pam_unix(sudo:session): session closed for user root Jan 23 01:11:49.340416 sudo[2114]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:11:49.340630 sudo[2114]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:11:49.348070 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:11:49.376884 augenrules[2137]: No rules Jan 23 01:11:49.377785 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:11:49.378015 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:11:49.378612 sudo[2114]: pam_unix(sudo:session): session closed for user root Jan 23 01:11:49.486852 sshd[2113]: Connection closed by 10.200.16.10 port 55646 Jan 23 01:11:49.488017 sshd-session[2110]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:49.490730 systemd[1]: sshd@5-10.200.8.14:22-10.200.16.10:55646.service: Deactivated successfully. Jan 23 01:11:49.492242 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:11:49.493994 systemd-logind[1693]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:11:49.494751 systemd-logind[1693]: Removed session 8. 
Jan 23 01:11:49.609942 systemd[1]: Started sshd@6-10.200.8.14:22-10.200.16.10:40016.service - OpenSSH per-connection server daemon (10.200.16.10:40016). Jan 23 01:11:50.293354 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 40016 ssh2: RSA SHA256:ZNqmZSbvXn2N0HVppthi7f6PjojzkUQ0aRQFpEvxwIk Jan 23 01:11:50.294506 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:11:50.299076 systemd-logind[1693]: New session 9 of user core. Jan 23 01:11:50.308003 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:11:50.660587 sudo[2150]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:11:50.660809 sudo[2150]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:11:51.160679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:11:51.161004 systemd[1]: kubelet.service: Consumed 127ms CPU time, 107.8M memory peak. Jan 23 01:11:51.163211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:11:51.191682 systemd[1]: Reload requested from client PID 2183 ('systemctl') (unit session-9.scope)... Jan 23 01:11:51.191695 systemd[1]: Reloading... Jan 23 01:11:51.272900 zram_generator::config[2232]: No configuration found. Jan 23 01:11:51.460351 systemd[1]: Reloading finished in 268 ms. Jan 23 01:11:51.486544 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:11:51.486616 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:11:51.486868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:11:51.486915 systemd[1]: kubelet.service: Consumed 68ms CPU time, 70M memory peak. Jan 23 01:11:51.489156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:11:52.055039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 01:11:52.064165 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:11:52.097896 kubelet[2296]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:11:52.097896 kubelet[2296]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:11:52.097896 kubelet[2296]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:11:52.098184 kubelet[2296]: I0123 01:11:52.098006 2296 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:11:52.383450 kubelet[2296]: I0123 01:11:52.383342 2296 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:11:52.383450 kubelet[2296]: I0123 01:11:52.383369 2296 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:11:52.385858 kubelet[2296]: I0123 01:11:52.384414 2296 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:11:52.411010 kubelet[2296]: I0123 01:11:52.410984 2296 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:11:52.418004 kubelet[2296]: I0123 01:11:52.417980 2296 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:11:52.420203 kubelet[2296]: I0123 01:11:52.420175 2296 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:11:52.420377 kubelet[2296]: I0123 01:11:52.420348 2296 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:11:52.420531 kubelet[2296]: I0123 01:11:52.420373 2296 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.8.14","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:11:52.420642 kubelet[2296]: I0123 01:11:52.420537 2296 topology_manager.go:138] "Creating topology manager with none 
policy" Jan 23 01:11:52.420642 kubelet[2296]: I0123 01:11:52.420548 2296 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:11:52.420690 kubelet[2296]: I0123 01:11:52.420648 2296 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:11:52.423277 kubelet[2296]: I0123 01:11:52.423261 2296 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:11:52.423342 kubelet[2296]: I0123 01:11:52.423285 2296 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:11:52.423342 kubelet[2296]: I0123 01:11:52.423308 2296 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:11:52.423342 kubelet[2296]: I0123 01:11:52.423318 2296 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:11:52.428120 kubelet[2296]: E0123 01:11:52.427622 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:52.428120 kubelet[2296]: E0123 01:11:52.427647 2296 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:52.428120 kubelet[2296]: I0123 01:11:52.427928 2296 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:11:52.429360 kubelet[2296]: I0123 01:11:52.429345 2296 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 01:11:52.429888 kubelet[2296]: W0123 01:11:52.429871 2296 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 23 01:11:52.431920 kubelet[2296]: I0123 01:11:52.431899 2296 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:11:52.431994 kubelet[2296]: I0123 01:11:52.431929 2296 server.go:1287] "Started kubelet" Jan 23 01:11:52.433675 kubelet[2296]: I0123 01:11:52.433652 2296 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:11:52.434844 kubelet[2296]: I0123 01:11:52.434649 2296 server.go:479] "Adding debug handlers to kubelet server" Jan 23 01:11:52.437010 kubelet[2296]: I0123 01:11:52.436993 2296 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:11:52.437590 kubelet[2296]: I0123 01:11:52.437537 2296 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:11:52.437765 kubelet[2296]: I0123 01:11:52.437750 2296 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:11:52.438881 kubelet[2296]: I0123 01:11:52.438863 2296 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:11:52.446399 kubelet[2296]: E0123 01:11:52.442941 2296 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.8.14.188d37027bbce3ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.8.14,UID:10.200.8.14,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.8.14,},FirstTimestamp:2026-01-23 01:11:52.431911914 +0000 UTC m=+0.364298657,LastTimestamp:2026-01-23 01:11:52.431911914 +0000 UTC m=+0.364298657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.8.14,}" Jan 23 01:11:52.447646 kubelet[2296]: I0123 01:11:52.447628 2296 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:11:52.447961 kubelet[2296]: E0123 01:11:52.447947 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:52.450560 kubelet[2296]: I0123 01:11:52.450534 2296 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:11:52.451470 kubelet[2296]: I0123 01:11:52.451454 2296 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:11:52.452271 kubelet[2296]: W0123 01:11:52.452256 2296 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.8.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 23 01:11:52.452377 kubelet[2296]: E0123 01:11:52.452365 2296 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.200.8.14\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 23 01:11:52.452451 kubelet[2296]: E0123 01:11:52.452439 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.14\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 23 01:11:52.452518 kubelet[2296]: W0123 01:11:52.452512 2296 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster 
scope Jan 23 01:11:52.452561 kubelet[2296]: E0123 01:11:52.452554 2296 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 23 01:11:52.452682 kubelet[2296]: I0123 01:11:52.452662 2296 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:11:52.454145 kubelet[2296]: I0123 01:11:52.454132 2296 factory.go:221] Registration of the containerd container factory successfully Jan 23 01:11:52.454226 kubelet[2296]: I0123 01:11:52.454220 2296 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:11:52.464821 kubelet[2296]: E0123 01:11:52.464806 2296 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:11:52.470165 kubelet[2296]: E0123 01:11:52.470029 2296 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.8.14.188d37027db2aad1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.8.14,UID:10.200.8.14,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.200.8.14,},FirstTimestamp:2026-01-23 01:11:52.464796369 +0000 UTC m=+0.397183092,LastTimestamp:2026-01-23 01:11:52.464796369 +0000 UTC m=+0.397183092,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.8.14,}" Jan 23 01:11:52.470165 kubelet[2296]: I0123 01:11:52.470126 2296 
cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:11:52.470165 kubelet[2296]: I0123 01:11:52.470133 2296 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:11:52.470165 kubelet[2296]: I0123 01:11:52.470147 2296 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:11:52.481094 kubelet[2296]: I0123 01:11:52.480906 2296 policy_none.go:49] "None policy: Start" Jan 23 01:11:52.481094 kubelet[2296]: I0123 01:11:52.480933 2296 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:11:52.481094 kubelet[2296]: I0123 01:11:52.480943 2296 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:11:52.489281 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:11:52.498856 kubelet[2296]: I0123 01:11:52.498707 2296 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:11:52.499802 kubelet[2296]: I0123 01:11:52.499776 2296 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:11:52.499802 kubelet[2296]: I0123 01:11:52.499804 2296 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:11:52.499931 kubelet[2296]: I0123 01:11:52.499820 2296 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:11:52.499931 kubelet[2296]: I0123 01:11:52.499849 2296 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:11:52.499971 kubelet[2296]: E0123 01:11:52.499929 2296 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:11:52.506763 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:11:52.518397 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 23 01:11:52.519457 kubelet[2296]: I0123 01:11:52.519439 2296 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:11:52.519573 kubelet[2296]: I0123 01:11:52.519563 2296 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:11:52.519604 kubelet[2296]: I0123 01:11:52.519573 2296 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:11:52.520103 kubelet[2296]: I0123 01:11:52.520006 2296 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:11:52.520963 kubelet[2296]: E0123 01:11:52.520947 2296 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:11:52.521039 kubelet[2296]: E0123 01:11:52.520978 2296 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.14\" not found" Jan 23 01:11:52.621135 kubelet[2296]: I0123 01:11:52.621099 2296 kubelet_node_status.go:75] "Attempting to register node" node="10.200.8.14" Jan 23 01:11:52.625351 kubelet[2296]: I0123 01:11:52.625332 2296 kubelet_node_status.go:78] "Successfully registered node" node="10.200.8.14" Jan 23 01:11:52.625351 kubelet[2296]: E0123 01:11:52.625353 2296 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.200.8.14\": node \"10.200.8.14\" not found" Jan 23 01:11:52.640846 kubelet[2296]: E0123 01:11:52.640572 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:52.741110 kubelet[2296]: E0123 01:11:52.741079 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:52.841612 kubelet[2296]: E0123 01:11:52.841571 2296 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:52.942371 kubelet[2296]: E0123 01:11:52.942345 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:53.042962 kubelet[2296]: E0123 01:11:53.042918 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:53.079667 sudo[2150]: pam_unix(sudo:session): session closed for user root Jan 23 01:11:53.143591 kubelet[2296]: E0123 01:11:53.143541 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:53.187413 sshd[2149]: Connection closed by 10.200.16.10 port 40016 Jan 23 01:11:53.188020 sshd-session[2146]: pam_unix(sshd:session): session closed for user core Jan 23 01:11:53.190976 systemd[1]: sshd@6-10.200.8.14:22-10.200.16.10:40016.service: Deactivated successfully. Jan 23 01:11:53.192820 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:11:53.193006 systemd[1]: session-9.scope: Consumed 412ms CPU time, 69.5M memory peak. Jan 23 01:11:53.195042 systemd-logind[1693]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:11:53.196503 systemd-logind[1693]: Removed session 9. 
Jan 23 01:11:53.244186 kubelet[2296]: E0123 01:11:53.244149 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:53.344799 kubelet[2296]: E0123 01:11:53.344756 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:53.389327 kubelet[2296]: I0123 01:11:53.389299 2296 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 01:11:53.389502 kubelet[2296]: W0123 01:11:53.389484 2296 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 01:11:53.389547 kubelet[2296]: W0123 01:11:53.389523 2296 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 01:11:53.428008 kubelet[2296]: E0123 01:11:53.427968 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:53.445291 kubelet[2296]: E0123 01:11:53.445209 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.8.14\" not found" Jan 23 01:11:53.546848 kubelet[2296]: I0123 01:11:53.546808 2296 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 23 01:11:53.547229 containerd[1715]: time="2026-01-23T01:11:53.547171137Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 01:11:53.547566 kubelet[2296]: I0123 01:11:53.547417 2296 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 23 01:11:54.426139 kubelet[2296]: I0123 01:11:54.426106 2296 apiserver.go:52] "Watching apiserver" Jan 23 01:11:54.428295 kubelet[2296]: E0123 01:11:54.428250 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:54.441731 systemd[1]: Created slice kubepods-besteffort-pod4cb83e6c_65b2_4c60_bdc6_4bc24b195c2f.slice - libcontainer container kubepods-besteffort-pod4cb83e6c_65b2_4c60_bdc6_4bc24b195c2f.slice. Jan 23 01:11:54.451041 systemd[1]: Created slice kubepods-burstable-pod9c6b8f4a_cd5b_4f2f_a3b8_09cc6936ac76.slice - libcontainer container kubepods-burstable-pod9c6b8f4a_cd5b_4f2f_a3b8_09cc6936ac76.slice. Jan 23 01:11:54.453329 kubelet[2296]: I0123 01:11:54.453303 2296 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:11:54.463152 kubelet[2296]: I0123 01:11:54.463118 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-host-proc-sys-kernel\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463152 kubelet[2296]: I0123 01:11:54.463144 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkppz\" (UniqueName: \"kubernetes.io/projected/4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f-kube-api-access-bkppz\") pod \"kube-proxy-xsxh6\" (UID: \"4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f\") " pod="kube-system/kube-proxy-xsxh6" Jan 23 01:11:54.463341 kubelet[2296]: I0123 01:11:54.463165 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-hostproc\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463341 kubelet[2296]: I0123 01:11:54.463180 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-cgroup\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463341 kubelet[2296]: I0123 01:11:54.463196 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-clustermesh-secrets\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463341 kubelet[2296]: I0123 01:11:54.463211 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-host-proc-sys-net\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463341 kubelet[2296]: I0123 01:11:54.463242 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-xtables-lock\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463341 kubelet[2296]: I0123 01:11:54.463262 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhn5\" (UniqueName: \"kubernetes.io/projected/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-kube-api-access-9bhn5\") pod \"cilium-qt59s\" (UID: 
\"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463551 kubelet[2296]: I0123 01:11:54.463287 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f-kube-proxy\") pod \"kube-proxy-xsxh6\" (UID: \"4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f\") " pod="kube-system/kube-proxy-xsxh6" Jan 23 01:11:54.463551 kubelet[2296]: I0123 01:11:54.463305 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f-lib-modules\") pod \"kube-proxy-xsxh6\" (UID: \"4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f\") " pod="kube-system/kube-proxy-xsxh6" Jan 23 01:11:54.463551 kubelet[2296]: I0123 01:11:54.463320 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-run\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463551 kubelet[2296]: I0123 01:11:54.463335 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-etc-cni-netd\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463551 kubelet[2296]: I0123 01:11:54.463349 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-lib-modules\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463551 kubelet[2296]: I0123 01:11:54.463364 2296 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-config-path\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463639 kubelet[2296]: I0123 01:11:54.463380 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-bpf-maps\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463639 kubelet[2296]: I0123 01:11:54.463395 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cni-path\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463639 kubelet[2296]: I0123 01:11:54.463410 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-hubble-tls\") pod \"cilium-qt59s\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") " pod="kube-system/cilium-qt59s" Jan 23 01:11:54.463639 kubelet[2296]: I0123 01:11:54.463426 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f-xtables-lock\") pod \"kube-proxy-xsxh6\" (UID: \"4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f\") " pod="kube-system/kube-proxy-xsxh6" Jan 23 01:11:54.749589 containerd[1715]: time="2026-01-23T01:11:54.749475439Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-xsxh6,Uid:4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:54.760033 containerd[1715]: time="2026-01-23T01:11:54.759999196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qt59s,Uid:9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76,Namespace:kube-system,Attempt:0,}" Jan 23 01:11:55.396233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276697786.mount: Deactivated successfully. Jan 23 01:11:55.423266 containerd[1715]: time="2026-01-23T01:11:55.423223608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:11:55.428425 containerd[1715]: time="2026-01-23T01:11:55.428388447Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 23 01:11:55.429361 kubelet[2296]: E0123 01:11:55.429330 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:55.431367 containerd[1715]: time="2026-01-23T01:11:55.431333619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:11:55.434200 containerd[1715]: time="2026-01-23T01:11:55.434172036Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:11:55.436910 containerd[1715]: time="2026-01-23T01:11:55.436605507Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 01:11:55.440874 containerd[1715]: time="2026-01-23T01:11:55.440848423Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:11:55.441404 containerd[1715]: time="2026-01-23T01:11:55.441382310Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 681.88516ms" Jan 23 01:11:55.444995 containerd[1715]: time="2026-01-23T01:11:55.444964869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 659.812113ms" Jan 23 01:11:55.485756 containerd[1715]: time="2026-01-23T01:11:55.485188184Z" level=info msg="connecting to shim 6dec1d6ee44524c8c8749dc9ed6e860a175b9eee9b73b040e5f07cfb260b112e" address="unix:///run/containerd/s/42c5db776c182173cc47146463000f5f7b3b0676aa5d78a1e3ba0c0d8f9fd282" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:55.485989 containerd[1715]: time="2026-01-23T01:11:55.485968956Z" level=info msg="connecting to shim 110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6" address="unix:///run/containerd/s/368edabc778c417aba677028f9ad8bbcbfc4f2ff00325f2b3afd648956931838" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:55.508006 systemd[1]: Started cri-containerd-6dec1d6ee44524c8c8749dc9ed6e860a175b9eee9b73b040e5f07cfb260b112e.scope - libcontainer container 6dec1d6ee44524c8c8749dc9ed6e860a175b9eee9b73b040e5f07cfb260b112e. 
Jan 23 01:11:55.512475 systemd[1]: Started cri-containerd-110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6.scope - libcontainer container 110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6. Jan 23 01:11:55.542293 containerd[1715]: time="2026-01-23T01:11:55.542264268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qt59s,Uid:9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76,Namespace:kube-system,Attempt:0,} returns sandbox id \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\"" Jan 23 01:11:55.544236 containerd[1715]: time="2026-01-23T01:11:55.544181789Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 01:11:55.544904 containerd[1715]: time="2026-01-23T01:11:55.544874960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsxh6,Uid:4cb83e6c-65b2-4c60-bdc6-4bc24b195c2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dec1d6ee44524c8c8749dc9ed6e860a175b9eee9b73b040e5f07cfb260b112e\"" Jan 23 01:11:56.429996 kubelet[2296]: E0123 01:11:56.429960 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:57.430913 kubelet[2296]: E0123 01:11:57.430874 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:58.432202 kubelet[2296]: E0123 01:11:58.432158 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:59.432464 kubelet[2296]: E0123 01:11:59.432429 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:59.835066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140813073.mount: Deactivated successfully. 
Jan 23 01:12:00.433583 kubelet[2296]: E0123 01:12:00.433546 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:01.300662 containerd[1715]: time="2026-01-23T01:12:01.300612954Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:12:01.303092 containerd[1715]: time="2026-01-23T01:12:01.302985799Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 01:12:01.305951 containerd[1715]: time="2026-01-23T01:12:01.305925938Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:12:01.307447 containerd[1715]: time="2026-01-23T01:12:01.307058841Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.762833168s" Jan 23 01:12:01.307447 containerd[1715]: time="2026-01-23T01:12:01.307089851Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 01:12:01.308231 containerd[1715]: time="2026-01-23T01:12:01.308165486Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 01:12:01.309285 containerd[1715]: time="2026-01-23T01:12:01.309254693Z" level=info msg="CreateContainer 
within sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:12:01.331907 containerd[1715]: time="2026-01-23T01:12:01.331879681Z" level=info msg="Container bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:01.348777 containerd[1715]: time="2026-01-23T01:12:01.348753554Z" level=info msg="CreateContainer within sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\"" Jan 23 01:12:01.349260 containerd[1715]: time="2026-01-23T01:12:01.349237229Z" level=info msg="StartContainer for \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\"" Jan 23 01:12:01.349935 containerd[1715]: time="2026-01-23T01:12:01.349907084Z" level=info msg="connecting to shim bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028" address="unix:///run/containerd/s/368edabc778c417aba677028f9ad8bbcbfc4f2ff00325f2b3afd648956931838" protocol=ttrpc version=3 Jan 23 01:12:01.374982 systemd[1]: Started cri-containerd-bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028.scope - libcontainer container bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028. Jan 23 01:12:01.405426 containerd[1715]: time="2026-01-23T01:12:01.405403977Z" level=info msg="StartContainer for \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\" returns successfully" Jan 23 01:12:01.407941 systemd[1]: cri-containerd-bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028.scope: Deactivated successfully. 
Jan 23 01:12:01.411053 containerd[1715]: time="2026-01-23T01:12:01.411023815Z" level=info msg="received container exit event container_id:\"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\" id:\"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\" pid:2475 exited_at:{seconds:1769130721 nanos:410660183}" Jan 23 01:12:01.425770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028-rootfs.mount: Deactivated successfully. Jan 23 01:12:01.434562 kubelet[2296]: E0123 01:12:01.434539 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:02.324846 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 23 01:12:02.435233 kubelet[2296]: E0123 01:12:02.435203 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:03.435658 kubelet[2296]: E0123 01:12:03.435618 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:04.436266 kubelet[2296]: E0123 01:12:04.436224 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:04.530450 containerd[1715]: time="2026-01-23T01:12:04.530366448Z" level=info msg="CreateContainer within sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:12:04.554669 containerd[1715]: time="2026-01-23T01:12:04.554628851Z" level=info msg="Container e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:04.573656 containerd[1715]: time="2026-01-23T01:12:04.573617365Z" level=info msg="CreateContainer within sandbox 
\"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\"" Jan 23 01:12:04.574849 containerd[1715]: time="2026-01-23T01:12:04.574200991Z" level=info msg="StartContainer for \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\"" Jan 23 01:12:04.574947 containerd[1715]: time="2026-01-23T01:12:04.574921932Z" level=info msg="connecting to shim e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b" address="unix:///run/containerd/s/368edabc778c417aba677028f9ad8bbcbfc4f2ff00325f2b3afd648956931838" protocol=ttrpc version=3 Jan 23 01:12:04.598109 systemd[1]: Started cri-containerd-e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b.scope - libcontainer container e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b. Jan 23 01:12:04.636637 containerd[1715]: time="2026-01-23T01:12:04.636609808Z" level=info msg="StartContainer for \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\" returns successfully" Jan 23 01:12:04.647014 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:12:04.647388 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:12:04.650518 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:12:04.652676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:12:04.655543 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:12:04.656720 systemd[1]: cri-containerd-e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b.scope: Deactivated successfully. 
Jan 23 01:12:04.657999 containerd[1715]: time="2026-01-23T01:12:04.657791689Z" level=info msg="received container exit event container_id:\"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\" id:\"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\" pid:2526 exited_at:{seconds:1769130724 nanos:655441146}" Jan 23 01:12:04.684476 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:12:05.070070 update_engine[1694]: I20260123 01:12:05.070019 1694 update_attempter.cc:509] Updating boot flags... Jan 23 01:12:05.295682 containerd[1715]: time="2026-01-23T01:12:05.295638499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:12:05.298087 containerd[1715]: time="2026-01-23T01:12:05.297950924Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 23 01:12:05.300647 containerd[1715]: time="2026-01-23T01:12:05.300624531Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:12:05.304415 containerd[1715]: time="2026-01-23T01:12:05.304387180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:12:05.304780 containerd[1715]: time="2026-01-23T01:12:05.304757688Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.99656289s" Jan 23 01:12:05.304837 containerd[1715]: 
time="2026-01-23T01:12:05.304787608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 01:12:05.306600 containerd[1715]: time="2026-01-23T01:12:05.306574260Z" level=info msg="CreateContainer within sandbox \"6dec1d6ee44524c8c8749dc9ed6e860a175b9eee9b73b040e5f07cfb260b112e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:12:05.322734 containerd[1715]: time="2026-01-23T01:12:05.322663208Z" level=info msg="Container 068a041a0056bcec4f8b3c6a0ed5d6845fe101cae2bb508211d02820cc474a6a: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:05.337839 containerd[1715]: time="2026-01-23T01:12:05.337792332Z" level=info msg="CreateContainer within sandbox \"6dec1d6ee44524c8c8749dc9ed6e860a175b9eee9b73b040e5f07cfb260b112e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"068a041a0056bcec4f8b3c6a0ed5d6845fe101cae2bb508211d02820cc474a6a\"" Jan 23 01:12:05.338240 containerd[1715]: time="2026-01-23T01:12:05.338189680Z" level=info msg="StartContainer for \"068a041a0056bcec4f8b3c6a0ed5d6845fe101cae2bb508211d02820cc474a6a\"" Jan 23 01:12:05.340356 containerd[1715]: time="2026-01-23T01:12:05.340290327Z" level=info msg="connecting to shim 068a041a0056bcec4f8b3c6a0ed5d6845fe101cae2bb508211d02820cc474a6a" address="unix:///run/containerd/s/42c5db776c182173cc47146463000f5f7b3b0676aa5d78a1e3ba0c0d8f9fd282" protocol=ttrpc version=3 Jan 23 01:12:05.357031 systemd[1]: Started cri-containerd-068a041a0056bcec4f8b3c6a0ed5d6845fe101cae2bb508211d02820cc474a6a.scope - libcontainer container 068a041a0056bcec4f8b3c6a0ed5d6845fe101cae2bb508211d02820cc474a6a. Jan 23 01:12:05.374152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b-rootfs.mount: Deactivated successfully. 
Jan 23 01:12:05.374242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446561944.mount: Deactivated successfully. Jan 23 01:12:05.413622 containerd[1715]: time="2026-01-23T01:12:05.413564309Z" level=info msg="StartContainer for \"068a041a0056bcec4f8b3c6a0ed5d6845fe101cae2bb508211d02820cc474a6a\" returns successfully" Jan 23 01:12:05.436996 kubelet[2296]: E0123 01:12:05.436956 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:05.537099 containerd[1715]: time="2026-01-23T01:12:05.537063920Z" level=info msg="CreateContainer within sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:12:05.558004 containerd[1715]: time="2026-01-23T01:12:05.557507590Z" level=info msg="Container 099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:05.575994 containerd[1715]: time="2026-01-23T01:12:05.575885727Z" level=info msg="CreateContainer within sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\"" Jan 23 01:12:05.576436 containerd[1715]: time="2026-01-23T01:12:05.576411709Z" level=info msg="StartContainer for \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\"" Jan 23 01:12:05.577609 containerd[1715]: time="2026-01-23T01:12:05.577579836Z" level=info msg="connecting to shim 099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb" address="unix:///run/containerd/s/368edabc778c417aba677028f9ad8bbcbfc4f2ff00325f2b3afd648956931838" protocol=ttrpc version=3 Jan 23 01:12:05.596117 systemd[1]: Started cri-containerd-099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb.scope - libcontainer container 
099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb. Jan 23 01:12:05.660783 systemd[1]: cri-containerd-099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb.scope: Deactivated successfully. Jan 23 01:12:05.663252 containerd[1715]: time="2026-01-23T01:12:05.663200177Z" level=info msg="received container exit event container_id:\"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\" id:\"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\" pid:2668 exited_at:{seconds:1769130725 nanos:663054453}" Jan 23 01:12:05.663800 containerd[1715]: time="2026-01-23T01:12:05.663779969Z" level=info msg="StartContainer for \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\" returns successfully" Jan 23 01:12:05.686902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb-rootfs.mount: Deactivated successfully. Jan 23 01:12:06.437346 kubelet[2296]: E0123 01:12:06.437310 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:06.472323 waagent[1892]: 2026-01-23T01:12:06.472280Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 01:12:06.486819 waagent[1892]: 2026-01-23T01:12:06.485528Z INFO ExtHandler Jan 23 01:12:06.486819 waagent[1892]: 2026-01-23T01:12:06.485610Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0f041e26-8307-4b8a-89e1-86f2f7bcad33 eTag: 8227546221775084421 source: Fabric] Jan 23 01:12:06.486819 waagent[1892]: 2026-01-23T01:12:06.485898Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 01:12:06.486819 waagent[1892]: 2026-01-23T01:12:06.486383Z INFO ExtHandler Jan 23 01:12:06.486819 waagent[1892]: 2026-01-23T01:12:06.486435Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 01:12:06.532487 waagent[1892]: 2026-01-23T01:12:06.532460Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 01:12:06.545814 containerd[1715]: time="2026-01-23T01:12:06.545695497Z" level=info msg="CreateContainer within sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:12:06.559982 kubelet[2296]: I0123 01:12:06.559930 2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xsxh6" podStartSLOduration=4.800212457 podStartE2EDuration="14.559911651s" podCreationTimestamp="2026-01-23 01:11:52 +0000 UTC" firstStartedPulling="2026-01-23 01:11:55.545648314 +0000 UTC m=+3.478035044" lastFinishedPulling="2026-01-23 01:12:05.305347508 +0000 UTC m=+13.237734238" observedRunningTime="2026-01-23 01:12:05.591439794 +0000 UTC m=+13.523826534" watchObservedRunningTime="2026-01-23 01:12:06.559911651 +0000 UTC m=+14.492298389" Jan 23 01:12:06.566858 containerd[1715]: time="2026-01-23T01:12:06.564661692Z" level=info msg="Container 26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:12:06.583886 containerd[1715]: time="2026-01-23T01:12:06.583837799Z" level=info msg="CreateContainer within sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\"" Jan 23 01:12:06.584894 containerd[1715]: time="2026-01-23T01:12:06.584379858Z" level=info msg="StartContainer for \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\"" Jan 23 01:12:06.585413 
containerd[1715]: time="2026-01-23T01:12:06.585375744Z" level=info msg="connecting to shim 26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8" address="unix:///run/containerd/s/368edabc778c417aba677028f9ad8bbcbfc4f2ff00325f2b3afd648956931838" protocol=ttrpc version=3 Jan 23 01:12:06.607004 systemd[1]: Started cri-containerd-26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8.scope - libcontainer container 26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8. Jan 23 01:12:06.614225 waagent[1892]: 2026-01-23T01:12:06.614177Z INFO ExtHandler Downloaded certificate {'thumbprint': '0C352C2EFBA09B05AD980DEBD8506C40F8267E21', 'hasPrivateKey': True} Jan 23 01:12:06.614872 waagent[1892]: 2026-01-23T01:12:06.614784Z INFO ExtHandler Fetch goal state completed Jan 23 01:12:06.615316 waagent[1892]: 2026-01-23T01:12:06.615277Z INFO ExtHandler ExtHandler Jan 23 01:12:06.615514 waagent[1892]: 2026-01-23T01:12:06.615478Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 65e05391-16ad-45bf-93e7-e96ecc9cd929 correlation 84b4aa88-03f0-4df4-bda1-48c7d9c44ad7 created: 2026-01-23T01:12:00.578033Z] Jan 23 01:12:06.616000 waagent[1892]: 2026-01-23T01:12:06.615960Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 01:12:06.616851 waagent[1892]: 2026-01-23T01:12:06.616684Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Jan 23 01:12:06.634048 systemd[1]: cri-containerd-26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8.scope: Deactivated successfully. 
Jan 23 01:12:06.638501 containerd[1715]: time="2026-01-23T01:12:06.637407732Z" level=info msg="received container exit event container_id:\"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\" id:\"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\" pid:2804 exited_at:{seconds:1769130726 nanos:633415486}"
Jan 23 01:12:06.644423 containerd[1715]: time="2026-01-23T01:12:06.644399513Z" level=info msg="StartContainer for \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\" returns successfully"
Jan 23 01:12:06.654142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8-rootfs.mount: Deactivated successfully.
Jan 23 01:12:07.437686 kubelet[2296]: E0123 01:12:07.437633 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:07.550097 containerd[1715]: time="2026-01-23T01:12:07.550049483Z" level=info msg="CreateContainer within sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 01:12:07.569851 containerd[1715]: time="2026-01-23T01:12:07.569322299Z" level=info msg="Container d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:07.589484 containerd[1715]: time="2026-01-23T01:12:07.589452116Z" level=info msg="CreateContainer within sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\""
Jan 23 01:12:07.589996 containerd[1715]: time="2026-01-23T01:12:07.589962477Z" level=info msg="StartContainer for \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\""
Jan 23 01:12:07.590777 containerd[1715]: time="2026-01-23T01:12:07.590745712Z" level=info msg="connecting to shim d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926" address="unix:///run/containerd/s/368edabc778c417aba677028f9ad8bbcbfc4f2ff00325f2b3afd648956931838" protocol=ttrpc version=3
Jan 23 01:12:07.610004 systemd[1]: Started cri-containerd-d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926.scope - libcontainer container d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926.
Jan 23 01:12:07.652089 containerd[1715]: time="2026-01-23T01:12:07.652037727Z" level=info msg="StartContainer for \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\" returns successfully"
Jan 23 01:12:07.707408 kubelet[2296]: I0123 01:12:07.707037 2296 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 23 01:12:08.061889 kernel: Initializing XFRM netlink socket
Jan 23 01:12:08.438526 kubelet[2296]: E0123 01:12:08.438477 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:09.439367 kubelet[2296]: E0123 01:12:09.439305 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:09.742669 systemd-networkd[1364]: cilium_host: Link UP
Jan 23 01:12:09.744310 systemd-networkd[1364]: cilium_net: Link UP
Jan 23 01:12:09.744715 systemd-networkd[1364]: cilium_net: Gained carrier
Jan 23 01:12:09.745058 systemd-networkd[1364]: cilium_host: Gained carrier
Jan 23 01:12:09.802227 systemd-networkd[1364]: cilium_host: Gained IPv6LL
Jan 23 01:12:09.889667 systemd-networkd[1364]: cilium_vxlan: Link UP
Jan 23 01:12:09.889675 systemd-networkd[1364]: cilium_vxlan: Gained carrier
Jan 23 01:12:09.978996 systemd-networkd[1364]: cilium_net: Gained IPv6LL
Jan 23 01:12:10.188880 kernel: NET: Registered PF_ALG protocol family
Jan 23 01:12:10.440155 kubelet[2296]: E0123 01:12:10.440038 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:10.728380 systemd-networkd[1364]: lxc_health: Link UP
Jan 23 01:12:10.741516 systemd-networkd[1364]: lxc_health: Gained carrier
Jan 23 01:12:10.782469 kubelet[2296]: I0123 01:12:10.782414 2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qt59s" podStartSLOduration=13.018364074 podStartE2EDuration="18.782395378s" podCreationTimestamp="2026-01-23 01:11:52 +0000 UTC" firstStartedPulling="2026-01-23 01:11:55.543812915 +0000 UTC m=+3.476199637" lastFinishedPulling="2026-01-23 01:12:01.307844194 +0000 UTC m=+9.240230941" observedRunningTime="2026-01-23 01:12:08.566783672 +0000 UTC m=+16.499170406" watchObservedRunningTime="2026-01-23 01:12:10.782395378 +0000 UTC m=+18.714782115"
Jan 23 01:12:11.441170 kubelet[2296]: E0123 01:12:11.441126 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:11.883083 systemd-networkd[1364]: lxc_health: Gained IPv6LL
Jan 23 01:12:11.947057 systemd-networkd[1364]: cilium_vxlan: Gained IPv6LL
Jan 23 01:12:12.423799 kubelet[2296]: E0123 01:12:12.423741 2296 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:12.442300 kubelet[2296]: E0123 01:12:12.442251 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:13.443254 kubelet[2296]: E0123 01:12:13.443208 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:14.443662 kubelet[2296]: E0123 01:12:14.443611 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:14.813361 systemd[1]: Created slice kubepods-besteffort-pod7bc2b1a0_0c29_41d3_8c06_2603c6f759b1.slice - libcontainer container kubepods-besteffort-pod7bc2b1a0_0c29_41d3_8c06_2603c6f759b1.slice.
Jan 23 01:12:14.887779 kubelet[2296]: I0123 01:12:14.887744 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpzll\" (UniqueName: \"kubernetes.io/projected/7bc2b1a0-0c29-41d3-8c06-2603c6f759b1-kube-api-access-fpzll\") pod \"nginx-deployment-7fcdb87857-qvxf6\" (UID: \"7bc2b1a0-0c29-41d3-8c06-2603c6f759b1\") " pod="default/nginx-deployment-7fcdb87857-qvxf6"
Jan 23 01:12:15.116548 containerd[1715]: time="2026-01-23T01:12:15.116432376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qvxf6,Uid:7bc2b1a0-0c29-41d3-8c06-2603c6f759b1,Namespace:default,Attempt:0,}"
Jan 23 01:12:15.141851 systemd-networkd[1364]: lxc8f7cf4a6e6b7: Link UP
Jan 23 01:12:15.148972 kernel: eth0: renamed from tmpf3ec0
Jan 23 01:12:15.149192 systemd-networkd[1364]: lxc8f7cf4a6e6b7: Gained carrier
Jan 23 01:12:15.304343 containerd[1715]: time="2026-01-23T01:12:15.304298089Z" level=info msg="connecting to shim f3ec00e88a7ba8c04d1f55219dd4bba6c3dd8a7c1aa8d2a38704f8e8e70b4dce" address="unix:///run/containerd/s/6a40f1750328fd24dcf9c3f76f3a6a41b9c1ce8ad99db6a53a263f4b6cf9a76b" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:12:15.324079 systemd[1]: Started cri-containerd-f3ec00e88a7ba8c04d1f55219dd4bba6c3dd8a7c1aa8d2a38704f8e8e70b4dce.scope - libcontainer container f3ec00e88a7ba8c04d1f55219dd4bba6c3dd8a7c1aa8d2a38704f8e8e70b4dce.
Jan 23 01:12:15.364087 containerd[1715]: time="2026-01-23T01:12:15.364050786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-qvxf6,Uid:7bc2b1a0-0c29-41d3-8c06-2603c6f759b1,Namespace:default,Attempt:0,} returns sandbox id \"f3ec00e88a7ba8c04d1f55219dd4bba6c3dd8a7c1aa8d2a38704f8e8e70b4dce\""
Jan 23 01:12:15.365282 containerd[1715]: time="2026-01-23T01:12:15.365213471Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 23 01:12:15.444295 kubelet[2296]: E0123 01:12:15.444248 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:16.362989 systemd-networkd[1364]: lxc8f7cf4a6e6b7: Gained IPv6LL
Jan 23 01:12:16.445425 kubelet[2296]: E0123 01:12:16.445386 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:17.347813 kubelet[2296]: I0123 01:12:17.347765 2296 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 01:12:17.446327 kubelet[2296]: E0123 01:12:17.446294 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:17.522519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2449104333.mount: Deactivated successfully.
Jan 23 01:12:18.257982 containerd[1715]: time="2026-01-23T01:12:18.257934163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:12:18.260452 containerd[1715]: time="2026-01-23T01:12:18.260368168Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480"
Jan 23 01:12:18.263151 containerd[1715]: time="2026-01-23T01:12:18.263106667Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:12:18.267746 containerd[1715]: time="2026-01-23T01:12:18.267702005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:12:18.268477 containerd[1715]: time="2026-01-23T01:12:18.268449638Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 2.903209389s"
Jan 23 01:12:18.268521 containerd[1715]: time="2026-01-23T01:12:18.268478431Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\""
Jan 23 01:12:18.270465 containerd[1715]: time="2026-01-23T01:12:18.270424664Z" level=info msg="CreateContainer within sandbox \"f3ec00e88a7ba8c04d1f55219dd4bba6c3dd8a7c1aa8d2a38704f8e8e70b4dce\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 23 01:12:18.286689 containerd[1715]: time="2026-01-23T01:12:18.286178346Z" level=info msg="Container 8536072beeea0418f171b20e71d5d372d8f1126344238481c13127d2fdc7493b: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:18.298470 containerd[1715]: time="2026-01-23T01:12:18.298442624Z" level=info msg="CreateContainer within sandbox \"f3ec00e88a7ba8c04d1f55219dd4bba6c3dd8a7c1aa8d2a38704f8e8e70b4dce\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"8536072beeea0418f171b20e71d5d372d8f1126344238481c13127d2fdc7493b\""
Jan 23 01:12:18.298938 containerd[1715]: time="2026-01-23T01:12:18.298883946Z" level=info msg="StartContainer for \"8536072beeea0418f171b20e71d5d372d8f1126344238481c13127d2fdc7493b\""
Jan 23 01:12:18.299563 containerd[1715]: time="2026-01-23T01:12:18.299540805Z" level=info msg="connecting to shim 8536072beeea0418f171b20e71d5d372d8f1126344238481c13127d2fdc7493b" address="unix:///run/containerd/s/6a40f1750328fd24dcf9c3f76f3a6a41b9c1ce8ad99db6a53a263f4b6cf9a76b" protocol=ttrpc version=3
Jan 23 01:12:18.317976 systemd[1]: Started cri-containerd-8536072beeea0418f171b20e71d5d372d8f1126344238481c13127d2fdc7493b.scope - libcontainer container 8536072beeea0418f171b20e71d5d372d8f1126344238481c13127d2fdc7493b.
Jan 23 01:12:18.343135 containerd[1715]: time="2026-01-23T01:12:18.343086390Z" level=info msg="StartContainer for \"8536072beeea0418f171b20e71d5d372d8f1126344238481c13127d2fdc7493b\" returns successfully"
Jan 23 01:12:18.446493 kubelet[2296]: E0123 01:12:18.446433 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:18.582567 kubelet[2296]: I0123 01:12:18.582428 2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-qvxf6" podStartSLOduration=1.677859258 podStartE2EDuration="4.582415668s" podCreationTimestamp="2026-01-23 01:12:14 +0000 UTC" firstStartedPulling="2026-01-23 01:12:15.364857873 +0000 UTC m=+23.297244601" lastFinishedPulling="2026-01-23 01:12:18.269414275 +0000 UTC m=+26.201801011" observedRunningTime="2026-01-23 01:12:18.582188838 +0000 UTC m=+26.514575570" watchObservedRunningTime="2026-01-23 01:12:18.582415668 +0000 UTC m=+26.514802469"
Jan 23 01:12:19.447605 kubelet[2296]: E0123 01:12:19.447544 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:20.448697 kubelet[2296]: E0123 01:12:20.448637 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:21.449864 kubelet[2296]: E0123 01:12:21.449787 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:22.450017 kubelet[2296]: E0123 01:12:22.449957 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:23.450661 kubelet[2296]: E0123 01:12:23.450606 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:24.451253 kubelet[2296]: E0123 01:12:24.451203 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:25.452139 kubelet[2296]: E0123 01:12:25.452074 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:25.707598 systemd[1]: Created slice kubepods-besteffort-pod65d9813d_5183_4478_99c3_a231b1bf8257.slice - libcontainer container kubepods-besteffort-pod65d9813d_5183_4478_99c3_a231b1bf8257.slice.
Jan 23 01:12:25.741460 kubelet[2296]: I0123 01:12:25.741393 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk6rc\" (UniqueName: \"kubernetes.io/projected/65d9813d-5183-4478-99c3-a231b1bf8257-kube-api-access-sk6rc\") pod \"nfs-server-provisioner-0\" (UID: \"65d9813d-5183-4478-99c3-a231b1bf8257\") " pod="default/nfs-server-provisioner-0"
Jan 23 01:12:25.741460 kubelet[2296]: I0123 01:12:25.741443 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/65d9813d-5183-4478-99c3-a231b1bf8257-data\") pod \"nfs-server-provisioner-0\" (UID: \"65d9813d-5183-4478-99c3-a231b1bf8257\") " pod="default/nfs-server-provisioner-0"
Jan 23 01:12:26.011117 containerd[1715]: time="2026-01-23T01:12:26.011008232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:65d9813d-5183-4478-99c3-a231b1bf8257,Namespace:default,Attempt:0,}"
Jan 23 01:12:26.038366 kernel: eth0: renamed from tmp242c5
Jan 23 01:12:26.038179 systemd-networkd[1364]: lxce2ce285599c0: Link UP
Jan 23 01:12:26.042583 systemd-networkd[1364]: lxce2ce285599c0: Gained carrier
Jan 23 01:12:26.161657 containerd[1715]: time="2026-01-23T01:12:26.161613516Z" level=info msg="connecting to shim 242c5f12e50c4e150f852ecccbd6c479ff56d58f473c51ce45b83af8a3e38aa3" address="unix:///run/containerd/s/12a1302b029a7391c9b9289f4277bb012445eafadc413dac54c7792cb02af464" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:12:26.182001 systemd[1]: Started cri-containerd-242c5f12e50c4e150f852ecccbd6c479ff56d58f473c51ce45b83af8a3e38aa3.scope - libcontainer container 242c5f12e50c4e150f852ecccbd6c479ff56d58f473c51ce45b83af8a3e38aa3.
Jan 23 01:12:26.220796 containerd[1715]: time="2026-01-23T01:12:26.220763186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:65d9813d-5183-4478-99c3-a231b1bf8257,Namespace:default,Attempt:0,} returns sandbox id \"242c5f12e50c4e150f852ecccbd6c479ff56d58f473c51ce45b83af8a3e38aa3\""
Jan 23 01:12:26.222113 containerd[1715]: time="2026-01-23T01:12:26.222076038Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 23 01:12:26.453008 kubelet[2296]: E0123 01:12:26.452973 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:27.453493 kubelet[2296]: E0123 01:12:27.453449 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:27.884052 systemd-networkd[1364]: lxce2ce285599c0: Gained IPv6LL
Jan 23 01:12:28.453587 kubelet[2296]: E0123 01:12:28.453549 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:28.469769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881949315.mount: Deactivated successfully.
Jan 23 01:12:29.454552 kubelet[2296]: E0123 01:12:29.454497 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:29.656455 containerd[1715]: time="2026-01-23T01:12:29.656404144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:12:29.658868 containerd[1715]: time="2026-01-23T01:12:29.658840260Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414"
Jan 23 01:12:29.662650 containerd[1715]: time="2026-01-23T01:12:29.662299767Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:12:29.667066 containerd[1715]: time="2026-01-23T01:12:29.666898562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:12:29.667740 containerd[1715]: time="2026-01-23T01:12:29.667631015Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.445525076s"
Jan 23 01:12:29.667740 containerd[1715]: time="2026-01-23T01:12:29.667660787Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 23 01:12:29.669534 containerd[1715]: time="2026-01-23T01:12:29.669502336Z" level=info msg="CreateContainer within sandbox \"242c5f12e50c4e150f852ecccbd6c479ff56d58f473c51ce45b83af8a3e38aa3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 23 01:12:29.683844 containerd[1715]: time="2026-01-23T01:12:29.683233920Z" level=info msg="Container 00575752df241c45d147c3123234548359ab7f79f002eb32a3fa45ffa6c23b4f: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:29.699347 containerd[1715]: time="2026-01-23T01:12:29.699321255Z" level=info msg="CreateContainer within sandbox \"242c5f12e50c4e150f852ecccbd6c479ff56d58f473c51ce45b83af8a3e38aa3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"00575752df241c45d147c3123234548359ab7f79f002eb32a3fa45ffa6c23b4f\""
Jan 23 01:12:29.699869 containerd[1715]: time="2026-01-23T01:12:29.699850697Z" level=info msg="StartContainer for \"00575752df241c45d147c3123234548359ab7f79f002eb32a3fa45ffa6c23b4f\""
Jan 23 01:12:29.700610 containerd[1715]: time="2026-01-23T01:12:29.700585024Z" level=info msg="connecting to shim 00575752df241c45d147c3123234548359ab7f79f002eb32a3fa45ffa6c23b4f" address="unix:///run/containerd/s/12a1302b029a7391c9b9289f4277bb012445eafadc413dac54c7792cb02af464" protocol=ttrpc version=3
Jan 23 01:12:29.719975 systemd[1]: Started cri-containerd-00575752df241c45d147c3123234548359ab7f79f002eb32a3fa45ffa6c23b4f.scope - libcontainer container 00575752df241c45d147c3123234548359ab7f79f002eb32a3fa45ffa6c23b4f.
Jan 23 01:12:29.747911 containerd[1715]: time="2026-01-23T01:12:29.747841896Z" level=info msg="StartContainer for \"00575752df241c45d147c3123234548359ab7f79f002eb32a3fa45ffa6c23b4f\" returns successfully"
Jan 23 01:12:30.455443 kubelet[2296]: E0123 01:12:30.455383 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:30.602792 kubelet[2296]: I0123 01:12:30.602735 2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.156207017 podStartE2EDuration="5.602723639s" podCreationTimestamp="2026-01-23 01:12:25 +0000 UTC" firstStartedPulling="2026-01-23 01:12:26.221797988 +0000 UTC m=+34.154184716" lastFinishedPulling="2026-01-23 01:12:29.668314596 +0000 UTC m=+37.600701338" observedRunningTime="2026-01-23 01:12:30.602411978 +0000 UTC m=+38.534798713" watchObservedRunningTime="2026-01-23 01:12:30.602723639 +0000 UTC m=+38.535110456"
Jan 23 01:12:31.455724 kubelet[2296]: E0123 01:12:31.455686 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:32.424417 kubelet[2296]: E0123 01:12:32.424359 2296 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:32.456676 kubelet[2296]: E0123 01:12:32.456639 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:33.457052 kubelet[2296]: E0123 01:12:33.457015 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:34.457675 kubelet[2296]: E0123 01:12:34.457624 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:35.458079 kubelet[2296]: E0123 01:12:35.458029 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:36.459202 kubelet[2296]: E0123 01:12:36.459162 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:37.459549 kubelet[2296]: E0123 01:12:37.459495 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:38.459788 kubelet[2296]: E0123 01:12:38.459745 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:39.339120 systemd[1]: Created slice kubepods-besteffort-podd5109ecf_1ad8_4e4f_81b2_4de9af6ee8cd.slice - libcontainer container kubepods-besteffort-podd5109ecf_1ad8_4e4f_81b2_4de9af6ee8cd.slice.
Jan 23 01:12:39.415750 kubelet[2296]: I0123 01:12:39.415679 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-45af7e10-47d8-426e-baa1-9d5e55a211aa\" (UniqueName: \"kubernetes.io/nfs/d5109ecf-1ad8-4e4f-81b2-4de9af6ee8cd-pvc-45af7e10-47d8-426e-baa1-9d5e55a211aa\") pod \"test-pod-1\" (UID: \"d5109ecf-1ad8-4e4f-81b2-4de9af6ee8cd\") " pod="default/test-pod-1"
Jan 23 01:12:39.415750 kubelet[2296]: I0123 01:12:39.415716 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwv4\" (UniqueName: \"kubernetes.io/projected/d5109ecf-1ad8-4e4f-81b2-4de9af6ee8cd-kube-api-access-xvwv4\") pod \"test-pod-1\" (UID: \"d5109ecf-1ad8-4e4f-81b2-4de9af6ee8cd\") " pod="default/test-pod-1"
Jan 23 01:12:39.460188 kubelet[2296]: E0123 01:12:39.460155 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:39.598961 kernel: netfs: FS-Cache loaded
Jan 23 01:12:39.680018 kernel: RPC: Registered named UNIX socket transport module.
Jan 23 01:12:39.680650 kernel: RPC: Registered udp transport module.
Jan 23 01:12:39.680673 kernel: RPC: Registered tcp transport module.
Jan 23 01:12:39.680746 kernel: RPC: Registered tcp-with-tls transport module.
Jan 23 01:12:39.681965 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 23 01:12:39.936562 kernel: NFS: Registering the id_resolver key type
Jan 23 01:12:39.936675 kernel: Key type id_resolver registered
Jan 23 01:12:39.936690 kernel: Key type id_legacy registered
Jan 23 01:12:40.048469 nfsidmap[3636]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.2-n-0e89f49345'
Jan 23 01:12:40.060386 nfsidmap[3637]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.2-n-0e89f49345'
Jan 23 01:12:40.068609 nfsrahead[3640]: setting /var/lib/kubelet/pods/d5109ecf-1ad8-4e4f-81b2-4de9af6ee8cd/volumes/kubernetes.io~nfs/pvc-45af7e10-47d8-426e-baa1-9d5e55a211aa readahead to 128
Jan 23 01:12:40.241692 containerd[1715]: time="2026-01-23T01:12:40.241633633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d5109ecf-1ad8-4e4f-81b2-4de9af6ee8cd,Namespace:default,Attempt:0,}"
Jan 23 01:12:40.262882 systemd-networkd[1364]: lxc4614ef84cd4d: Link UP
Jan 23 01:12:40.267900 kernel: eth0: renamed from tmp17d00
Jan 23 01:12:40.269441 systemd-networkd[1364]: lxc4614ef84cd4d: Gained carrier
Jan 23 01:12:40.422850 containerd[1715]: time="2026-01-23T01:12:40.422178651Z" level=info msg="connecting to shim 17d008449aa48fe0590b0f1e7e75655f7d2b1d75cbec98ab8164a321ce9ae9e8" address="unix:///run/containerd/s/f0a3d5141e178de48b722b7273b903a9d80c42c6f2db38e4a48e7fee4ecc1eec" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:12:40.442002 systemd[1]: Started cri-containerd-17d008449aa48fe0590b0f1e7e75655f7d2b1d75cbec98ab8164a321ce9ae9e8.scope - libcontainer container 17d008449aa48fe0590b0f1e7e75655f7d2b1d75cbec98ab8164a321ce9ae9e8.
Jan 23 01:12:40.461850 kubelet[2296]: E0123 01:12:40.461230 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:40.480604 containerd[1715]: time="2026-01-23T01:12:40.480574082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d5109ecf-1ad8-4e4f-81b2-4de9af6ee8cd,Namespace:default,Attempt:0,} returns sandbox id \"17d008449aa48fe0590b0f1e7e75655f7d2b1d75cbec98ab8164a321ce9ae9e8\""
Jan 23 01:12:40.481715 containerd[1715]: time="2026-01-23T01:12:40.481689490Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 23 01:12:40.826770 containerd[1715]: time="2026-01-23T01:12:40.826640742Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:12:40.829262 containerd[1715]: time="2026-01-23T01:12:40.829235229Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 23 01:12:40.830927 containerd[1715]: time="2026-01-23T01:12:40.830893579Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 349.17481ms"
Jan 23 01:12:40.830927 containerd[1715]: time="2026-01-23T01:12:40.830928855Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\""
Jan 23 01:12:40.832751 containerd[1715]: time="2026-01-23T01:12:40.832716105Z" level=info msg="CreateContainer within sandbox \"17d008449aa48fe0590b0f1e7e75655f7d2b1d75cbec98ab8164a321ce9ae9e8\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 23 01:12:40.849846 containerd[1715]: time="2026-01-23T01:12:40.849630672Z" level=info msg="Container aee0d987aba2b7ac34a4e2fea2e12bf9a051692046847acf6fcf5c960304674f: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:40.870360 containerd[1715]: time="2026-01-23T01:12:40.870326902Z" level=info msg="CreateContainer within sandbox \"17d008449aa48fe0590b0f1e7e75655f7d2b1d75cbec98ab8164a321ce9ae9e8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"aee0d987aba2b7ac34a4e2fea2e12bf9a051692046847acf6fcf5c960304674f\""
Jan 23 01:12:40.871135 containerd[1715]: time="2026-01-23T01:12:40.870990191Z" level=info msg="StartContainer for \"aee0d987aba2b7ac34a4e2fea2e12bf9a051692046847acf6fcf5c960304674f\""
Jan 23 01:12:40.871770 containerd[1715]: time="2026-01-23T01:12:40.871731991Z" level=info msg="connecting to shim aee0d987aba2b7ac34a4e2fea2e12bf9a051692046847acf6fcf5c960304674f" address="unix:///run/containerd/s/f0a3d5141e178de48b722b7273b903a9d80c42c6f2db38e4a48e7fee4ecc1eec" protocol=ttrpc version=3
Jan 23 01:12:40.896983 systemd[1]: Started cri-containerd-aee0d987aba2b7ac34a4e2fea2e12bf9a051692046847acf6fcf5c960304674f.scope - libcontainer container aee0d987aba2b7ac34a4e2fea2e12bf9a051692046847acf6fcf5c960304674f.
Jan 23 01:12:40.924416 containerd[1715]: time="2026-01-23T01:12:40.924397236Z" level=info msg="StartContainer for \"aee0d987aba2b7ac34a4e2fea2e12bf9a051692046847acf6fcf5c960304674f\" returns successfully"
Jan 23 01:12:41.462192 kubelet[2296]: E0123 01:12:41.462141 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:41.625382 kubelet[2296]: I0123 01:12:41.625322 2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.275160437 podStartE2EDuration="14.6253096s" podCreationTimestamp="2026-01-23 01:12:27 +0000 UTC" firstStartedPulling="2026-01-23 01:12:40.481355634 +0000 UTC m=+48.413742363" lastFinishedPulling="2026-01-23 01:12:40.8315048 +0000 UTC m=+48.763891526" observedRunningTime="2026-01-23 01:12:41.624782766 +0000 UTC m=+49.557169509" watchObservedRunningTime="2026-01-23 01:12:41.6253096 +0000 UTC m=+49.557696334"
Jan 23 01:12:42.219096 systemd-networkd[1364]: lxc4614ef84cd4d: Gained IPv6LL
Jan 23 01:12:42.462636 kubelet[2296]: E0123 01:12:42.462580 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:43.463616 kubelet[2296]: E0123 01:12:43.463514 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:44.464646 kubelet[2296]: E0123 01:12:44.464605 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:45.465475 kubelet[2296]: E0123 01:12:45.465415 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:46.466554 kubelet[2296]: E0123 01:12:46.466482 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:47.467023 kubelet[2296]: E0123 01:12:47.466986 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:48.467953 kubelet[2296]: E0123 01:12:48.467899 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:49.468596 kubelet[2296]: E0123 01:12:49.468553 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:49.980123 containerd[1715]: time="2026-01-23T01:12:49.980079852Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 01:12:49.986087 containerd[1715]: time="2026-01-23T01:12:49.986054786Z" level=info msg="StopContainer for \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\" with timeout 2 (s)"
Jan 23 01:12:49.986348 containerd[1715]: time="2026-01-23T01:12:49.986330980Z" level=info msg="Stop container \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\" with signal terminated"
Jan 23 01:12:49.992415 systemd-networkd[1364]: lxc_health: Link DOWN
Jan 23 01:12:49.992421 systemd-networkd[1364]: lxc_health: Lost carrier
Jan 23 01:12:50.006558 systemd[1]: cri-containerd-d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926.scope: Deactivated successfully.
Jan 23 01:12:50.006862 systemd[1]: cri-containerd-d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926.scope: Consumed 4.794s CPU time, 123.3M memory peak, 104K read from disk, 13.3M written to disk.
Jan 23 01:12:50.008345 containerd[1715]: time="2026-01-23T01:12:50.008313516Z" level=info msg="received container exit event container_id:\"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\" id:\"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\" pid:2842 exited_at:{seconds:1769130770 nanos:8085330}"
Jan 23 01:12:50.028461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926-rootfs.mount: Deactivated successfully.
Jan 23 01:12:50.469608 kubelet[2296]: E0123 01:12:50.469568 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:51.469734 kubelet[2296]: E0123 01:12:51.469694 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:51.508258 containerd[1715]: time="2026-01-23T01:12:51.508222999Z" level=info msg="StopContainer for \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\" returns successfully"
Jan 23 01:12:51.509088 containerd[1715]: time="2026-01-23T01:12:51.509051970Z" level=info msg="StopPodSandbox for \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\""
Jan 23 01:12:51.509167 containerd[1715]: time="2026-01-23T01:12:51.509123585Z" level=info msg="Container to stop \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:12:51.509167 containerd[1715]: time="2026-01-23T01:12:51.509136520Z" level=info msg="Container to stop \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:12:51.509167 containerd[1715]: time="2026-01-23T01:12:51.509145534Z" level=info msg="Container to stop \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:12:51.509167 containerd[1715]: time="2026-01-23T01:12:51.509155587Z" level=info msg="Container to stop \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:12:51.509167 containerd[1715]: time="2026-01-23T01:12:51.509164246Z" level=info msg="Container to stop \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:12:51.514851 systemd[1]: cri-containerd-110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6.scope: Deactivated successfully.
Jan 23 01:12:51.516021 containerd[1715]: time="2026-01-23T01:12:51.515988921Z" level=info msg="received sandbox exit event container_id:\"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" id:\"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" exit_status:137 exited_at:{seconds:1769130771 nanos:515286471}" monitor_name=podsandbox
Jan 23 01:12:51.532454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6-rootfs.mount: Deactivated successfully.
Jan 23 01:12:51.543202 containerd[1715]: time="2026-01-23T01:12:51.543096765Z" level=info msg="shim disconnected" id=110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6 namespace=k8s.io
Jan 23 01:12:51.543202 containerd[1715]: time="2026-01-23T01:12:51.543159023Z" level=warning msg="cleaning up after shim disconnected" id=110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6 namespace=k8s.io
Jan 23 01:12:51.543202 containerd[1715]: time="2026-01-23T01:12:51.543167206Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 01:12:51.554207 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6-shm.mount: Deactivated successfully.
Jan 23 01:12:51.555504 containerd[1715]: time="2026-01-23T01:12:51.554219783Z" level=info msg="received sandbox container exit event sandbox_id:\"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" exit_status:137 exited_at:{seconds:1769130771 nanos:515286471}" monitor_name=criService
Jan 23 01:12:51.556030 containerd[1715]: time="2026-01-23T01:12:51.556006876Z" level=info msg="TearDown network for sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" successfully"
Jan 23 01:12:51.556100 containerd[1715]: time="2026-01-23T01:12:51.556082650Z" level=info msg="StopPodSandbox for \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" returns successfully"
Jan 23 01:12:51.633031 kubelet[2296]: I0123 01:12:51.633010 2296 scope.go:117] "RemoveContainer" containerID="d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926"
Jan 23 01:12:51.634587 containerd[1715]: time="2026-01-23T01:12:51.634558489Z" level=info msg="RemoveContainer for \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\""
Jan 23 01:12:51.642061 containerd[1715]: time="2026-01-23T01:12:51.642021384Z" level=info msg="RemoveContainer for \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\" returns successfully"
Jan 23 01:12:51.642281 kubelet[2296]: I0123 01:12:51.642264 2296 scope.go:117] "RemoveContainer" containerID="26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8"
Jan 23 01:12:51.643426 containerd[1715]: time="2026-01-23T01:12:51.643402321Z" level=info msg="RemoveContainer for \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\""
Jan 23 01:12:51.650327 containerd[1715]: time="2026-01-23T01:12:51.650291096Z" level=info msg="RemoveContainer for \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\" returns successfully"
Jan 23 01:12:51.650466 kubelet[2296]: I0123 01:12:51.650452 2296 scope.go:117] "RemoveContainer" containerID="099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb"
Jan 23 01:12:51.652204 containerd[1715]: time="2026-01-23T01:12:51.652176441Z" level=info msg="RemoveContainer for \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\""
Jan 23 01:12:51.659042 containerd[1715]: time="2026-01-23T01:12:51.659014005Z" level=info msg="RemoveContainer for \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\" returns successfully"
Jan 23 01:12:51.659277 kubelet[2296]: I0123 01:12:51.659160 2296 scope.go:117] "RemoveContainer" containerID="e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b"
Jan 23 01:12:51.660413 containerd[1715]: time="2026-01-23T01:12:51.660390906Z" level=info msg="RemoveContainer for \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\""
Jan 23 01:12:51.666778 containerd[1715]: time="2026-01-23T01:12:51.666739986Z" level=info msg="RemoveContainer for \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\" returns successfully"
Jan 23 01:12:51.666938 kubelet[2296]: I0123 01:12:51.666908 2296 scope.go:117] "RemoveContainer" containerID="bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028"
Jan 23 01:12:51.668011 containerd[1715]: time="2026-01-23T01:12:51.667967202Z" level=info msg="RemoveContainer for \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\""
Jan 23 01:12:51.672407 kubelet[2296]: I0123 01:12:51.672379 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-lib-modules\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.672485 kubelet[2296]: I0123 01:12:51.672416 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-run\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.672485 kubelet[2296]: I0123 01:12:51.672445 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-etc-cni-netd\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.672485 kubelet[2296]: I0123 01:12:51.672460 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cni-path\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.672485 kubelet[2296]: I0123 01:12:51.672475 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-xtables-lock\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.672675 kubelet[2296]: I0123 01:12:51.672501 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bhn5\" (UniqueName: \"kubernetes.io/projected/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-kube-api-access-9bhn5\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.672675 kubelet[2296]: I0123 01:12:51.672520 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-bpf-maps\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.672675 kubelet[2296]: I0123 01:12:51.672538 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-hubble-tls\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.672675 kubelet[2296]: I0123 01:12:51.672550 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.672675 kubelet[2296]: I0123 01:12:51.672568 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-hostproc" (OuterVolumeSpecName: "hostproc") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.672784 kubelet[2296]: I0123 01:12:51.672582 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cni-path" (OuterVolumeSpecName: "cni-path") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.672784 kubelet[2296]: I0123 01:12:51.672595 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.673083 kubelet[2296]: I0123 01:12:51.672587 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.673083 kubelet[2296]: I0123 01:12:51.672884 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.673083 kubelet[2296]: I0123 01:12:51.672555 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-hostproc\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.673083 kubelet[2296]: I0123 01:12:51.672911 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-host-proc-sys-net\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.673083 kubelet[2296]: I0123 01:12:51.672938 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-cgroup\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.673083 kubelet[2296]: I0123 01:12:51.672960 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-config-path\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.673236 kubelet[2296]: I0123 01:12:51.672976 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-host-proc-sys-kernel\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.673236 kubelet[2296]: I0123 01:12:51.672995 2296 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-clustermesh-secrets\") pod \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\" (UID: \"9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76\") "
Jan 23 01:12:51.673236 kubelet[2296]: I0123 01:12:51.673040 2296 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-lib-modules\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.673236 kubelet[2296]: I0123 01:12:51.673050 2296 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-xtables-lock\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.673236 kubelet[2296]: I0123 01:12:51.673061 2296 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-run\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.673236 kubelet[2296]: I0123 01:12:51.673069 2296 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-etc-cni-netd\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.673673 kubelet[2296]: I0123 01:12:51.673375 2296 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cni-path\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.673673 kubelet[2296]: I0123 01:12:51.673388 2296 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-hostproc\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.673673 kubelet[2296]: I0123 01:12:51.673474 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.673920 kubelet[2296]: I0123 01:12:51.673861 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.673994 kubelet[2296]: I0123 01:12:51.673983 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.674860 containerd[1715]: time="2026-01-23T01:12:51.674816284Z" level=info msg="RemoveContainer for \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\" returns successfully"
Jan 23 01:12:51.675195 kubelet[2296]: I0123 01:12:51.675139 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:12:51.675348 kubelet[2296]: I0123 01:12:51.675337 2296 scope.go:117] "RemoveContainer" containerID="d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926"
Jan 23 01:12:51.675740 containerd[1715]: time="2026-01-23T01:12:51.675672820Z" level=error msg="ContainerStatus for \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\": not found"
Jan 23 01:12:51.675885 kubelet[2296]: E0123 01:12:51.675870 2296 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\": not found" containerID="d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926"
Jan 23 01:12:51.676134 kubelet[2296]: I0123 01:12:51.676026 2296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926"} err="failed to get container status \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\": rpc error: code = NotFound desc = an error occurred when try to find container \"d14ebcf9ffa47e03c430c8dab7537ff4821dd12fe091213003dc38ce7edba926\": not found"
Jan 23 01:12:51.676134 kubelet[2296]: I0123 01:12:51.676109 2296 scope.go:117] "RemoveContainer" containerID="26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8"
Jan 23 01:12:51.676691 containerd[1715]: time="2026-01-23T01:12:51.676615485Z" level=error msg="ContainerStatus for \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\": not found"
Jan 23 01:12:51.676962 kubelet[2296]: E0123 01:12:51.676837 2296 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\": not found" containerID="26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8"
Jan 23 01:12:51.676962 kubelet[2296]: I0123 01:12:51.676878 2296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8"} err="failed to get container status \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"26e3f0624ec6f829f7c74c7e2c7977c9e5bf1c881e5549b429234cb23298bfb8\": not found"
Jan 23 01:12:51.676962 kubelet[2296]: I0123 01:12:51.676896 2296 scope.go:117] "RemoveContainer" containerID="099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb"
Jan 23 01:12:51.677185 containerd[1715]: time="2026-01-23T01:12:51.677163091Z" level=error msg="ContainerStatus for \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\": not found"
Jan 23 01:12:51.679036 kubelet[2296]: E0123 01:12:51.678928 2296 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\": not found" containerID="099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb"
Jan 23 01:12:51.679036 kubelet[2296]: I0123 01:12:51.678959 2296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb"} err="failed to get container status \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\": rpc error: code = NotFound desc = an error occurred when try to find container \"099a45fb6e2147a4dbd1ae9c71f9736103756c312585b57a6116d33072c23aeb\": not found"
Jan 23 01:12:51.679036 kubelet[2296]: I0123 01:12:51.678977 2296 scope.go:117] "RemoveContainer" containerID="e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b"
Jan 23 01:12:51.680507 systemd[1]: var-lib-kubelet-pods-9c6b8f4a\x2dcd5b\x2d4f2f\x2da3b8\x2d09cc6936ac76-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 23 01:12:51.680776 containerd[1715]: time="2026-01-23T01:12:51.680722665Z" level=error msg="ContainerStatus for \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\": not found"
Jan 23 01:12:51.682064 kubelet[2296]: I0123 01:12:51.681314 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 01:12:51.683102 kubelet[2296]: I0123 01:12:51.681991 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 01:12:51.684450 systemd[1]: var-lib-kubelet-pods-9c6b8f4a\x2dcd5b\x2d4f2f\x2da3b8\x2d09cc6936ac76-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9bhn5.mount: Deactivated successfully.
Jan 23 01:12:51.684546 systemd[1]: var-lib-kubelet-pods-9c6b8f4a\x2dcd5b\x2d4f2f\x2da3b8\x2d09cc6936ac76-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 23 01:12:51.685770 kubelet[2296]: I0123 01:12:51.685202 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-kube-api-access-9bhn5" (OuterVolumeSpecName: "kube-api-access-9bhn5") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "kube-api-access-9bhn5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:12:51.685770 kubelet[2296]: I0123 01:12:51.685251 2296 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" (UID: "9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:12:51.685770 kubelet[2296]: E0123 01:12:51.682059 2296 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\": not found" containerID="e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b"
Jan 23 01:12:51.685770 kubelet[2296]: I0123 01:12:51.685693 2296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b"} err="failed to get container status \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e70c35fc5ffc0c1e00149eb20d207b0c50f9615df2c6e36b04d30a9ba6e3654b\": not found"
Jan 23 01:12:51.685770 kubelet[2296]: I0123 01:12:51.685728 2296 scope.go:117] "RemoveContainer" containerID="bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028"
Jan 23 01:12:51.686219 containerd[1715]: time="2026-01-23T01:12:51.686183092Z" level=error msg="ContainerStatus for \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\": not found"
Jan 23 01:12:51.687232 kubelet[2296]: E0123 01:12:51.686972 2296 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\": not found" containerID="bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028"
Jan 23 01:12:51.687232 kubelet[2296]: I0123 01:12:51.686998 2296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028"} err="failed to get container status \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfbb47f0609c2bb1d4c8a24b138e5f996317ec2966fff7f09f57a3d677b66028\": not found"
Jan 23 01:12:51.773760 kubelet[2296]: I0123 01:12:51.773672 2296 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-host-proc-sys-net\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.773971 kubelet[2296]: I0123 01:12:51.773901 2296 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-bpf-maps\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.773971 kubelet[2296]: I0123 01:12:51.773914 2296 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-hubble-tls\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.773971 kubelet[2296]: I0123 01:12:51.773924 2296 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-host-proc-sys-kernel\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.773971 kubelet[2296]: I0123 01:12:51.773933 2296 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-clustermesh-secrets\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.773971 kubelet[2296]: I0123 01:12:51.773943 2296 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-cgroup\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.773971 kubelet[2296]: I0123 01:12:51.773952 2296 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-cilium-config-path\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.773971 kubelet[2296]: I0123 01:12:51.773959 2296 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9bhn5\" (UniqueName: \"kubernetes.io/projected/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76-kube-api-access-9bhn5\") on node \"10.200.8.14\" DevicePath \"\""
Jan 23 01:12:51.940925 systemd[1]: Removed slice kubepods-burstable-pod9c6b8f4a_cd5b_4f2f_a3b8_09cc6936ac76.slice - libcontainer container kubepods-burstable-pod9c6b8f4a_cd5b_4f2f_a3b8_09cc6936ac76.slice.
Jan 23 01:12:51.942010 systemd[1]: kubepods-burstable-pod9c6b8f4a_cd5b_4f2f_a3b8_09cc6936ac76.slice: Consumed 4.868s CPU time, 123.8M memory peak, 104K read from disk, 13.3M written to disk.
Jan 23 01:12:52.423903 kubelet[2296]: E0123 01:12:52.423837 2296 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:52.451654 containerd[1715]: time="2026-01-23T01:12:52.451619048Z" level=info msg="StopPodSandbox for \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\""
Jan 23 01:12:52.451771 containerd[1715]: time="2026-01-23T01:12:52.451753306Z" level=info msg="TearDown network for sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" successfully"
Jan 23 01:12:52.451845 containerd[1715]: time="2026-01-23T01:12:52.451770943Z" level=info msg="StopPodSandbox for \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" returns successfully"
Jan 23 01:12:52.452145 containerd[1715]: time="2026-01-23T01:12:52.452120096Z" level=info msg="RemovePodSandbox for \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\""
Jan 23 01:12:52.452145 containerd[1715]: time="2026-01-23T01:12:52.452143010Z" level=info msg="Forcibly stopping sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\""
Jan 23 01:12:52.452231 containerd[1715]: time="2026-01-23T01:12:52.452219211Z" level=info msg="TearDown network for sandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" successfully"
Jan 23 01:12:52.453108 containerd[1715]: time="2026-01-23T01:12:52.453089704Z" level=info msg="Ensure that sandbox 110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6 in task-service has been cleanup successfully"
Jan 23 01:12:52.461270 containerd[1715]: time="2026-01-23T01:12:52.461244187Z" level=info msg="RemovePodSandbox \"110d8068d101b87574a411f103bca58a1aa7c718be7043ce7654c415bc6682a6\" returns successfully"
Jan 23 01:12:52.469921 kubelet[2296]: E0123 01:12:52.469888 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:52.502537 kubelet[2296]: I0123 01:12:52.502506 2296 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" path="/var/lib/kubelet/pods/9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76/volumes"
Jan 23 01:12:52.530791 kubelet[2296]: E0123 01:12:52.530767 2296 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 01:12:53.470749 kubelet[2296]: E0123 01:12:53.470696 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:53.591592 kubelet[2296]: I0123 01:12:53.591562 2296 memory_manager.go:355] "RemoveStaleState removing state" podUID="9c6b8f4a-cd5b-4f2f-a3b8-09cc6936ac76" containerName="cilium-agent"
Jan 23 01:12:53.596146 systemd[1]: Created slice kubepods-besteffort-pod1ff6a8bf_234c_4073_bbcb_d845630c6684.slice - libcontainer container kubepods-besteffort-pod1ff6a8bf_234c_4073_bbcb_d845630c6684.slice.
Jan 23 01:12:53.615624 systemd[1]: Created slice kubepods-burstable-pod5fbbe1cc_8ed0_416d_a8bd_6a367de0d8fc.slice - libcontainer container kubepods-burstable-pod5fbbe1cc_8ed0_416d_a8bd_6a367de0d8fc.slice.
Jan 23 01:12:53.765807 kubelet[2296]: I0123 01:12:53.765173 2296 setters.go:602] "Node became not ready" node="10.200.8.14" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T01:12:53Z","lastTransitionTime":"2026-01-23T01:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 01:12:53.783050 kubelet[2296]: I0123 01:12:53.783023 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-lib-modules\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783050 kubelet[2296]: I0123 01:12:53.783051 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-bpf-maps\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783181 kubelet[2296]: I0123 01:12:53.783073 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-hostproc\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783181 kubelet[2296]: I0123 01:12:53.783098 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-cilium-cgroup\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783181 kubelet[2296]: I0123 01:12:53.783112 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-cni-path\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783181 kubelet[2296]: I0123 01:12:53.783126 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-etc-cni-netd\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783181 kubelet[2296]: I0123 01:12:53.783145 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2cmn\" (UniqueName: \"kubernetes.io/projected/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-kube-api-access-g2cmn\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783305 kubelet[2296]: I0123 01:12:53.783173 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ff6a8bf-234c-4073-bbcb-d845630c6684-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sm5hl\" (UID: \"1ff6a8bf-234c-4073-bbcb-d845630c6684\") " pod="kube-system/cilium-operator-6c4d7847fc-sm5hl"
Jan 23 01:12:53.783305 kubelet[2296]: I0123 01:12:53.783191 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xctj\" (UniqueName: \"kubernetes.io/projected/1ff6a8bf-234c-4073-bbcb-d845630c6684-kube-api-access-7xctj\") pod \"cilium-operator-6c4d7847fc-sm5hl\" (UID: \"1ff6a8bf-234c-4073-bbcb-d845630c6684\") " pod="kube-system/cilium-operator-6c4d7847fc-sm5hl"
Jan 23 01:12:53.783305 kubelet[2296]: I0123 01:12:53.783208 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-cilium-config-path\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783305 kubelet[2296]: I0123 01:12:53.783225 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-cilium-ipsec-secrets\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783305 kubelet[2296]: I0123 01:12:53.783257 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-clustermesh-secrets\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783417 kubelet[2296]: I0123 01:12:53.783273 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-hubble-tls\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783417 kubelet[2296]: I0123 01:12:53.783291 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-cilium-run\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783417 kubelet[2296]: I0123 01:12:53.783318 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-xtables-lock\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783417 kubelet[2296]: I0123 01:12:53.783336 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-host-proc-sys-net\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.783417 kubelet[2296]: I0123 01:12:53.783353 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc-host-proc-sys-kernel\") pod \"cilium-dxtxc\" (UID: \"5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc\") " pod="kube-system/cilium-dxtxc"
Jan 23 01:12:53.924589 containerd[1715]: time="2026-01-23T01:12:53.924545597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxtxc,Uid:5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc,Namespace:kube-system,Attempt:0,}"
Jan 23 01:12:53.953684 containerd[1715]: time="2026-01-23T01:12:53.953612643Z" level=info msg="connecting to shim c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913" address="unix:///run/containerd/s/60c8c18691eaf0d720453049d098bdc52246271c6ddfcfd999f034aece85ab99" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:12:53.974965 systemd[1]: Started cri-containerd-c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913.scope - libcontainer container c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913.
Jan 23 01:12:53.997057 containerd[1715]: time="2026-01-23T01:12:53.997014571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxtxc,Uid:5fbbe1cc-8ed0-416d-a8bd-6a367de0d8fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\""
Jan 23 01:12:53.999263 containerd[1715]: time="2026-01-23T01:12:53.999227079Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 01:12:54.014673 containerd[1715]: time="2026-01-23T01:12:54.014644003Z" level=info msg="Container 8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:54.027552 containerd[1715]: time="2026-01-23T01:12:54.026961932Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4\""
Jan 23 01:12:54.027720 containerd[1715]: time="2026-01-23T01:12:54.027702872Z" level=info msg="StartContainer for \"8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4\""
Jan 23 01:12:54.028544 containerd[1715]: time="2026-01-23T01:12:54.028514527Z" level=info msg="connecting to shim 8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4" address="unix:///run/containerd/s/60c8c18691eaf0d720453049d098bdc52246271c6ddfcfd999f034aece85ab99" protocol=ttrpc version=3
Jan 23 01:12:54.047958 systemd[1]: Started cri-containerd-8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4.scope - libcontainer container 8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4.
Jan 23 01:12:54.073743 containerd[1715]: time="2026-01-23T01:12:54.073568208Z" level=info msg="StartContainer for \"8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4\" returns successfully"
Jan 23 01:12:54.077530 systemd[1]: cri-containerd-8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4.scope: Deactivated successfully.
Jan 23 01:12:54.079985 containerd[1715]: time="2026-01-23T01:12:54.079959123Z" level=info msg="received container exit event container_id:\"8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4\" id:\"8b864c578e7de019687b07f8126b46211dbbd470c3c7f368c461ad93e82a5ec4\" pid:3901 exited_at:{seconds:1769130774 nanos:79696399}"
Jan 23 01:12:54.199327 containerd[1715]: time="2026-01-23T01:12:54.199296336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sm5hl,Uid:1ff6a8bf-234c-4073-bbcb-d845630c6684,Namespace:kube-system,Attempt:0,}"
Jan 23 01:12:54.226054 containerd[1715]: time="2026-01-23T01:12:54.226012680Z" level=info msg="connecting to shim e404705e43528dfabb53fdade6b984130006485510e335446c00b1004b799d2e" address="unix:///run/containerd/s/f2fe82cef3af30c50523539e8a53e2fe8324df8a27ebedc6c7cf914197b76e58" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:12:54.245972 systemd[1]: Started cri-containerd-e404705e43528dfabb53fdade6b984130006485510e335446c00b1004b799d2e.scope - libcontainer container e404705e43528dfabb53fdade6b984130006485510e335446c00b1004b799d2e.
Jan 23 01:12:54.286175 containerd[1715]: time="2026-01-23T01:12:54.286075457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sm5hl,Uid:1ff6a8bf-234c-4073-bbcb-d845630c6684,Namespace:kube-system,Attempt:0,} returns sandbox id \"e404705e43528dfabb53fdade6b984130006485510e335446c00b1004b799d2e\""
Jan 23 01:12:54.287623 containerd[1715]: time="2026-01-23T01:12:54.287509610Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 23 01:12:54.471920 kubelet[2296]: E0123 01:12:54.471865 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:54.641196 containerd[1715]: time="2026-01-23T01:12:54.641102020Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 01:12:54.655219 containerd[1715]: time="2026-01-23T01:12:54.655188046Z" level=info msg="Container 60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:54.666875 containerd[1715]: time="2026-01-23T01:12:54.666843580Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9\""
Jan 23 01:12:54.667730 containerd[1715]: time="2026-01-23T01:12:54.667315091Z" level=info msg="StartContainer for \"60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9\""
Jan 23 01:12:54.668040 containerd[1715]: time="2026-01-23T01:12:54.668018255Z" level=info msg="connecting to shim 60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9" address="unix:///run/containerd/s/60c8c18691eaf0d720453049d098bdc52246271c6ddfcfd999f034aece85ab99" protocol=ttrpc version=3
Jan 23 01:12:54.683999 systemd[1]: Started cri-containerd-60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9.scope - libcontainer container 60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9.
Jan 23 01:12:54.711556 containerd[1715]: time="2026-01-23T01:12:54.711518204Z" level=info msg="StartContainer for \"60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9\" returns successfully"
Jan 23 01:12:54.714182 systemd[1]: cri-containerd-60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9.scope: Deactivated successfully.
Jan 23 01:12:54.715364 containerd[1715]: time="2026-01-23T01:12:54.715328992Z" level=info msg="received container exit event container_id:\"60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9\" id:\"60f2ef252d2defdc232d604d3510fae9ac242020c4620f02fc6773dcaad2e2d9\" pid:3991 exited_at:{seconds:1769130774 nanos:714745663}"
Jan 23 01:12:55.472626 kubelet[2296]: E0123 01:12:55.472585 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:55.540395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097799196.mount: Deactivated successfully.
Jan 23 01:12:55.646265 containerd[1715]: time="2026-01-23T01:12:55.646199370Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 01:12:55.672194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953795172.mount: Deactivated successfully.
Jan 23 01:12:55.672836 containerd[1715]: time="2026-01-23T01:12:55.672769239Z" level=info msg="Container 8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:55.690978 containerd[1715]: time="2026-01-23T01:12:55.690885134Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047\""
Jan 23 01:12:55.692253 containerd[1715]: time="2026-01-23T01:12:55.691772954Z" level=info msg="StartContainer for \"8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047\""
Jan 23 01:12:55.693283 containerd[1715]: time="2026-01-23T01:12:55.693257281Z" level=info msg="connecting to shim 8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047" address="unix:///run/containerd/s/60c8c18691eaf0d720453049d098bdc52246271c6ddfcfd999f034aece85ab99" protocol=ttrpc version=3
Jan 23 01:12:55.717370 systemd[1]: Started cri-containerd-8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047.scope - libcontainer container 8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047.
Jan 23 01:12:55.772282 systemd[1]: cri-containerd-8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047.scope: Deactivated successfully.
Jan 23 01:12:55.776924 containerd[1715]: time="2026-01-23T01:12:55.776865120Z" level=info msg="received container exit event container_id:\"8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047\" id:\"8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047\" pid:4047 exited_at:{seconds:1769130775 nanos:775739136}"
Jan 23 01:12:55.778849 containerd[1715]: time="2026-01-23T01:12:55.778795706Z" level=info msg="StartContainer for \"8fb285a643b424bc66c22e48797fb0507243c1a233ddaf93b0529551019b3047\" returns successfully"
Jan 23 01:12:56.123512 containerd[1715]: time="2026-01-23T01:12:56.123268970Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:12:56.125785 containerd[1715]: time="2026-01-23T01:12:56.125702270Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 23 01:12:56.128846 containerd[1715]: time="2026-01-23T01:12:56.128669522Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:12:56.129713 containerd[1715]: time="2026-01-23T01:12:56.129423543Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.841886438s"
Jan 23 01:12:56.129713 containerd[1715]: time="2026-01-23T01:12:56.129457726Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 23 01:12:56.131558 containerd[1715]: time="2026-01-23T01:12:56.131529655Z" level=info msg="CreateContainer within sandbox \"e404705e43528dfabb53fdade6b984130006485510e335446c00b1004b799d2e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 23 01:12:56.147978 containerd[1715]: time="2026-01-23T01:12:56.147243581Z" level=info msg="Container 271cc188ab7cfd2dea61c65ec9a6b3d80acab30c1a2b4d4efe859c80479c47c6: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:56.163070 containerd[1715]: time="2026-01-23T01:12:56.163040844Z" level=info msg="CreateContainer within sandbox \"e404705e43528dfabb53fdade6b984130006485510e335446c00b1004b799d2e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"271cc188ab7cfd2dea61c65ec9a6b3d80acab30c1a2b4d4efe859c80479c47c6\""
Jan 23 01:12:56.163685 containerd[1715]: time="2026-01-23T01:12:56.163655723Z" level=info msg="StartContainer for \"271cc188ab7cfd2dea61c65ec9a6b3d80acab30c1a2b4d4efe859c80479c47c6\""
Jan 23 01:12:56.164404 containerd[1715]: time="2026-01-23T01:12:56.164356514Z" level=info msg="connecting to shim 271cc188ab7cfd2dea61c65ec9a6b3d80acab30c1a2b4d4efe859c80479c47c6" address="unix:///run/containerd/s/f2fe82cef3af30c50523539e8a53e2fe8324df8a27ebedc6c7cf914197b76e58" protocol=ttrpc version=3
Jan 23 01:12:56.184976 systemd[1]: Started cri-containerd-271cc188ab7cfd2dea61c65ec9a6b3d80acab30c1a2b4d4efe859c80479c47c6.scope - libcontainer container 271cc188ab7cfd2dea61c65ec9a6b3d80acab30c1a2b4d4efe859c80479c47c6.
Jan 23 01:12:56.211860 containerd[1715]: time="2026-01-23T01:12:56.211812351Z" level=info msg="StartContainer for \"271cc188ab7cfd2dea61c65ec9a6b3d80acab30c1a2b4d4efe859c80479c47c6\" returns successfully"
Jan 23 01:12:56.473257 kubelet[2296]: E0123 01:12:56.473205 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:56.655346 containerd[1715]: time="2026-01-23T01:12:56.655299753Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 01:12:56.672848 containerd[1715]: time="2026-01-23T01:12:56.672757135Z" level=info msg="Container 0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:56.684903 containerd[1715]: time="2026-01-23T01:12:56.684870316Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037\""
Jan 23 01:12:56.685364 containerd[1715]: time="2026-01-23T01:12:56.685334401Z" level=info msg="StartContainer for \"0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037\""
Jan 23 01:12:56.686039 containerd[1715]: time="2026-01-23T01:12:56.686002263Z" level=info msg="connecting to shim 0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037" address="unix:///run/containerd/s/60c8c18691eaf0d720453049d098bdc52246271c6ddfcfd999f034aece85ab99" protocol=ttrpc version=3
Jan 23 01:12:56.703973 systemd[1]: Started cri-containerd-0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037.scope - libcontainer container 0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037.
Jan 23 01:12:56.724536 systemd[1]: cri-containerd-0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037.scope: Deactivated successfully.
Jan 23 01:12:56.732166 containerd[1715]: time="2026-01-23T01:12:56.732072952Z" level=info msg="received container exit event container_id:\"0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037\" id:\"0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037\" pid:4122 exited_at:{seconds:1769130776 nanos:725277620}"
Jan 23 01:12:56.738441 containerd[1715]: time="2026-01-23T01:12:56.738413379Z" level=info msg="StartContainer for \"0c28b5b3ee4ea9373a98300b7b4e0729774b87399f5b072e787b950518015037\" returns successfully"
Jan 23 01:12:57.473570 kubelet[2296]: E0123 01:12:57.473526 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:57.532631 kubelet[2296]: E0123 01:12:57.532596 2296 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 01:12:57.662965 containerd[1715]: time="2026-01-23T01:12:57.662912499Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 01:12:57.683669 kubelet[2296]: I0123 01:12:57.683613 2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sm5hl" podStartSLOduration=2.840575562 podStartE2EDuration="4.683594546s" podCreationTimestamp="2026-01-23 01:12:53 +0000 UTC" firstStartedPulling="2026-01-23 01:12:54.287315372 +0000 UTC m=+62.219702105" lastFinishedPulling="2026-01-23 01:12:56.130334359 +0000 UTC m=+64.062721089" observedRunningTime="2026-01-23 01:12:56.675595636 +0000 UTC m=+64.607982367" watchObservedRunningTime="2026-01-23 01:12:57.683594546 +0000 UTC m=+65.615981286"
Jan 23 01:12:57.686059 containerd[1715]: time="2026-01-23T01:12:57.686023780Z" level=info msg="Container a1a7c8cbcd96eb213957fe51a2e0f8435ddd94134e587a20ffbf0f3e598867bd: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:12:57.702913 containerd[1715]: time="2026-01-23T01:12:57.702889245Z" level=info msg="CreateContainer within sandbox \"c9637fe92eb016cd7eed546cec06971e6adf401b7ba3c08ca47e77eead085913\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a1a7c8cbcd96eb213957fe51a2e0f8435ddd94134e587a20ffbf0f3e598867bd\""
Jan 23 01:12:57.703347 containerd[1715]: time="2026-01-23T01:12:57.703315368Z" level=info msg="StartContainer for \"a1a7c8cbcd96eb213957fe51a2e0f8435ddd94134e587a20ffbf0f3e598867bd\""
Jan 23 01:12:57.704070 containerd[1715]: time="2026-01-23T01:12:57.704032498Z" level=info msg="connecting to shim a1a7c8cbcd96eb213957fe51a2e0f8435ddd94134e587a20ffbf0f3e598867bd" address="unix:///run/containerd/s/60c8c18691eaf0d720453049d098bdc52246271c6ddfcfd999f034aece85ab99" protocol=ttrpc version=3
Jan 23 01:12:57.724973 systemd[1]: Started cri-containerd-a1a7c8cbcd96eb213957fe51a2e0f8435ddd94134e587a20ffbf0f3e598867bd.scope - libcontainer container a1a7c8cbcd96eb213957fe51a2e0f8435ddd94134e587a20ffbf0f3e598867bd.
Jan 23 01:12:57.764765 containerd[1715]: time="2026-01-23T01:12:57.764694588Z" level=info msg="StartContainer for \"a1a7c8cbcd96eb213957fe51a2e0f8435ddd94134e587a20ffbf0f3e598867bd\" returns successfully"
Jan 23 01:12:58.033857 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
Jan 23 01:12:58.474246 kubelet[2296]: E0123 01:12:58.474187 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:12:58.680172 kubelet[2296]: I0123 01:12:58.680128 2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dxtxc" podStartSLOduration=5.680116269 podStartE2EDuration="5.680116269s" podCreationTimestamp="2026-01-23 01:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:12:58.679961671 +0000 UTC m=+66.612348414" watchObservedRunningTime="2026-01-23 01:12:58.680116269 +0000 UTC m=+66.612503059"
Jan 23 01:12:59.474423 kubelet[2296]: E0123 01:12:59.474361 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:00.290968 kubelet[2296]: E0123 01:13:00.290931 2296 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44992->127.0.0.1:46263: write tcp 127.0.0.1:44992->127.0.0.1:46263: write: broken pipe
Jan 23 01:13:00.475174 kubelet[2296]: E0123 01:13:00.475134 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:00.600515 systemd-networkd[1364]: lxc_health: Link UP
Jan 23 01:13:00.604580 systemd-networkd[1364]: lxc_health: Gained carrier
Jan 23 01:13:01.475972 kubelet[2296]: E0123 01:13:01.475911 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:01.675046 systemd-networkd[1364]: lxc_health: Gained IPv6LL
Jan 23 01:13:02.476900 kubelet[2296]: E0123 01:13:02.476861 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:03.477857 kubelet[2296]: E0123 01:13:03.477777 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:04.477921 kubelet[2296]: E0123 01:13:04.477877 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:05.478662 kubelet[2296]: E0123 01:13:05.478620 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:06.478897 kubelet[2296]: E0123 01:13:06.478845 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:07.479201 kubelet[2296]: E0123 01:13:07.479159 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:08.479479 kubelet[2296]: E0123 01:13:08.479439 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:13:09.480152 kubelet[2296]: E0123 01:13:09.480085 2296 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"