Jan 23 01:06:03.824852 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026 Jan 23 01:06:03.824892 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:06:03.824907 kernel: BIOS-provided physical RAM map: Jan 23 01:06:03.824917 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 01:06:03.824926 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 01:06:03.824935 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 01:06:03.824949 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 01:06:03.824958 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 01:06:03.824968 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 01:06:03.824977 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 01:06:03.824987 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable Jan 23 01:06:03.824996 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 23 01:06:03.825006 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 23 01:06:03.825015 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 23 01:06:03.825030 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 23 01:06:03.825040 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 23 01:06:03.825050 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 23 01:06:03.825060 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 23 01:06:03.825070 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 23 01:06:03.825080 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 23 01:06:03.825090 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 23 01:06:03.825102 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 23 01:06:03.825112 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 23 01:06:03.825122 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 23 01:06:03.825132 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 01:06:03.825142 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 01:06:03.825152 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 01:06:03.825161 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 23 01:06:03.825171 kernel: NX (Execute Disable) protection: active Jan 23 01:06:03.825181 kernel: APIC: Static calls initialized Jan 23 01:06:03.825191 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable Jan 23 01:06:03.825202 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable Jan 23 01:06:03.825212 kernel: extended physical RAM map: Jan 23 01:06:03.825224 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 01:06:03.825235 
kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 01:06:03.825245 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 01:06:03.825255 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 01:06:03.825265 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 01:06:03.825275 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 01:06:03.825285 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 01:06:03.825300 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable Jan 23 01:06:03.825313 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable Jan 23 01:06:03.825324 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable Jan 23 01:06:03.825334 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable Jan 23 01:06:03.825345 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable Jan 23 01:06:03.825355 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 23 01:06:03.825366 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 23 01:06:03.825376 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 23 01:06:03.825389 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 23 01:06:03.825400 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 23 01:06:03.825410 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 23 01:06:03.825421 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 23 01:06:03.825431 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 23 01:06:03.825442 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 23 01:06:03.825452 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 23 01:06:03.825463 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 23 01:06:03.825473 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 23 01:06:03.825484 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 23 01:06:03.825494 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 01:06:03.825507 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 01:06:03.825518 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 01:06:03.825528 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 23 01:06:03.825538 kernel: efi: EFI v2.7 by EDK II Jan 23 01:06:03.825549 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018 Jan 23 01:06:03.825560 kernel: random: crng init done Jan 23 01:06:03.825570 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 23 01:06:03.825581 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 23 01:06:03.825591 kernel: secureboot: Secure boot disabled Jan 23 01:06:03.825602 kernel: SMBIOS 2.8 present. 
Jan 23 01:06:03.825613 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 23 01:06:03.825623 kernel: DMI: Memory slots populated: 1/1 Jan 23 01:06:03.825636 kernel: Hypervisor detected: KVM Jan 23 01:06:03.825647 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 23 01:06:03.825657 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 01:06:03.825689 kernel: kvm-clock: using sched offset of 6951494087 cycles Jan 23 01:06:03.825702 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 01:06:03.825713 kernel: tsc: Detected 2294.594 MHz processor Jan 23 01:06:03.825724 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 01:06:03.825735 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 01:06:03.825746 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000 Jan 23 01:06:03.825757 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 23 01:06:03.825771 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 01:06:03.825782 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 23 01:06:03.825793 kernel: Using GB pages for direct mapping Jan 23 01:06:03.825804 kernel: ACPI: Early table checksum verification disabled Jan 23 01:06:03.825815 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS ) Jan 23 01:06:03.825826 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jan 23 01:06:03.825837 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:06:03.825848 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:06:03.825859 kernel: ACPI: FACS 0x000000007FBDD000 000040 Jan 23 01:06:03.825872 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:06:03.825883 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:06:03.825894 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:06:03.825905 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 23 01:06:03.825916 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3] Jan 23 01:06:03.825927 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b] Jan 23 01:06:03.825938 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f] Jan 23 01:06:03.825948 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f] Jan 23 01:06:03.825959 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b] Jan 23 01:06:03.825973 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027] Jan 23 01:06:03.825984 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037] Jan 23 01:06:03.825994 kernel: No NUMA configuration found Jan 23 01:06:03.826005 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 23 01:06:03.826016 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Jan 23 01:06:03.826027 kernel: Zone ranges: Jan 23 01:06:03.826038 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 01:06:03.826049 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 23 01:06:03.826060 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 23 01:06:03.826073 kernel: Device empty Jan 23 01:06:03.826084 kernel: Movable zone start for each node Jan 
23 01:06:03.826095 kernel: Early memory node ranges Jan 23 01:06:03.826106 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 23 01:06:03.826116 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 23 01:06:03.826127 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 23 01:06:03.826138 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 23 01:06:03.826148 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff] Jan 23 01:06:03.826159 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff] Jan 23 01:06:03.826170 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff] Jan 23 01:06:03.826193 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff] Jan 23 01:06:03.826205 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff] Jan 23 01:06:03.826216 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 23 01:06:03.826231 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 23 01:06:03.826242 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 01:06:03.826254 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 23 01:06:03.826266 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 23 01:06:03.826278 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 01:06:03.826292 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 23 01:06:03.826303 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 23 01:06:03.826315 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges Jan 23 01:06:03.826327 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 23 01:06:03.826339 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 23 01:06:03.826365 kernel: On node 0, zone Normal: 276 pages in unavailable ranges Jan 23 01:06:03.826377 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 01:06:03.826389 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 01:06:03.826401 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 01:06:03.826416 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 01:06:03.826428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 01:06:03.826440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 01:06:03.826452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 01:06:03.826463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 01:06:03.826476 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 01:06:03.826488 kernel: TSC deadline timer available Jan 23 01:06:03.826499 kernel: CPU topo: Max. logical packages: 2 Jan 23 01:06:03.826511 kernel: CPU topo: Max. logical dies: 2 Jan 23 01:06:03.826525 kernel: CPU topo: Max. dies per package: 1 Jan 23 01:06:03.826537 kernel: CPU topo: Max. threads per core: 1 Jan 23 01:06:03.826549 kernel: CPU topo: Num. cores per package: 1 Jan 23 01:06:03.826561 kernel: CPU topo: Num. 
threads per package: 1 Jan 23 01:06:03.826573 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 23 01:06:03.826585 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 01:06:03.826596 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 01:06:03.826608 kernel: kvm-guest: setup PV sched yield Jan 23 01:06:03.826620 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 23 01:06:03.826707 kernel: Booting paravirtualized kernel on KVM Jan 23 01:06:03.826720 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 01:06:03.826732 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 23 01:06:03.826744 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 23 01:06:03.826756 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 23 01:06:03.826768 kernel: pcpu-alloc: [0] 0 1 Jan 23 01:06:03.826780 kernel: kvm-guest: PV spinlocks enabled Jan 23 01:06:03.826792 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 01:06:03.826806 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:06:03.826821 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 01:06:03.826833 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 01:06:03.826845 kernel: Fallback order for Node 0: 0 Jan 23 01:06:03.826857 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694 Jan 23 01:06:03.826869 kernel: Policy zone: Normal Jan 23 01:06:03.826881 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 01:06:03.826892 kernel: software IO TLB: area num 2. Jan 23 01:06:03.826903 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 01:06:03.826915 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 01:06:03.826926 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 01:06:03.826937 kernel: Dynamic Preempt: voluntary Jan 23 01:06:03.826948 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 01:06:03.826960 kernel: rcu: RCU event tracing is enabled. Jan 23 01:06:03.826971 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 01:06:03.826982 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 01:06:03.826993 kernel: Rude variant of Tasks RCU enabled. Jan 23 01:06:03.827004 kernel: Tracing variant of Tasks RCU enabled. Jan 23 01:06:03.827015 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 01:06:03.827028 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 01:06:03.827039 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 01:06:03.827049 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 01:06:03.827060 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 23 01:06:03.827071 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 23 01:06:03.827082 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 01:06:03.827092 kernel: Console: colour dummy device 80x25 Jan 23 01:06:03.827103 kernel: printk: legacy console [tty0] enabled Jan 23 01:06:03.827116 kernel: printk: legacy console [ttyS0] enabled Jan 23 01:06:03.827127 kernel: ACPI: Core revision 20240827 Jan 23 01:06:03.827138 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 01:06:03.827149 kernel: x2apic enabled Jan 23 01:06:03.827160 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 01:06:03.827171 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 01:06:03.827182 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 01:06:03.827192 kernel: kvm-guest: setup PV IPIs Jan 23 01:06:03.827203 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134287020, max_idle_ns: 440795320515 ns Jan 23 01:06:03.827214 kernel: Calibrating delay loop (skipped) preset value.. 4589.18 BogoMIPS (lpj=2294594) Jan 23 01:06:03.827227 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 01:06:03.827238 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 23 01:06:03.827248 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 23 01:06:03.827259 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 01:06:03.827269 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jan 23 01:06:03.827280 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 23 01:06:03.827290 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 23 01:06:03.827301 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 23 01:06:03.827311 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 23 01:06:03.827322 kernel: TAA: Mitigation: Clear CPU buffers Jan 23 01:06:03.827334 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 23 01:06:03.827345 kernel: active return thunk: its_return_thunk Jan 23 01:06:03.827355 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 23 01:06:03.827365 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 01:06:03.827376 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 01:06:03.827387 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 01:06:03.827397 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 23 01:06:03.827408 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 23 01:06:03.827418 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 23 01:06:03.827428 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 23 01:06:03.827439 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 01:06:03.827451 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 23 01:06:03.827462 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 23 01:06:03.827472 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 23 01:06:03.827482 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jan 23 01:06:03.827493 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
Jan 23 01:06:03.827503 kernel: Freeing SMP alternatives memory: 32K Jan 23 01:06:03.827514 kernel: pid_max: default: 32768 minimum: 301 Jan 23 01:06:03.827524 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 01:06:03.827535 kernel: landlock: Up and running. Jan 23 01:06:03.827545 kernel: SELinux: Initializing. Jan 23 01:06:03.827555 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 01:06:03.827568 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 01:06:03.827578 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6) Jan 23 01:06:03.827589 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver. Jan 23 01:06:03.827599 kernel: ... version: 2 Jan 23 01:06:03.827610 kernel: ... bit width: 48 Jan 23 01:06:03.827620 kernel: ... generic registers: 8 Jan 23 01:06:03.827631 kernel: ... value mask: 0000ffffffffffff Jan 23 01:06:03.827642 kernel: ... max period: 00007fffffffffff Jan 23 01:06:03.827652 kernel: ... fixed-purpose events: 3 Jan 23 01:06:03.827672 kernel: ... event mask: 00000007000000ff Jan 23 01:06:03.827685 kernel: signal: max sigframe size: 3632 Jan 23 01:06:03.827696 kernel: rcu: Hierarchical SRCU implementation. Jan 23 01:06:03.827707 kernel: rcu: Max phase no-delay instances is 400. Jan 23 01:06:03.827717 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 01:06:03.827728 kernel: smp: Bringing up secondary CPUs ... Jan 23 01:06:03.827739 kernel: smpboot: x86: Booting SMP configuration: Jan 23 01:06:03.827750 kernel: .... node #0, CPUs: #1 Jan 23 01:06:03.827760 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 01:06:03.827771 kernel: smpboot: Total of 2 processors activated (9178.37 BogoMIPS) Jan 23 01:06:03.827784 kernel: Memory: 3945188K/4186776K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 236712K reserved, 0K cma-reserved) Jan 23 01:06:03.827795 kernel: devtmpfs: initialized Jan 23 01:06:03.827805 kernel: x86/mm: Memory block size: 128MB Jan 23 01:06:03.827816 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 23 01:06:03.827827 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 23 01:06:03.827838 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 23 01:06:03.827848 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes) Jan 23 01:06:03.827859 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes) Jan 23 01:06:03.827870 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes) Jan 23 01:06:03.827883 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 01:06:03.827894 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 01:06:03.827905 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 01:06:03.827916 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 01:06:03.827926 kernel: audit: initializing netlink subsys (disabled) Jan 23 01:06:03.827937 kernel: audit: type=2000 audit(1769130361.063:1): state=initialized audit_enabled=0 res=1 Jan 23 01:06:03.827947 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 01:06:03.827958 kernel: thermal_sys: Registered thermal governor 'user_space' 
Jan 23 01:06:03.827968 kernel: cpuidle: using governor menu Jan 23 01:06:03.827981 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 01:06:03.827992 kernel: dca service started, version 1.12.1 Jan 23 01:06:03.828002 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 23 01:06:03.828013 kernel: PCI: Using configuration type 1 for base access Jan 23 01:06:03.828024 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 23 01:06:03.828035 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 01:06:03.828045 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 01:06:03.828056 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 01:06:03.828067 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 01:06:03.828080 kernel: ACPI: Added _OSI(Module Device) Jan 23 01:06:03.828090 kernel: ACPI: Added _OSI(Processor Device) Jan 23 01:06:03.828101 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 01:06:03.828112 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 01:06:03.828123 kernel: ACPI: Interpreter enabled Jan 23 01:06:03.828133 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 01:06:03.828144 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 01:06:03.828155 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 01:06:03.828165 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 01:06:03.828178 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 01:06:03.828189 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 01:06:03.828358 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 01:06:03.828466 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 01:06:03.828565 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 01:06:03.828579 kernel: PCI host bridge to bus 0000:00 Jan 23 01:06:03.829008 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 01:06:03.829118 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 01:06:03.829209 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 01:06:03.829299 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Jan 23 01:06:03.829387 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 23 01:06:03.829476 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window] Jan 23 01:06:03.829567 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 01:06:03.830095 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 01:06:03.830255 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jan 23 01:06:03.830375 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref] Jan 23 01:06:03.830477 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref] Jan 23 01:06:03.830577 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff] Jan 23 01:06:03.832142 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 23 01:06:03.832259 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 01:06:03.832371 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 
0x060400 PCIe Root Port Jan 23 01:06:03.832468 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff] Jan 23 01:06:03.832565 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 01:06:03.832672 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 23 01:06:03.832770 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 23 01:06:03.832864 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 01:06:03.832969 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.833070 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff] Jan 23 01:06:03.833164 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 01:06:03.833259 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 23 01:06:03.833354 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 01:06:03.833455 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.833551 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff] Jan 23 01:06:03.833649 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 01:06:03.833767 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 23 01:06:03.833902 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 01:06:03.834002 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.834098 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff] Jan 23 01:06:03.834192 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 01:06:03.834288 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 23 01:06:03.834396 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 01:06:03.834500 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.834594 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff] Jan 23 01:06:03.835270 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 01:06:03.835368 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 23 01:06:03.835459 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 01:06:03.835560 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.835652 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff] Jan 23 01:06:03.835803 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 01:06:03.835893 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 23 01:06:03.835982 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 01:06:03.836077 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.836167 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff] Jan 23 01:06:03.836256 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 01:06:03.836347 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 23 01:06:03.836437 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 01:06:03.836531 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.836621 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff] Jan 23 01:06:03.837751 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 01:06:03.837850 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 23 
01:06:03.837941 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 01:06:03.838043 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.838133 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff] Jan 23 01:06:03.838224 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 01:06:03.838313 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 23 01:06:03.838420 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 01:06:03.838519 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.838651 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff] Jan 23 01:06:03.838780 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 01:06:03.838883 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 23 01:06:03.838970 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 01:06:03.839061 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.839150 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Jan 23 01:06:03.839236 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 01:06:03.839321 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 23 01:06:03.839410 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 01:06:03.839500 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.839587 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Jan 23 01:06:03.840720 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 01:06:03.840838 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 23 01:06:03.840934 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 01:06:03.841028 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.841118 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Jan 23 01:06:03.841205 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 01:06:03.841290 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 23 01:06:03.841377 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 01:06:03.841471 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.841561 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Jan 23 01:06:03.841647 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 01:06:03.842491 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 23 01:06:03.842585 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 01:06:03.842705 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.842795 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff] Jan 23 01:06:03.842878 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 01:06:03.842964 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 23 01:06:03.843046 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 01:06:03.843133 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.843217 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Jan 23 01:06:03.843300 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 
01:06:03.843384 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 23 01:06:03.843465 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 01:06:03.843556 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.843640 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Jan 23 01:06:03.844788 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 01:06:03.844890 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 23 01:06:03.844975 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 01:06:03.845063 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.845147 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Jan 23 01:06:03.845234 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 01:06:03.845328 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 23 01:06:03.845412 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 01:06:03.845506 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.845590 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Jan 23 01:06:03.845685 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 01:06:03.845768 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 23 01:06:03.845854 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 01:06:03.845941 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.846024 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 23 01:06:03.846106 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 01:06:03.846188 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 23 01:06:03.846270 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 01:06:03.846370 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.846457 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 23 01:06:03.846539 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 01:06:03.846622 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 23 01:06:03.846714 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 01:06:03.846796 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.846873 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 23 01:06:03.846949 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 01:06:03.847026 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 23 01:06:03.847105 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 01:06:03.847188 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.847265 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 23 01:06:03.847343 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 01:06:03.847418 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 23 01:06:03.847493 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 01:06:03.847573 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.847649 kernel: pci 0000:00:04.7: BAR 0 [mem 
0x84386000-0x84386fff] Jan 23 01:06:03.849775 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 01:06:03.849863 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 23 01:06:03.849941 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 01:06:03.850029 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.850106 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 23 01:06:03.850183 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 01:06:03.850270 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 23 01:06:03.850704 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 01:06:03.850833 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.850911 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 23 01:06:03.850988 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 01:06:03.851061 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 23 01:06:03.851134 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 01:06:03.851216 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.851290 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 23 01:06:03.851366 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 01:06:03.851441 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 01:06:03.851516 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 01:06:03.851593 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.852715 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 23 01:06:03.852801 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 01:06:03.852875 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 01:06:03.852949 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 01:06:03.853346 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 01:06:03.853440 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 23 01:06:03.853518 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 01:06:03.853594 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 23 01:06:03.853982 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 01:06:03.854075 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 01:06:03.854151 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 01:06:03.854232 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 01:06:03.854310 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 23 01:06:03.854397 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 23 01:06:03.854476 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 01:06:03.854550 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 23 01:06:03.854633 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 23 01:06:03.854720 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 23 01:06:03.854796 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 01:06:03.854871 kernel: pci 
0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 23 01:06:03.854943 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 23 01:06:03.855015 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 01:06:03.855090 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 01:06:03.855172 kernel: pci_bus 0000:02: extended config space not accessible Jan 23 01:06:03.855183 kernel: acpiphp: Slot [1] registered Jan 23 01:06:03.855194 kernel: acpiphp: Slot [0] registered Jan 23 01:06:03.855204 kernel: acpiphp: Slot [2] registered Jan 23 01:06:03.855212 kernel: acpiphp: Slot [3] registered Jan 23 01:06:03.855220 kernel: acpiphp: Slot [4] registered Jan 23 01:06:03.855227 kernel: acpiphp: Slot [5] registered Jan 23 01:06:03.855235 kernel: acpiphp: Slot [6] registered Jan 23 01:06:03.855243 kernel: acpiphp: Slot [7] registered Jan 23 01:06:03.855251 kernel: acpiphp: Slot [8] registered Jan 23 01:06:03.855259 kernel: acpiphp: Slot [9] registered Jan 23 01:06:03.855267 kernel: acpiphp: Slot [10] registered Jan 23 01:06:03.855275 kernel: acpiphp: Slot [11] registered Jan 23 01:06:03.855285 kernel: acpiphp: Slot [12] registered Jan 23 01:06:03.855293 kernel: acpiphp: Slot [13] registered Jan 23 01:06:03.855301 kernel: acpiphp: Slot [14] registered Jan 23 01:06:03.855309 kernel: acpiphp: Slot [15] registered Jan 23 01:06:03.855316 kernel: acpiphp: Slot [16] registered Jan 23 01:06:03.855324 kernel: acpiphp: Slot [17] registered Jan 23 01:06:03.855332 kernel: acpiphp: Slot [18] registered Jan 23 01:06:03.855340 kernel: acpiphp: Slot [19] registered Jan 23 01:06:03.855347 kernel: acpiphp: Slot [20] registered Jan 23 01:06:03.855357 kernel: acpiphp: Slot [21] registered Jan 23 01:06:03.855365 kernel: acpiphp: Slot [22] registered Jan 23 01:06:03.855373 kernel: acpiphp: Slot [23] registered Jan 23 01:06:03.855381 kernel: acpiphp: Slot [24] registered Jan 23 01:06:03.855388 kernel: acpiphp: Slot [25] registered Jan 23 01:06:03.855396 kernel: acpiphp: Slot [26] registered Jan 23 01:06:03.855404 kernel: acpiphp: Slot [27] registered Jan 23 01:06:03.855412 kernel: acpiphp: Slot [28] registered Jan 23 01:06:03.855420 kernel: acpiphp: Slot [29] registered Jan 23 01:06:03.855428 kernel: acpiphp: Slot [30] registered Jan 23 01:06:03.855437 kernel: acpiphp: Slot [31] registered Jan 23 01:06:03.855520 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 23 01:06:03.855598 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 23 01:06:03.855701 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 01:06:03.855712 kernel: acpiphp: Slot [0-2] registered Jan 23 01:06:03.855792 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 23 01:06:03.855867 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 23 01:06:03.855944 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 23 01:06:03.856019 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 23 01:06:03.856685 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 01:06:03.856698 kernel: acpiphp: Slot [0-3] registered Jan 23 01:06:03.856782 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 23 01:06:03.856857 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 23 01:06:03.858785 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 23 01:06:03.858939 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 01:06:03.858952 
kernel: acpiphp: Slot [0-4] registered Jan 23 01:06:03.859033 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 23 01:06:03.859109 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 23 01:06:03.859181 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 01:06:03.859191 kernel: acpiphp: Slot [0-5] registered Jan 23 01:06:03.859266 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 23 01:06:03.859341 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 23 01:06:03.859412 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 23 01:06:03.859483 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 01:06:03.859493 kernel: acpiphp: Slot [0-6] registered Jan 23 01:06:03.859564 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 01:06:03.859574 kernel: acpiphp: Slot [0-7] registered Jan 23 01:06:03.859642 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 01:06:03.859652 kernel: acpiphp: Slot [0-8] registered Jan 23 01:06:03.861006 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 01:06:03.861022 kernel: acpiphp: Slot [0-9] registered Jan 23 01:06:03.861091 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 01:06:03.861101 kernel: acpiphp: Slot [0-10] registered Jan 23 01:06:03.861170 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 01:06:03.861180 kernel: acpiphp: Slot [0-11] registered Jan 23 01:06:03.861249 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 01:06:03.861260 kernel: acpiphp: Slot [0-12] registered Jan 23 01:06:03.861334 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 01:06:03.861344 kernel: acpiphp: Slot [0-13] registered Jan 23 01:06:03.861412 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 01:06:03.861422 kernel: acpiphp: Slot [0-14] registered Jan 23 01:06:03.861490 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 01:06:03.861500 kernel: acpiphp: Slot [0-15] registered Jan 23 01:06:03.861569 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 01:06:03.861579 kernel: acpiphp: Slot [0-16] registered Jan 23 01:06:03.861651 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 01:06:03.861670 kernel: acpiphp: Slot [0-17] registered Jan 23 01:06:03.861738 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 01:06:03.861749 kernel: acpiphp: Slot [0-18] registered Jan 23 01:06:03.861817 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 01:06:03.861828 kernel: acpiphp: Slot [0-19] registered Jan 23 01:06:03.861895 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 01:06:03.861906 kernel: acpiphp: Slot [0-20] registered Jan 23 01:06:03.861976 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 01:06:03.861986 kernel: acpiphp: Slot [0-21] registered Jan 23 01:06:03.862055 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 01:06:03.862065 kernel: acpiphp: Slot [0-22] registered Jan 23 01:06:03.862133 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 01:06:03.862143 kernel: acpiphp: Slot [0-23] registered Jan 23 01:06:03.862211 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 01:06:03.862223 kernel: acpiphp: Slot [0-24] registered Jan 23 01:06:03.862292 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 01:06:03.862302 kernel: acpiphp: Slot [0-25] registered Jan 23 01:06:03.862384 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 01:06:03.862395 kernel: acpiphp: Slot [0-26] registered Jan 23 01:06:03.862462 kernel: pci 0000:00:05.1: PCI bridge to [bus 
1b] Jan 23 01:06:03.862473 kernel: acpiphp: Slot [0-27] registered Jan 23 01:06:03.862541 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 01:06:03.862553 kernel: acpiphp: Slot [0-28] registered Jan 23 01:06:03.862621 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 01:06:03.862631 kernel: acpiphp: Slot [0-29] registered Jan 23 01:06:03.865165 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 01:06:03.865185 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 01:06:03.865194 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 01:06:03.865202 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 01:06:03.865210 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 01:06:03.865218 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 01:06:03.865229 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 01:06:03.865237 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 01:06:03.865244 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 01:06:03.865252 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 01:06:03.865260 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 01:06:03.865267 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 01:06:03.865275 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 01:06:03.865283 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 01:06:03.865291 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 01:06:03.865301 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 01:06:03.865308 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 01:06:03.865316 kernel: iommu: Default domain type: Translated Jan 23 01:06:03.865324 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 01:06:03.865332 kernel: efivars: Registered efivars operations Jan 23 01:06:03.865339 kernel: PCI: Using ACPI for IRQ routing Jan 23 01:06:03.865347 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 01:06:03.865355 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 23 01:06:03.865362 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 23 01:06:03.865371 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 23 01:06:03.865379 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 23 01:06:03.865386 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 23 01:06:03.865394 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 23 01:06:03.865401 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 23 01:06:03.865409 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 23 01:06:03.865416 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 23 01:06:03.865496 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 01:06:03.865572 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 01:06:03.865644 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 01:06:03.865653 kernel: vgaarb: loaded Jan 23 01:06:03.865670 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 01:06:03.866220 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 01:06:03.866229 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 01:06:03.866237 kernel: pnp: PnP ACPI init Jan 23 01:06:03.866332 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] 
has been reserved Jan 23 01:06:03.866359 kernel: pnp: PnP ACPI: found 5 devices Jan 23 01:06:03.866368 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 01:06:03.866376 kernel: NET: Registered PF_INET protocol family Jan 23 01:06:03.866384 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 01:06:03.866392 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 01:06:03.866400 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 01:06:03.866408 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 01:06:03.866416 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 01:06:03.866424 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 01:06:03.866434 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 01:06:03.866442 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 01:06:03.866449 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 01:06:03.866457 kernel: NET: Registered PF_XDP protocol family Jan 23 01:06:03.866538 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 23 01:06:03.866612 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 23 01:06:03.866697 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 23 01:06:03.866771 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 23 01:06:03.866846 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 23 01:06:03.866917 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 23 01:06:03.866988 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 23 01:06:03.867059 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 23 01:06:03.867132 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 23 01:06:03.867204 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 23 01:06:03.867277 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 23 01:06:03.867347 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 23 01:06:03.867419 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 23 01:06:03.867490 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 23 01:06:03.867560 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 23 01:06:03.867630 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 23 01:06:03.867717 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 23 01:06:03.867789 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 23 01:06:03.867858 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 23 01:06:03.867929 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 23 01:06:03.868003 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 23 01:06:03.868073 kernel: pci 
0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 23 01:06:03.868143 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 23 01:06:03.868213 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 23 01:06:03.868283 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 23 01:06:03.868352 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 23 01:06:03.868423 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 23 01:06:03.868493 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 23 01:06:03.868565 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 23 01:06:03.868632 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 23 01:06:03.868735 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 23 01:06:03.868805 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 23 01:06:03.868872 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 23 01:06:03.868940 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 23 01:06:03.869009 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 23 01:06:03.869077 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 23 01:06:03.869149 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 23 01:06:03.869218 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 23 01:06:03.869286 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 23 01:06:03.869355 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 23 01:06:03.869423 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 23 01:06:03.869491 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 23 01:06:03.869560 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.869628 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.869714 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.869783 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.869851 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.869919 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.869988 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.870056 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.870124 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.870195 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.870265 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.870333 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.870413 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.870483 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.870552 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 
01:06:03.870620 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.870711 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.870783 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.870852 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.870920 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.870988 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.871057 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.871127 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.871195 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.871263 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.871334 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.871402 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.871470 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.871538 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.871606 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.871683 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 23 01:06:03.871752 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 23 01:06:03.871821 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 23 01:06:03.871891 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 23 01:06:03.871958 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 23 01:06:03.872025 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 23 01:06:03.872093 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 23 01:06:03.872161 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 23 01:06:03.872228 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 23 01:06:03.872297 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 23 01:06:03.872365 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 23 01:06:03.872434 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Jan 23 01:06:03.872502 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Jan 23 01:06:03.872570 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.872637 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.872713 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.872781 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.872848 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.872917 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.872984 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.873055 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.873123 kernel: pci 
0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.873192 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.873260 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.873328 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.873396 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.873463 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.873535 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.873602 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.873682 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.873753 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.873821 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.873889 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.873957 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.874024 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.874095 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.874163 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.874231 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.874299 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.874377 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.874445 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.874513 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 01:06:03.874581 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 23 01:06:03.874657 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 01:06:03.874739 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 23 01:06:03.874810 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 23 01:06:03.874879 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 01:06:03.874949 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 01:06:03.875026 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 23 01:06:03.875144 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 23 01:06:03.875213 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 01:06:03.875285 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned Jan 23 01:06:03.875357 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 01:06:03.875424 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 23 01:06:03.875492 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 01:06:03.875559 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 01:06:03.875628 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 23 01:06:03.877745 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 
01:06:03.877831 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 01:06:03.877904 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 23 01:06:03.877974 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 01:06:03.878056 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 01:06:03.878132 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 23 01:06:03.878201 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 01:06:03.878272 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 01:06:03.878342 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 23 01:06:03.878425 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 01:06:03.878497 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 01:06:03.878568 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 23 01:06:03.878637 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 01:06:03.878762 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 01:06:03.878832 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 23 01:06:03.878899 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 01:06:03.878968 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 01:06:03.879036 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 23 01:06:03.879104 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 01:06:03.879173 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 01:06:03.879241 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 23 01:06:03.879313 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 01:06:03.879382 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 01:06:03.879449 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 23 01:06:03.879517 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 01:06:03.879587 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 01:06:03.879655 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 23 01:06:03.881045 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 01:06:03.881203 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 01:06:03.881317 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 23 01:06:03.881387 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 01:06:03.881489 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 01:06:03.881558 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 23 01:06:03.881627 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 01:06:03.881706 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 01:06:03.881774 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 23 01:06:03.881842 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 01:06:03.881913 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 01:06:03.881990 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 23 01:06:03.882059 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 
01:06:03.882133 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 01:06:03.882202 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Jan 23 01:06:03.882271 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 23 01:06:03.882340 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 01:06:03.882426 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 01:06:03.882495 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Jan 23 01:06:03.882563 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 23 01:06:03.882634 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 01:06:03.882714 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 01:06:03.882784 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Jan 23 01:06:03.882852 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 23 01:06:03.882920 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 01:06:03.882990 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 01:06:03.883059 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Jan 23 01:06:03.883130 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 23 01:06:03.883228 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 01:06:03.883301 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 01:06:03.883370 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Jan 23 01:06:03.883439 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 23 01:06:03.883508 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 01:06:03.883577 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 01:06:03.883646 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff] Jan 23 01:06:03.884038 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 23 01:06:03.884112 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 01:06:03.884183 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 01:06:03.884253 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Jan 23 01:06:03.884321 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 23 01:06:03.884389 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 01:06:03.884459 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 01:06:03.884531 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Jan 23 01:06:03.884599 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 23 01:06:03.884679 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 01:06:03.884750 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 01:06:03.884818 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Jan 23 01:06:03.884886 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 23 01:06:03.884955 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 01:06:03.885029 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 01:06:03.885099 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Jan 23 01:06:03.885168 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 23 01:06:03.885236 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 
01:06:03.885307 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 01:06:03.885375 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Jan 23 01:06:03.885443 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 01:06:03.885512 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 01:06:03.885584 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 01:06:03.885652 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Jan 23 01:06:03.885738 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 01:06:03.885807 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 01:06:03.885876 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 01:06:03.885944 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Jan 23 01:06:03.886014 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 23 01:06:03.886250 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 01:06:03.886326 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 01:06:03.886401 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 01:06:03.886462 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 01:06:03.886523 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 23 01:06:03.886583 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 23 01:06:03.886644 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Jan 23 01:06:03.886731 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Jan 23 01:06:03.886800 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Jan 23 01:06:03.886863 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 01:06:03.886933 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Jan 23 01:06:03.887001 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Jan 23 01:06:03.887066 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 01:06:03.887135 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Jan 23 01:06:03.887202 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 01:06:03.887273 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Jan 23 01:06:03.887338 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 01:06:03.887405 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Jan 23 01:06:03.887470 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 01:06:03.887538 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Jan 23 01:06:03.887601 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 01:06:03.887683 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Jan 23 01:06:03.887748 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 01:06:03.887818 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Jan 23 01:06:03.887883 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 01:06:03.887951 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Jan 23 01:06:03.888016 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 01:06:03.888086 kernel: pci_bus 0000:0a: 
resource 1 [mem 0x83000000-0x831fffff] Jan 23 01:06:03.888151 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 01:06:03.888248 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Jan 23 01:06:03.888314 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 01:06:03.888398 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Jan 23 01:06:03.888483 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 01:06:03.888557 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Jan 23 01:06:03.888621 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 01:06:03.888705 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Jan 23 01:06:03.888772 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 01:06:03.888873 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Jan 23 01:06:03.888963 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 01:06:03.889034 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Jan 23 01:06:03.889100 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 01:06:03.889171 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Jan 23 01:06:03.889236 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 01:06:03.889304 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Jan 23 01:06:03.889371 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Jan 23 01:06:03.889434 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 01:06:03.889501 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Jan 23 01:06:03.889565 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Jan 23 01:06:03.889628 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 01:06:03.889748 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Jan 23 01:06:03.889816 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Jan 23 01:06:03.889945 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 01:06:03.890020 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Jan 23 01:06:03.890084 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Jan 23 01:06:03.890149 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 01:06:03.890218 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Jan 23 01:06:03.890283 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] Jan 23 01:06:03.890357 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 01:06:03.890432 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Jan 23 01:06:03.890496 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Jan 23 01:06:03.890560 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 01:06:03.890630 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Jan 23 01:06:03.890713 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Jan 23 01:06:03.890783 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 01:06:03.890852 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Jan 23 01:06:03.890919 kernel: pci_bus 0000:19: resource 1 [mem 
0x81200000-0x813fffff] Jan 23 01:06:03.890983 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 01:06:03.891054 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Jan 23 01:06:03.891118 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Jan 23 01:06:03.891182 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 01:06:03.891250 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Jan 23 01:06:03.891316 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Jan 23 01:06:03.891380 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 01:06:03.891448 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Jan 23 01:06:03.891512 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Jan 23 01:06:03.891576 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 01:06:03.891644 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Jan 23 01:06:03.891741 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Jan 23 01:06:03.891810 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 01:06:03.891880 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Jan 23 01:06:03.891944 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Jan 23 01:06:03.892009 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 01:06:03.892020 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 01:06:03.892028 kernel: PCI: CLS 0 bytes, default 64 Jan 23 01:06:03.892037 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 01:06:03.892045 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Jan 23 01:06:03.892055 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 01:06:03.892063 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134287020, max_idle_ns: 440795320515 ns Jan 23 01:06:03.892070 kernel: Initialise system trusted keyrings Jan 23 01:06:03.892079 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 01:06:03.892086 kernel: Key type asymmetric registered Jan 23 01:06:03.892094 kernel: Asymmetric key parser 'x509' registered Jan 23 01:06:03.892103 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 01:06:03.892111 kernel: io scheduler mq-deadline registered Jan 23 01:06:03.892120 kernel: io scheduler kyber registered Jan 23 01:06:03.892128 kernel: io scheduler bfq registered Jan 23 01:06:03.892203 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 01:06:03.892276 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 01:06:03.892348 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 01:06:03.892419 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 01:06:03.892491 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 01:06:03.892562 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 01:06:03.892634 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 01:06:03.892739 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 01:06:03.892811 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 01:06:03.892880 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 01:06:03.892951 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 01:06:03.893023 kernel: pcieport 
0000:00:02.5: AER: enabled with IRQ 29 Jan 23 01:06:03.893094 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 01:06:03.893163 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 01:06:03.893233 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 01:06:03.893301 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 01:06:03.893312 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 01:06:03.893382 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 23 01:06:03.893451 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 23 01:06:03.893519 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Jan 23 01:06:03.893587 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Jan 23 01:06:03.893656 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Jan 23 01:06:03.893734 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Jan 23 01:06:03.893807 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Jan 23 01:06:03.893875 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Jan 23 01:06:03.893944 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Jan 23 01:06:03.894012 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Jan 23 01:06:03.894081 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Jan 23 01:06:03.894154 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Jan 23 01:06:03.894223 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Jan 23 01:06:03.894292 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Jan 23 01:06:03.894376 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Jan 23 01:06:03.894445 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39 Jan 23 01:06:03.894455 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 01:06:03.894523 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Jan 23 01:06:03.894591 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Jan 23 01:06:03.894686 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Jan 23 01:06:03.896795 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Jan 23 01:06:03.896871 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Jan 23 01:06:03.896942 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Jan 23 01:06:03.897012 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Jan 23 01:06:03.897081 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Jan 23 01:06:03.897151 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Jan 23 01:06:03.897221 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Jan 23 01:06:03.897296 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Jan 23 01:06:03.897364 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Jan 23 01:06:03.897433 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Jan 23 01:06:03.897502 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Jan 23 01:06:03.897572 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Jan 23 01:06:03.897642 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Jan 23 01:06:03.897653 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 23 01:06:03.897755 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Jan 23 01:06:03.897827 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Jan 23 01:06:03.897897 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Jan 23 01:06:03.897966 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Jan 23 01:06:03.898036 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Jan 23 01:06:03.898105 kernel: 
pcieport 0000:00:05.2: AER: enabled with IRQ 50 Jan 23 01:06:03.898175 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51 Jan 23 01:06:03.898244 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Jan 23 01:06:03.898313 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Jan 23 01:06:03.898393 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Jan 23 01:06:03.898405 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 01:06:03.898413 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 01:06:03.898422 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 01:06:03.898430 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 01:06:03.898438 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 01:06:03.898446 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 01:06:03.898454 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 01:06:03.898528 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 01:06:03.898597 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 01:06:03.898661 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T01:06:03 UTC (1769130363) Jan 23 01:06:03.899450 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 23 01:06:03.899461 kernel: intel_pstate: CPU model not supported Jan 23 01:06:03.899469 kernel: efifb: probing for efifb Jan 23 01:06:03.899477 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 23 01:06:03.899485 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 23 01:06:03.899493 kernel: efifb: scrolling: redraw Jan 23 01:06:03.899504 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 01:06:03.899512 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 01:06:03.899520 kernel: fb0: EFI VGA frame buffer device Jan 23 01:06:03.899528 kernel: pstore: Using crash dump compression: deflate Jan 23 01:06:03.899536 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 01:06:03.899544 kernel: NET: Registered PF_INET6 protocol family Jan 23 01:06:03.899552 kernel: Segment Routing with IPv6 Jan 23 01:06:03.899559 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 01:06:03.899567 kernel: NET: Registered PF_PACKET protocol family Jan 23 01:06:03.899575 kernel: Key type dns_resolver registered Jan 23 01:06:03.900171 kernel: IPI shorthand broadcast: enabled Jan 23 01:06:03.900181 kernel: sched_clock: Marking stable (3861002458, 154586037)->(4118532660, -102944165) Jan 23 01:06:03.900189 kernel: registered taskstats version 1 Jan 23 01:06:03.900197 kernel: Loading compiled-in X.509 certificates Jan 23 01:06:03.900205 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a' Jan 23 01:06:03.900213 kernel: Demotion targets for Node 0: null Jan 23 01:06:03.900221 kernel: Key type .fscrypt registered Jan 23 01:06:03.900229 kernel: Key type fscrypt-provisioning registered Jan 23 01:06:03.900237 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 01:06:03.900247 kernel: ima: Allocated hash algorithm: sha1 Jan 23 01:06:03.900255 kernel: ima: No architecture policies found Jan 23 01:06:03.900263 kernel: clk: Disabling unused clocks Jan 23 01:06:03.900271 kernel: Warning: unable to open an initial console. 
Jan 23 01:06:03.900279 kernel: Freeing unused kernel image (initmem) memory: 46196K Jan 23 01:06:03.900287 kernel: Write protecting the kernel read-only data: 40960k Jan 23 01:06:03.900295 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 01:06:03.900303 kernel: Run /init as init process Jan 23 01:06:03.900311 kernel: with arguments: Jan 23 01:06:03.900321 kernel: /init Jan 23 01:06:03.900328 kernel: with environment: Jan 23 01:06:03.900336 kernel: HOME=/ Jan 23 01:06:03.900343 kernel: TERM=linux Jan 23 01:06:03.900352 systemd[1]: Successfully made /usr/ read-only. Jan 23 01:06:03.900364 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:06:03.900372 systemd[1]: Detected virtualization kvm. Jan 23 01:06:03.900382 systemd[1]: Detected architecture x86-64. Jan 23 01:06:03.900390 systemd[1]: Running in initrd. Jan 23 01:06:03.900398 systemd[1]: No hostname configured, using default hostname. Jan 23 01:06:03.900407 systemd[1]: Hostname set to . Jan 23 01:06:03.900415 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:06:03.900434 systemd[1]: Queued start job for default target initrd.target. Jan 23 01:06:03.900444 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:06:03.900453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:06:03.900462 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 01:06:03.900471 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:06:03.900479 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 01:06:03.900490 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 01:06:03.900499 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 01:06:03.900508 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 01:06:03.900516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:06:03.900524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:06:03.900532 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:06:03.900541 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:06:03.900551 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:06:03.900559 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:06:03.900567 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:06:03.900575 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:06:03.900584 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 01:06:03.900592 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 01:06:03.900600 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 01:06:03.900608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:06:03.900618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:06:03.900626 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:06:03.900635 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 01:06:03.900643 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:06:03.900651 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 01:06:03.900659 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 01:06:03.900678 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 01:06:03.900686 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:06:03.900694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:06:03.900704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:06:03.900734 systemd-journald[225]: Collecting audit messages is disabled. Jan 23 01:06:03.900756 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 01:06:03.900767 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:06:03.900775 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 01:06:03.900784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:06:03.900792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:03.900804 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:06:03.900813 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 01:06:03.900822 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 01:06:03.900831 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:06:03.900839 kernel: Bridge firewalling registered Jan 23 01:06:03.900847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:06:03.900857 systemd-journald[225]: Journal started Jan 23 01:06:03.900877 systemd-journald[225]: Runtime Journal (/run/log/journal/3c4c3d51fbf94974ab322177664e5502) is 8M, max 78M, 70M free. Jan 23 01:06:03.850873 systemd-modules-load[226]: Inserted module 'overlay' Jan 23 01:06:03.890760 systemd-modules-load[226]: Inserted module 'br_netfilter' Jan 23 01:06:03.903987 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:06:03.907683 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:06:03.908529 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:06:03.914021 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:06:03.915253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:06:03.916987 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 01:06:03.919220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 01:06:03.929182 systemd-tmpfiles[258]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 01:06:03.932797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:06:03.935517 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:06:03.939364 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:06:03.968872 systemd-resolved[273]: Positive Trust Anchors: Jan 23 01:06:03.969526 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:06:03.969559 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:06:03.972770 systemd-resolved[273]: Defaulting to hostname 'linux'. Jan 23 01:06:03.974871 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:06:03.975367 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:06:04.014695 kernel: SCSI subsystem initialized Jan 23 01:06:04.024686 kernel: Loading iSCSI transport class v2.0-870. Jan 23 01:06:04.034694 kernel: iscsi: registered transport (tcp) Jan 23 01:06:04.055880 kernel: iscsi: registered transport (qla4xxx) Jan 23 01:06:04.055996 kernel: QLogic iSCSI HBA Driver Jan 23 01:06:04.075964 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:06:04.093363 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:06:04.095281 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:06:04.137018 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 01:06:04.138688 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 01:06:04.182704 kernel: raid6: avx512x4 gen() 43161 MB/s Jan 23 01:06:04.199691 kernel: raid6: avx512x2 gen() 44469 MB/s Jan 23 01:06:04.216698 kernel: raid6: avx512x1 gen() 44406 MB/s Jan 23 01:06:04.233699 kernel: raid6: avx2x4 gen() 34415 MB/s Jan 23 01:06:04.250707 kernel: raid6: avx2x2 gen() 34088 MB/s Jan 23 01:06:04.268034 kernel: raid6: avx2x1 gen() 26627 MB/s Jan 23 01:06:04.268098 kernel: raid6: using algorithm avx512x2 gen() 44469 MB/s Jan 23 01:06:04.286135 kernel: raid6: .... 
xor() 26815 MB/s, rmw enabled Jan 23 01:06:04.286193 kernel: raid6: using avx512x2 recovery algorithm Jan 23 01:06:04.305887 kernel: xor: automatically using best checksumming function avx Jan 23 01:06:04.442700 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 01:06:04.449512 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:06:04.453057 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:06:04.472517 systemd-udevd[475]: Using default interface naming scheme 'v255'. Jan 23 01:06:04.477067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:06:04.479931 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 01:06:04.502580 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Jan 23 01:06:04.530612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:06:04.532928 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:06:04.610370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:06:04.613435 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 01:06:04.687684 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 23 01:06:04.694688 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 23 01:06:04.709985 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 01:06:04.710044 kernel: GPT:17805311 != 104857599 Jan 23 01:06:04.710056 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 01:06:04.711045 kernel: GPT:17805311 != 104857599 Jan 23 01:06:04.713037 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 01:06:04.713057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 01:06:04.718373 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 01:06:04.739686 kernel: AES CTR mode by8 optimization enabled Jan 23 01:06:04.744685 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 01:06:04.752821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:06:04.753656 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:04.762422 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:06:04.772294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:06:04.781788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:06:04.781873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:04.788642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:06:04.792860 kernel: libata version 3.00 loaded. Jan 23 01:06:04.801116 kernel: ACPI: bus type USB registered Jan 23 01:06:04.801170 kernel: usbcore: registered new interface driver usbfs Jan 23 01:06:04.803441 kernel: usbcore: registered new interface driver hub Jan 23 01:06:04.814679 kernel: usbcore: registered new device driver usb Jan 23 01:06:04.816678 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 01:06:04.825374 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 23 01:06:04.830680 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 01:06:04.830704 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 01:06:04.830867 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 01:06:04.830958 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 01:06:04.831996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:04.834154 kernel: scsi host0: ahci Jan 23 01:06:04.835196 kernel: scsi host1: ahci Jan 23 01:06:04.842045 kernel: scsi host2: ahci Jan 23 01:06:04.842089 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 23 01:06:04.845228 kernel: scsi host3: ahci Jan 23 01:06:04.845345 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 23 01:06:04.845450 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 23 01:06:04.846871 kernel: scsi host4: ahci Jan 23 01:06:04.847243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 01:06:04.859406 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 23 01:06:04.859614 kernel: scsi host5: ahci Jan 23 01:06:04.859736 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 61 lpm-pol 1 Jan 23 01:06:04.859748 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 61 lpm-pol 1 Jan 23 01:06:04.859757 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 61 lpm-pol 1 Jan 23 01:06:04.859767 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 61 lpm-pol 1 Jan 23 01:06:04.859776 kernel: hub 1-0:1.0: USB hub found Jan 23 01:06:04.859899 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 61 lpm-pol 1 Jan 23 01:06:04.859910 kernel: hub 1-0:1.0: 2 ports detected Jan 23 01:06:04.860010 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 61 lpm-pol 1 Jan 23 01:06:05.076740 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 23 01:06:05.081049 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 01:06:05.168437 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 01:06:05.182111 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 01:06:05.182148 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 01:06:05.182174 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 01:06:05.182191 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 01:06:05.182208 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 01:06:05.182224 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 01:06:05.181514 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 23 01:06:05.183629 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 01:06:05.190081 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 01:06:05.191566 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:06:05.192366 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:06:05.193395 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:06:05.195578 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Jan 23 01:06:05.213444 disk-uuid[678]: Primary Header is updated. Jan 23 01:06:05.213444 disk-uuid[678]: Secondary Entries is updated. Jan 23 01:06:05.213444 disk-uuid[678]: Secondary Header is updated. Jan 23 01:06:05.217308 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 01:06:05.228396 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:06:05.269707 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 01:06:05.278401 kernel: usbcore: registered new interface driver usbhid Jan 23 01:06:05.278459 kernel: usbhid: USB HID core driver Jan 23 01:06:05.287136 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 23 01:06:05.287193 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Jan 23 01:06:06.236873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 01:06:06.238093 disk-uuid[679]: The operation has completed successfully. Jan 23 01:06:06.351691 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 01:06:06.351824 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 01:06:06.383286 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 01:06:06.401523 sh[698]: Success Jan 23 01:06:06.438625 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 01:06:06.438730 kernel: device-mapper: uevent: version 1.0.3 Jan 23 01:06:06.438764 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 01:06:06.457756 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 01:06:06.569978 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 01:06:06.587262 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 01:06:06.590816 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 01:06:06.627705 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (710) Jan 23 01:06:06.632939 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 01:06:06.632984 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:06:06.662088 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 01:06:06.662183 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 01:06:06.666062 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 01:06:06.667995 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:06:06.669564 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 01:06:06.671027 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 01:06:06.676221 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 01:06:06.728705 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (741) Jan 23 01:06:06.732717 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:06.734700 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:06:06.743905 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:06:06.743958 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:06:06.749686 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:06.752246 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 01:06:06.753580 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 01:06:06.808566 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:06:06.810437 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:06:06.838435 systemd-networkd[880]: lo: Link UP Jan 23 01:06:06.838443 systemd-networkd[880]: lo: Gained carrier Jan 23 01:06:06.839408 systemd-networkd[880]: Enumeration completed Jan 23 01:06:06.839658 systemd-networkd[880]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:06.839676 systemd-networkd[880]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:06:06.839837 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:06:06.840331 systemd-networkd[880]: eth0: Link UP Jan 23 01:06:06.840420 systemd-networkd[880]: eth0: Gained carrier Jan 23 01:06:06.840429 systemd-networkd[880]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:06.841606 systemd[1]: Reached target network.target - Network. Jan 23 01:06:06.857696 systemd-networkd[880]: eth0: DHCPv4 address 10.0.2.223/25, gateway 10.0.2.129 acquired from 10.0.2.129 Jan 23 01:06:07.472214 ignition[814]: Ignition 2.22.0 Jan 23 01:06:07.472242 ignition[814]: Stage: fetch-offline Jan 23 01:06:07.472329 ignition[814]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:07.472351 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:06:07.472544 ignition[814]: parsed url from cmdline: "" Jan 23 01:06:07.472552 ignition[814]: no config URL provided Jan 23 01:06:07.472565 ignition[814]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:06:07.472582 ignition[814]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:06:07.472594 ignition[814]: failed to fetch config: resource requires networking Jan 23 01:06:07.477357 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:06:07.473324 ignition[814]: Ignition finished successfully Jan 23 01:06:07.481915 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 01:06:07.531646 ignition[890]: Ignition 2.22.0 Jan 23 01:06:07.531705 ignition[890]: Stage: fetch Jan 23 01:06:07.531996 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:07.532015 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:06:07.532205 ignition[890]: parsed url from cmdline: "" Jan 23 01:06:07.532212 ignition[890]: no config URL provided Jan 23 01:06:07.532223 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:06:07.532237 ignition[890]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:06:07.532446 ignition[890]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 23 01:06:07.532571 ignition[890]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 23 01:06:07.532607 ignition[890]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 23 01:06:08.462066 ignition[890]: GET result: OK Jan 23 01:06:08.462842 ignition[890]: parsing config with SHA512: 9190fb4672d85e65e676340f5bdff3754a4bbab3dfb70887abfdf796d24235865bfd933661083e0f7e5395f82af85b94d3d74b4e14f35ff3ea28667c449074b3 Jan 23 01:06:08.469269 unknown[890]: fetched base config from "system" Jan 23 01:06:08.469291 unknown[890]: fetched base config from "system" Jan 23 01:06:08.469304 unknown[890]: fetched user config from "openstack" Jan 23 01:06:08.470214 ignition[890]: fetch: fetch complete Jan 23 01:06:08.470226 ignition[890]: fetch: fetch passed Jan 23 01:06:08.470329 ignition[890]: Ignition finished successfully Jan 23 01:06:08.475530 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 01:06:08.479614 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 01:06:08.523880 ignition[896]: Ignition 2.22.0 Jan 23 01:06:08.524789 ignition[896]: Stage: kargs Jan 23 01:06:08.524984 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:08.529468 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 01:06:08.524997 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:06:08.526365 ignition[896]: kargs: kargs passed Jan 23 01:06:08.526466 ignition[896]: Ignition finished successfully Jan 23 01:06:08.534374 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 01:06:08.572441 ignition[903]: Ignition 2.22.0 Jan 23 01:06:08.572453 ignition[903]: Stage: disks Jan 23 01:06:08.572614 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:08.572624 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:06:08.575714 ignition[903]: disks: disks passed Jan 23 01:06:08.576173 ignition[903]: Ignition finished successfully Jan 23 01:06:08.578495 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 01:06:08.580478 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 01:06:08.581843 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 01:06:08.583092 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:06:08.583928 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:06:08.585157 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:06:08.588173 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 23 01:06:08.639509 systemd-fsck[912]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 01:06:08.642854 systemd-networkd[880]: eth0: Gained IPv6LL Jan 23 01:06:08.645444 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:06:08.650521 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:06:08.872703 kernel: EXT4-fs (vda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:06:08.874602 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:06:08.877031 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:06:08.882869 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:06:08.887846 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:06:08.891769 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 01:06:08.894895 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 23 01:06:08.899880 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:06:08.900797 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:06:08.918654 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:06:08.920583 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 01:06:08.926866 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920) Jan 23 01:06:08.931691 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:08.931734 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:06:08.953332 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:06:08.953381 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:06:08.956462 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:06:09.009688 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:09.023223 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:06:09.031234 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:06:09.038227 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:06:09.043677 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:06:09.158989 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:06:09.161258 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:06:09.162830 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:06:09.179328 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 01:06:09.182418 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:09.211157 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
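[Editor's note] At this point the ROOT ext4 filesystem on vda9 is mounted at /sysroot and the OEM btrfs volume on vda6 at /sysroot/oem. The hedged sketch below just confirms such mounts from /proc/self/mounts; the two mount points checked are taken from the log, and nothing in the snippet is specific to Flatcar.

def read_mounts(path="/proc/self/mounts"):
    """Return a dict of mount point -> (source device, filesystem type)."""
    mounts = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 3:
                source, target, fstype = fields[0], fields[1], fields[2]
                mounts[target] = (source, fstype)
    return mounts

if __name__ == "__main__":
    mounts = read_mounts()
    for target in ("/sysroot", "/sysroot/oem"):
        print(target, "->", mounts.get(target, "not mounted"))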
Jan 23 01:06:09.220348 ignition[1036]: INFO : Ignition 2.22.0 Jan 23 01:06:09.220348 ignition[1036]: INFO : Stage: mount Jan 23 01:06:09.221397 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:09.221397 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:06:09.221397 ignition[1036]: INFO : mount: mount passed Jan 23 01:06:09.221397 ignition[1036]: INFO : Ignition finished successfully Jan 23 01:06:09.222502 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:06:10.054693 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:12.064718 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:16.078741 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:16.089378 coreos-metadata[922]: Jan 23 01:06:16.089 WARN failed to locate config-drive, using the metadata service API instead Jan 23 01:06:16.127013 coreos-metadata[922]: Jan 23 01:06:16.126 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 01:06:16.931862 coreos-metadata[922]: Jan 23 01:06:16.931 INFO Fetch successful Jan 23 01:06:16.933733 coreos-metadata[922]: Jan 23 01:06:16.933 INFO wrote hostname ci-4459-2-2-n-41f4b5c765 to /sysroot/etc/hostname Jan 23 01:06:16.936647 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 23 01:06:16.936997 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 23 01:06:16.940272 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:06:16.978885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:06:17.047693 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1053) Jan 23 01:06:17.055143 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:17.055253 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:06:17.069492 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:06:17.069621 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:06:17.073735 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
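[Editor's note] Here the hostname agent gives up on the config drive after the repeated "Can't lookup blockdev" probes and fetches the hostname from the metadata path in the log, writing it to /sysroot/etc/hostname. The sketch below mirrors that fallback in a hedged way; it is not the coreos-metadata code, and the output path is a stand-in so the snippet can run anywhere.

import os
import urllib.request

CONFIG_DRIVE = "/dev/disk/by-label/config-2"
HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

def resolve_hostname(out_path="/tmp/hostname"):
    """Prefer a config drive if present; otherwise ask the metadata service."""
    if os.path.exists(CONFIG_DRIVE):
        print("config drive present; would read metadata from", CONFIG_DRIVE)
        return None
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open(out_path, "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname:", resolve_hostname())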
Jan 23 01:06:17.131421 ignition[1070]: INFO : Ignition 2.22.0 Jan 23 01:06:17.132689 ignition[1070]: INFO : Stage: files Jan 23 01:06:17.135003 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:17.135003 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:06:17.135003 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:06:17.137297 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:06:17.137297 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:06:17.142735 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:06:17.143434 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:06:17.143434 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:06:17.143393 unknown[1070]: wrote ssh authorized keys file for user: core Jan 23 01:06:17.149696 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:06:17.150433 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 01:06:17.151421 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:06:17.152048 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:06:17.152048 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:06:17.153683 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:06:17.153683 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:06:17.153683 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 01:06:17.427040 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 23 01:06:18.027741 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:06:18.029179 ignition[1070]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:06:18.031685 ignition[1070]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:06:18.031685 ignition[1070]: INFO : files: files passed Jan 23 01:06:18.031685 ignition[1070]: INFO : Ignition finished successfully Jan 23 01:06:18.033394 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:06:18.035435 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
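[Editor's note] The files stage above writes the core user's SSH keys, install.sh, update.conf, and a sysext link /sysroot/etc/extensions/kubernetes.raw pointing at /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw whose payload is fetched from extensions.flatcar.org. The hedged sketch below only imitates the op(5) symlink step, under a scratch root so it can run anywhere; the scratch path is an assumption.

import os

def link_sysext(root, name, target):
    """Create <root>/etc/extensions/<name> -> <target>."""
    ext_dir = os.path.join(root, "etc/extensions")
    os.makedirs(ext_dir, exist_ok=True)
    link = os.path.join(ext_dir, name)
    if os.path.lexists(link):
        os.unlink(link)
    os.symlink(target, link)
    return link

if __name__ == "__main__":
    link = link_sysext("/tmp/sysroot-demo", "kubernetes.raw",
                       "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw")
    print(link, "->", os.readlink(link))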
Jan 23 01:06:18.036491 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 01:06:18.055095 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 01:06:18.055208 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 01:06:18.063183 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:06:18.063183 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:06:18.065702 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:06:18.067530 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:06:18.068474 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 01:06:18.070011 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:06:18.097632 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:06:18.097764 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:06:18.099177 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:06:18.099931 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:06:18.101402 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:06:18.102196 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:06:18.127951 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:06:18.130794 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 01:06:18.152161 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:06:18.152967 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:06:18.154377 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:06:18.155614 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 01:06:18.155772 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:06:18.157504 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:06:18.158797 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:06:18.159930 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 01:06:18.161075 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:06:18.162222 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:06:18.163539 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:06:18.164729 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:06:18.165756 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:06:18.166734 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:06:18.167622 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:06:18.168517 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:06:18.169332 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:06:18.169455 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 23 01:06:18.170591 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:06:18.171442 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:06:18.172210 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:06:18.172284 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:06:18.173031 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:06:18.173132 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:06:18.174324 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:06:18.174429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:06:18.175191 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:06:18.175275 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:06:18.176794 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:06:18.179389 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:06:18.181718 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:06:18.181830 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:06:18.182528 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:06:18.182618 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:06:18.187818 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:06:18.190502 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 01:06:18.208710 ignition[1125]: INFO : Ignition 2.22.0 Jan 23 01:06:18.209645 ignition[1125]: INFO : Stage: umount Jan 23 01:06:18.210711 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:18.210711 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:06:18.212018 ignition[1125]: INFO : umount: umount passed Jan 23 01:06:18.212490 ignition[1125]: INFO : Ignition finished successfully Jan 23 01:06:18.212888 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:06:18.214894 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 01:06:18.215012 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:06:18.216986 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:06:18.217040 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:06:18.217525 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:06:18.217562 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:06:18.217995 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 01:06:18.218025 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 01:06:18.218905 systemd[1]: Stopped target network.target - Network. Jan 23 01:06:18.220244 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:06:18.220295 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:06:18.221152 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:06:18.222027 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:06:18.225705 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 23 01:06:18.226198 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:06:18.227128 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:06:18.228063 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:06:18.228101 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:06:18.228920 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:06:18.228948 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:06:18.229781 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:06:18.229827 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:06:18.230639 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:06:18.230684 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:06:18.231588 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:06:18.232416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:06:18.234328 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:06:18.234403 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:06:18.235449 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:06:18.235525 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:06:18.239179 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:06:18.239637 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:06:18.242603 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:06:18.243247 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:06:18.243763 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:06:18.245334 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:06:18.246507 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:06:18.247344 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:06:18.247754 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:06:18.249243 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:06:18.249963 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:06:18.250334 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:06:18.251152 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:06:18.251185 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:06:18.252769 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:06:18.252805 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:06:18.253916 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:06:18.254285 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:06:18.255145 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:06:18.256705 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:06:18.257117 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:06:18.273244 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 23 01:06:18.273853 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:06:18.274750 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:06:18.274841 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:06:18.276244 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:06:18.276305 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:06:18.277030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:06:18.277058 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:06:18.277858 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:06:18.277898 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:06:18.279062 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:06:18.279094 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:06:18.280244 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:06:18.280283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:06:18.282771 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:06:18.283166 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:06:18.283212 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:06:18.285810 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:06:18.285849 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:06:18.286558 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 01:06:18.286594 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:06:18.287336 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:06:18.287368 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:06:18.288073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:06:18.288106 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:18.290207 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:06:18.290252 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 01:06:18.290282 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 01:06:18.290315 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:06:18.301464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:06:18.301545 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:06:18.302147 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:06:18.304811 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:06:18.331435 systemd[1]: Switching root. Jan 23 01:06:18.369558 systemd-journald[225]: Journal stopped Jan 23 01:06:20.385751 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). 
Jan 23 01:06:20.385842 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:06:20.385857 kernel: SELinux: policy capability open_perms=1 Jan 23 01:06:20.385872 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:06:20.385882 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:06:20.385892 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:06:20.385903 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:06:20.385917 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:06:20.385929 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:06:20.385939 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:06:20.385950 kernel: audit: type=1403 audit(1769130379.305:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:06:20.385962 systemd[1]: Successfully loaded SELinux policy in 77.669ms. Jan 23 01:06:20.385982 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.392ms. Jan 23 01:06:20.385993 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:06:20.386004 systemd[1]: Detected virtualization kvm. Jan 23 01:06:20.386016 systemd[1]: Detected architecture x86-64. Jan 23 01:06:20.386030 systemd[1]: Detected first boot. Jan 23 01:06:20.386043 systemd[1]: Hostname set to . Jan 23 01:06:20.386054 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:06:20.386065 zram_generator::config[1168]: No configuration found. Jan 23 01:06:20.386077 kernel: Guest personality initialized and is inactive Jan 23 01:06:20.386087 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 01:06:20.386096 kernel: Initialized host personality Jan 23 01:06:20.386106 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:06:20.386116 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:06:20.386129 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:06:20.386139 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:06:20.386149 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:06:20.386160 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:06:20.386170 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:06:20.386181 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:06:20.386191 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:06:20.386208 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:06:20.386218 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:06:20.386230 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 01:06:20.386241 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:06:20.386252 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:06:20.386263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
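[Editor's note] After the switch root, systemd loads the SELinux policy, detects KVM, and on first boot initializes the machine ID from the VM UUID. The hedged sketch below just reads the related runtime facts from standard sysfs/etc paths; the DMI product UUID is typically what seeds the machine ID on KVM guests, though that detail is an assumption here, and privileged paths may simply be unreadable.

def read_first_line(path):
    try:
        with open(path) as f:
            return f.readline().strip()
    except OSError as exc:
        return f"<unreadable: {exc.strerror}>"

if __name__ == "__main__":
    # 1 = enforcing, 0 = permissive; a missing file means SELinux is not active.
    print("selinux enforce: ", read_first_line("/sys/fs/selinux/enforce"))
    print("machine-id:      ", read_first_line("/etc/machine-id"))
    print("dmi product uuid:", read_first_line("/sys/class/dmi/id/product_uuid"))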
Jan 23 01:06:20.386273 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:06:20.386283 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:06:20.386294 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:06:20.386306 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:06:20.386320 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:06:20.386331 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:06:20.386341 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:06:20.386351 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:06:20.386361 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:06:20.386371 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:06:20.386381 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:06:20.386393 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:06:20.386403 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:06:20.386414 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:06:20.386424 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:06:20.386434 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:06:20.386444 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 01:06:20.386479 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:06:20.386490 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:06:20.386501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:06:20.386513 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:06:20.386523 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:06:20.386534 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:06:20.386544 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:06:20.386554 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:06:20.386566 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:06:20.386576 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:20.386586 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:06:20.386600 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:06:20.386612 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 01:06:20.386623 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:06:20.386633 systemd[1]: Reached target machines.target - Containers. Jan 23 01:06:20.386643 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 23 01:06:20.386659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:20.386680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:06:20.386690 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:06:20.386701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:06:20.386714 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:06:20.386726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:06:20.386737 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:06:20.386748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:06:20.386758 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:06:20.386768 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:06:20.386778 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:06:20.386788 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:06:20.386798 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:06:20.386815 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:20.386827 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:06:20.386837 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:06:20.386847 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:06:20.386857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:06:20.386868 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:06:20.386878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:06:20.386888 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:06:20.386898 systemd[1]: Stopped verity-setup.service. Jan 23 01:06:20.386908 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:20.386920 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:06:20.386930 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:06:20.386940 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:06:20.386950 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:06:20.386960 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:06:20.386971 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:06:20.386981 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:06:20.386991 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:06:20.387000 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:06:20.387012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 23 01:06:20.387023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:06:20.387034 kernel: loop: module loaded Jan 23 01:06:20.387043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:06:20.387077 systemd-journald[1238]: Collecting audit messages is disabled. Jan 23 01:06:20.387101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:06:20.387111 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:06:20.387123 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:06:20.387134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:06:20.387145 systemd-journald[1238]: Journal started Jan 23 01:06:20.387165 systemd-journald[1238]: Runtime Journal (/run/log/journal/3c4c3d51fbf94974ab322177664e5502) is 8M, max 78M, 70M free. Jan 23 01:06:20.102924 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:06:20.128763 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 01:06:20.129203 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:06:20.390724 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:06:20.392456 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:06:20.393707 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 01:06:20.394332 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:06:20.406960 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:06:20.410758 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:06:20.411683 kernel: fuse: init (API version 7.41) Jan 23 01:06:20.413728 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:06:20.413766 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:06:20.416718 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:06:20.421769 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:06:20.422304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:20.426776 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:06:20.438688 kernel: ACPI: bus type drm_connector registered Jan 23 01:06:20.440393 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:06:20.440890 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:06:20.441968 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:06:20.442448 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:06:20.443812 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:06:20.449817 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 23 01:06:20.456826 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:06:20.460718 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:06:20.461428 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:06:20.461582 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:06:20.462279 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:06:20.466320 systemd-journald[1238]: Time spent on flushing to /var/log/journal/3c4c3d51fbf94974ab322177664e5502 is 67.971ms for 1693 entries. Jan 23 01:06:20.466320 systemd-journald[1238]: System Journal (/var/log/journal/3c4c3d51fbf94974ab322177664e5502) is 8M, max 584.8M, 576.8M free. Jan 23 01:06:20.547530 systemd-journald[1238]: Received client request to flush runtime journal. Jan 23 01:06:20.547579 kernel: loop0: detected capacity change from 0 to 110984 Jan 23 01:06:20.466958 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:06:20.468593 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:06:20.555683 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:06:20.469956 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:06:20.477203 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:06:20.483747 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:06:20.487775 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:06:20.499810 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:06:20.536803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:06:20.539566 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Jan 23 01:06:20.539577 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Jan 23 01:06:20.549998 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:06:20.556698 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:06:20.562292 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:06:20.585685 kernel: loop1: detected capacity change from 0 to 128560 Jan 23 01:06:20.605557 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:06:20.636690 kernel: loop2: detected capacity change from 0 to 224512 Jan 23 01:06:20.685688 kernel: loop3: detected capacity change from 0 to 1640 Jan 23 01:06:20.719008 kernel: loop4: detected capacity change from 0 to 110984 Jan 23 01:06:20.720733 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:06:20.723954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:06:20.757427 kernel: loop5: detected capacity change from 0 to 128560 Jan 23 01:06:20.765738 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jan 23 01:06:20.765756 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jan 23 01:06:20.770571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
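[Editor's note] The journald figures in this stretch (an 8M runtime journal with a 78M cap, a system journal capped at 584.8M, and a 67.971ms flush of 1693 entries) can be spot-checked on a running system. The hedged sketch below shells out to journalctl --disk-usage, a standard journalctl flag, and simply prints the result; it does not reproduce the flush itself.

import subprocess

def journal_disk_usage():
    """Ask journalctl how much disk the active and archived journals use."""
    result = subprocess.run(
        ["journalctl", "--disk-usage"],
        capture_output=True, text=True, check=False,
    )
    return result.stdout.strip() or result.stderr.strip()

if __name__ == "__main__":
    print(journal_disk_usage())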
Jan 23 01:06:20.791074 kernel: loop6: detected capacity change from 0 to 224512 Jan 23 01:06:20.823695 kernel: loop7: detected capacity change from 0 to 1640 Jan 23 01:06:20.833489 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-stackit'. Jan 23 01:06:20.834259 (sd-merge)[1317]: Merged extensions into '/usr'. Jan 23 01:06:20.839914 systemd[1]: Reload requested from client PID 1289 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:06:20.840022 systemd[1]: Reloading... Jan 23 01:06:20.911841 zram_generator::config[1344]: No configuration found. Jan 23 01:06:21.087474 systemd[1]: Reloading finished in 246 ms. Jan 23 01:06:21.105055 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:06:21.106044 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:06:21.106843 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:06:21.117729 systemd[1]: Starting ensure-sysext.service... Jan 23 01:06:21.119778 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:06:21.122909 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:06:21.132854 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:06:21.135422 systemd[1]: Reload requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:06:21.135435 systemd[1]: Reloading... Jan 23 01:06:21.154952 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:06:21.156864 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:06:21.157100 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:06:21.157306 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:06:21.157941 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:06:21.158147 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Jan 23 01:06:21.158192 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Jan 23 01:06:21.164378 systemd-udevd[1394]: Using default interface naming scheme 'v255'. Jan 23 01:06:21.165520 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:06:21.165531 systemd-tmpfiles[1393]: Skipping /boot Jan 23 01:06:21.177849 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:06:21.177859 systemd-tmpfiles[1393]: Skipping /boot Jan 23 01:06:21.216708 zram_generator::config[1422]: No configuration found. Jan 23 01:06:21.376766 systemd[1]: Reloading finished in 241 ms. Jan 23 01:06:21.391784 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:06:21.398780 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:06:21.403328 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:06:21.405839 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:06:21.412140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
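[Editor's note] The sd-merge lines above show systemd-sysext discovering the containerd-flatcar, docker-flatcar, kubernetes, and oem-stackit extension images and overlaying them onto /usr; the reload that follows is systemd picking up units shipped by the merged images. The hedged sketch below only approximates the discovery step by listing the common sysext search directories; it does not perform any merge, and the directory list is a simplification of the full search path.

import os

SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def list_extension_images():
    """List candidate sysext images or directories in the usual search paths."""
    found = []
    for directory in SEARCH_DIRS:
        try:
            for entry in sorted(os.listdir(directory)):
                found.append(os.path.join(directory, entry))
        except FileNotFoundError:
            continue
    return found

if __name__ == "__main__":
    for path in list_extension_images():
        print(path)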
Jan 23 01:06:21.414116 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:06:21.421981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:21.422128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:21.424457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:06:21.426747 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:06:21.429138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:06:21.430022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:21.430162 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:21.430259 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:21.433592 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:21.434134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:21.434283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:21.434356 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:21.436397 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:06:21.436886 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:21.443146 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:21.443339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:21.447185 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:06:21.456893 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 23 01:06:21.457519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:21.457619 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:21.457785 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:06:21.458912 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:21.460712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 23 01:06:21.460874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:06:21.461769 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:06:21.469003 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:06:21.471190 systemd[1]: Finished ensure-sysext.service. Jan 23 01:06:21.481915 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:06:21.482725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:06:21.483717 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:06:21.494552 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 01:06:21.494602 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 01:06:21.505298 kernel: PTP clock support registered Jan 23 01:06:21.507470 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:06:21.511121 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:06:21.513023 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 23 01:06:21.513181 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 23 01:06:21.514400 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:06:21.515699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:06:21.516432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:06:21.516564 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:06:21.521831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:06:21.555808 augenrules[1522]: No rules Jan 23 01:06:21.557151 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:06:21.557344 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:06:21.567969 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:06:21.654812 systemd-resolved[1469]: Positive Trust Anchors: Jan 23 01:06:21.654825 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:06:21.654856 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:06:21.660780 systemd-resolved[1469]: Using system hostname 'ci-4459-2-2-n-41f4b5c765'. Jan 23 01:06:21.662325 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:06:21.663205 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:06:21.688983 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 23 01:06:21.689567 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:06:21.691063 systemd-networkd[1489]: lo: Link UP Jan 23 01:06:21.691294 systemd-networkd[1489]: lo: Gained carrier Jan 23 01:06:21.691838 systemd-networkd[1489]: Enumeration completed Jan 23 01:06:21.692451 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:06:21.692991 systemd[1]: Reached target network.target - Network. Jan 23 01:06:21.694869 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:06:21.697772 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:06:21.721143 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:06:21.723281 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:06:21.810765 ldconfig[1284]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:06:21.827924 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 01:06:21.837859 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:21.837995 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:06:21.840501 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:06:21.839773 systemd-networkd[1489]: eth0: Link UP Jan 23 01:06:21.839881 systemd-networkd[1489]: eth0: Gained carrier Jan 23 01:06:21.839900 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:21.851794 systemd-networkd[1489]: eth0: DHCPv4 address 10.0.2.223/25, gateway 10.0.2.129 acquired from 10.0.2.129 Jan 23 01:06:21.863745 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:06:21.877095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:06:21.881215 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:06:21.903781 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:06:21.904526 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:06:21.905183 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:06:21.905632 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:06:21.906065 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:06:21.906582 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:06:21.907028 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:06:21.907385 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:06:21.907927 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:06:21.907962 systemd[1]: Reached target paths.target - Path Units. 
Jan 23 01:06:21.908306 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:06:21.909628 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:06:21.911151 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:06:21.913510 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:06:21.914048 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:06:21.914484 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:06:21.921223 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:06:21.922836 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:06:21.923861 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:06:21.927430 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:06:21.927851 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:06:21.928256 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:06:21.928282 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:06:21.932926 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 01:06:21.933166 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 01:06:21.936054 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 01:06:21.933457 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 01:06:21.937879 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:06:21.940787 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:06:21.946839 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:06:21.948212 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:06:21.959741 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:06:21.963696 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:21.965777 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:06:21.966734 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:06:21.973169 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:06:21.976859 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:06:21.979649 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:06:21.985251 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:06:21.988832 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:06:21.990938 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:06:21.991344 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:06:21.992090 jq[1577]: false Jan 23 01:06:21.996373 systemd[1]: Starting update-engine.service - Update Engine... 
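[Editor's note] By this point the host is lining up its service sockets (dbus.socket, docker.socket, sshd.socket) and starting chronyd, containerd, and the metadata agent. The hedged sketch below spot-checks a few of those units with systemctl is-active, a standard systemctl verb; the unit list is taken from the log and otherwise arbitrary.

import subprocess

UNITS = ("dbus.socket", "docker.socket", "sshd.socket",
         "chronyd.service", "containerd.service")

def unit_states(units=UNITS):
    """Return the is-active state reported by systemctl for each unit."""
    states = {}
    for unit in units:
        result = subprocess.run(
            ["systemctl", "is-active", unit],
            capture_output=True, text=True, check=False,
        )
        states[unit] = result.stdout.strip() or "unknown"
    return states

if __name__ == "__main__":
    for unit, state in unit_states().items():
        print(f"{unit}: {state}")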
Jan 23 01:06:21.999836 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:06:22.005722 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing passwd entry cache Jan 23 01:06:22.004492 oslogin_cache_refresh[1582]: Refreshing passwd entry cache Jan 23 01:06:22.006609 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:06:22.007391 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:06:22.007581 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:06:22.010411 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:06:22.013240 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:06:22.023016 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting users, quitting Jan 23 01:06:22.023009 oslogin_cache_refresh[1582]: Failure getting users, quitting Jan 23 01:06:22.023138 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:06:22.023138 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing group entry cache Jan 23 01:06:22.023029 oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:06:22.023077 oslogin_cache_refresh[1582]: Refreshing group entry cache Jan 23 01:06:22.034991 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting groups, quitting Jan 23 01:06:22.034991 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:06:22.034540 oslogin_cache_refresh[1582]: Failure getting groups, quitting Jan 23 01:06:22.041452 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:06:22.034560 oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:06:22.043454 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:06:22.056706 jq[1590]: true Jan 23 01:06:22.065098 (ntainerd)[1604]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:06:22.077171 jq[1609]: true Jan 23 01:06:22.080321 update_engine[1588]: I20260123 01:06:22.080237 1588 main.cc:92] Flatcar Update Engine starting Jan 23 01:06:22.088360 extend-filesystems[1578]: Found /dev/vda6 Jan 23 01:06:22.095085 dbus-daemon[1575]: [system] SELinux support is enabled Jan 23 01:06:22.095233 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:06:22.098036 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:06:22.098063 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:06:22.099042 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:06:22.099062 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 23 01:06:22.099556 extend-filesystems[1578]: Found /dev/vda9 Jan 23 01:06:22.106000 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:06:22.106649 update_engine[1588]: I20260123 01:06:22.106106 1588 update_check_scheduler.cc:74] Next update check in 10m10s Jan 23 01:06:22.107160 extend-filesystems[1578]: Checking size of /dev/vda9 Jan 23 01:06:22.109274 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:06:22.112861 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:06:22.113720 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:06:22.124525 extend-filesystems[1578]: Resized partition /dev/vda9 Jan 23 01:06:22.133679 extend-filesystems[1638]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:06:22.136207 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 23 01:06:22.136251 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:06:22.134501 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:06:22.141805 systemd[1]: Starting sshkeys.service... Jan 23 01:06:22.146719 kernel: Console: switching to colour dummy device 80x25 Jan 23 01:06:22.160578 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 01:06:22.161680 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 01:06:22.166699 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 23 01:06:22.166928 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 01:06:22.166945 kernel: [drm] features: -context_init Jan 23 01:06:22.188679 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:22.210678 kernel: [drm] number of scanouts: 1 Jan 23 01:06:22.210749 kernel: [drm] number of cap sets: 0 Jan 23 01:06:22.241715 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 01:06:22.254075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:06:22.264277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:06:22.264629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:22.267580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:06:22.308148 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 23 01:06:22.318687 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 12499963 blocks Jan 23 01:06:22.384385 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 01:06:22.391825 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 01:06:22.402128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:06:22.402431 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:22.408298 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:06:22.416685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 01:06:22.426244 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:06:22.437001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:06:22.492984 systemd-logind[1587]: New seat seat0. 
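extend-filesystems has located /dev/vda9 and started resize2fs 1.47.3 against it; the on-line grow itself completes later in the log. The same check-and-grow can be reproduced by hand on an already-enlarged partition, roughly:
lsblk /dev/vda            # confirm vda9 is the root partition and its new size
sudo resize2fs /dev/vda9  # on-line grow of the mounted ext4 filesystem
df -h /                   # verify the extra space is visible on /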
Jan 23 01:06:22.498398 chronyd[1571]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 01:06:22.502229 systemd-logind[1587]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 01:06:22.502247 systemd-logind[1587]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:06:22.502416 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:06:22.513368 chronyd[1571]: Loaded seccomp filter (level 2) Jan 23 01:06:22.514259 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:06:22.515927 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 01:06:22.521110 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:06:22.601523 containerd[1604]: time="2026-01-23T01:06:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:06:22.602686 containerd[1604]: time="2026-01-23T01:06:22.602239967Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:06:22.612675 sshd_keygen[1612]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:06:22.612980 containerd[1604]: time="2026-01-23T01:06:22.612949109Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.953µs" Jan 23 01:06:22.613042 containerd[1604]: time="2026-01-23T01:06:22.613026703Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:06:22.613092 containerd[1604]: time="2026-01-23T01:06:22.613080861Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:06:22.613251 containerd[1604]: time="2026-01-23T01:06:22.613236033Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:06:22.613294 containerd[1604]: time="2026-01-23T01:06:22.613286567Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:06:22.613354 containerd[1604]: time="2026-01-23T01:06:22.613345070Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:06:22.613441 containerd[1604]: time="2026-01-23T01:06:22.613426890Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:06:22.616762 containerd[1604]: time="2026-01-23T01:06:22.616724247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:06:22.617038 containerd[1604]: time="2026-01-23T01:06:22.617008139Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:06:22.617038 containerd[1604]: time="2026-01-23T01:06:22.617030466Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:06:22.617078 containerd[1604]: time="2026-01-23T01:06:22.617041026Z" 
level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:06:22.617078 containerd[1604]: time="2026-01-23T01:06:22.617052153Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:06:22.617131 containerd[1604]: time="2026-01-23T01:06:22.617119793Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:06:22.617323 containerd[1604]: time="2026-01-23T01:06:22.617300148Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:06:22.617347 containerd[1604]: time="2026-01-23T01:06:22.617330952Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:06:22.617347 containerd[1604]: time="2026-01-23T01:06:22.617341025Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:06:22.617388 containerd[1604]: time="2026-01-23T01:06:22.617363589Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:06:22.617596 containerd[1604]: time="2026-01-23T01:06:22.617573071Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:06:22.617635 containerd[1604]: time="2026-01-23T01:06:22.617621656Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:06:22.635062 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:06:22.639051 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:06:22.654732 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:06:22.654932 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:06:22.660082 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:06:22.679149 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:06:22.681020 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:06:22.684933 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:06:22.685184 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:06:22.692800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
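containerd 2.0.7 warned above that it migrated a version-2 config from /usr/share/containerd/config.toml at startup (and ignored the unknown subreaper key). Its own suggestion, `containerd config migrate`, prints the equivalent current-schema config to stdout; a rough way to act on it (output paths illustrative):
containerd --config /usr/share/containerd/config.toml config migrate > /tmp/config-migrated.toml
containerd config default > /tmp/config-default.toml     # current defaults, for comparison
diff -u /tmp/config-default.toml /tmp/config-migrated.toml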
Jan 23 01:06:22.745908 containerd[1604]: time="2026-01-23T01:06:22.745850729Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:06:22.746154 containerd[1604]: time="2026-01-23T01:06:22.746044244Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:06:22.746284 containerd[1604]: time="2026-01-23T01:06:22.746232348Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:06:22.746284 containerd[1604]: time="2026-01-23T01:06:22.746258624Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:06:22.746445 containerd[1604]: time="2026-01-23T01:06:22.746373740Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:06:22.746445 containerd[1604]: time="2026-01-23T01:06:22.746396031Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:06:22.746445 containerd[1604]: time="2026-01-23T01:06:22.746423537Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:06:22.746684 containerd[1604]: time="2026-01-23T01:06:22.746586158Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:06:22.746684 containerd[1604]: time="2026-01-23T01:06:22.746614720Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:06:22.746832 containerd[1604]: time="2026-01-23T01:06:22.746629036Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:06:22.746832 containerd[1604]: time="2026-01-23T01:06:22.746772642Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:06:22.746832 containerd[1604]: time="2026-01-23T01:06:22.746800348Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:06:22.747243 containerd[1604]: time="2026-01-23T01:06:22.747108050Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:06:22.747243 containerd[1604]: time="2026-01-23T01:06:22.747183434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:06:22.747355 containerd[1604]: time="2026-01-23T01:06:22.747341119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:06:22.747420 containerd[1604]: time="2026-01-23T01:06:22.747409413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:06:22.747477 containerd[1604]: time="2026-01-23T01:06:22.747460408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:06:22.747592 containerd[1604]: time="2026-01-23T01:06:22.747525995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:06:22.747592 containerd[1604]: time="2026-01-23T01:06:22.747542797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:06:22.747592 containerd[1604]: time="2026-01-23T01:06:22.747555276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 
23 01:06:22.747795 containerd[1604]: time="2026-01-23T01:06:22.747580666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:06:22.747795 containerd[1604]: time="2026-01-23T01:06:22.747763452Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:06:22.747795 containerd[1604]: time="2026-01-23T01:06:22.747776388Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:06:22.747950 containerd[1604]: time="2026-01-23T01:06:22.747934697Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:06:22.748072 containerd[1604]: time="2026-01-23T01:06:22.748006392Z" level=info msg="Start snapshots syncer" Jan 23 01:06:22.748072 containerd[1604]: time="2026-01-23T01:06:22.748041799Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:06:22.748633 containerd[1604]: time="2026-01-23T01:06:22.748579371Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:06:22.748940 containerd[1604]: time="2026-01-23T01:06:22.748760581Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:06:22.750767 containerd[1604]: time="2026-01-23T01:06:22.750738495Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:06:22.750978 containerd[1604]: time="2026-01-23T01:06:22.750957344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:06:22.751052 containerd[1604]: time="2026-01-23T01:06:22.751040247Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:06:22.751103 containerd[1604]: time="2026-01-23T01:06:22.751092818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:06:22.751203 containerd[1604]: time="2026-01-23T01:06:22.751187430Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:06:22.751271 containerd[1604]: time="2026-01-23T01:06:22.751257723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:06:22.751323 containerd[1604]: time="2026-01-23T01:06:22.751313013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751363740Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751395028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751409186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751422880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751467046Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751487137Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751499471Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751513180Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751522776Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751533722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751554293Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751575379Z" level=info msg="runtime interface created" Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751582361Z" level=info msg="created NRI interface" Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751592528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:06:22.752688 containerd[1604]: time="2026-01-23T01:06:22.751607911Z" level=info msg="Connect containerd service" Jan 23 01:06:22.753079 containerd[1604]: time="2026-01-23T01:06:22.751631434Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 
23 01:06:22.753079 containerd[1604]: time="2026-01-23T01:06:22.752451053Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:06:22.913125 containerd[1604]: time="2026-01-23T01:06:22.913068661Z" level=info msg="Start subscribing containerd event" Jan 23 01:06:22.913440 containerd[1604]: time="2026-01-23T01:06:22.913387290Z" level=info msg="Start recovering state" Jan 23 01:06:22.913733 containerd[1604]: time="2026-01-23T01:06:22.913660056Z" level=info msg="Start event monitor" Jan 23 01:06:22.913858 containerd[1604]: time="2026-01-23T01:06:22.913838149Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:06:22.914071 containerd[1604]: time="2026-01-23T01:06:22.913988417Z" level=info msg="Start streaming server" Jan 23 01:06:22.914175 containerd[1604]: time="2026-01-23T01:06:22.914158107Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:06:22.914570 containerd[1604]: time="2026-01-23T01:06:22.914539525Z" level=info msg="runtime interface starting up..." Jan 23 01:06:22.914773 containerd[1604]: time="2026-01-23T01:06:22.914749939Z" level=info msg="starting plugins..." Jan 23 01:06:22.914897 containerd[1604]: time="2026-01-23T01:06:22.913735581Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:06:22.915083 containerd[1604]: time="2026-01-23T01:06:22.915059447Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:06:22.915307 containerd[1604]: time="2026-01-23T01:06:22.915165726Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:06:22.915785 containerd[1604]: time="2026-01-23T01:06:22.915741692Z" level=info msg="containerd successfully booted in 0.314880s" Jan 23 01:06:22.916020 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:06:23.267746 kernel: EXT4-fs (vda9): resized filesystem to 12499963 Jan 23 01:06:23.398753 extend-filesystems[1638]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 01:06:23.398753 extend-filesystems[1638]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 23 01:06:23.398753 extend-filesystems[1638]: The filesystem on /dev/vda9 is now 12499963 (4k) blocks long. Jan 23 01:06:23.399479 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Jan 23 01:06:23.400236 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:06:23.400629 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:06:23.571192 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:23.571383 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:23.746150 systemd-networkd[1489]: eth0: Gained IPv6LL Jan 23 01:06:23.753132 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:06:23.756281 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:06:23.759752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:23.762990 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:06:23.807916 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:06:25.120086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
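containerd's only error so far is the expected one: no CNI config exists yet in /etc/cni/net.d, and its later message says to wait for other system components to drop the config (here that will be the cilium pod created further down). Two quick checks while waiting, assuming crictl is available on the node (socket path as logged above):
ls /etc/cni/net.d                                   # empty until the CNI plugin installs its config
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | grep -A3 NetworkReady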
Jan 23 01:06:25.129234 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:06:25.582703 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:25.590703 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:26.013213 kubelet[1722]: E0123 01:06:26.013077 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:06:26.015919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:06:26.016187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:06:26.016804 systemd[1]: kubelet.service: Consumed 1.216s CPU time, 264.3M memory peak. Jan 23 01:06:27.597351 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:06:27.603886 systemd[1]: Started sshd@0-10.0.2.223:22-20.161.92.111:33396.service - OpenSSH per-connection server daemon (20.161.92.111:33396). Jan 23 01:06:28.256548 sshd[1734]: Accepted publickey for core from 20.161.92.111 port 33396 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:06:28.258271 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:28.265730 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:06:28.267209 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:06:28.275927 systemd-logind[1587]: New session 1 of user core. Jan 23 01:06:28.288678 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:06:28.291546 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:06:28.305132 (systemd)[1743]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:06:28.307365 systemd-logind[1587]: New session c1 of user core. Jan 23 01:06:28.435793 systemd[1743]: Queued start job for default target default.target. Jan 23 01:06:28.447010 systemd[1743]: Created slice app.slice - User Application Slice. Jan 23 01:06:28.447042 systemd[1743]: Reached target paths.target - Paths. Jan 23 01:06:28.447078 systemd[1743]: Reached target timers.target - Timers. Jan 23 01:06:28.448318 systemd[1743]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:06:28.465622 systemd[1743]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:06:28.465694 systemd[1743]: Reached target sockets.target - Sockets. Jan 23 01:06:28.465732 systemd[1743]: Reached target basic.target - Basic System. Jan 23 01:06:28.465762 systemd[1743]: Reached target default.target - Main User Target. Jan 23 01:06:28.465787 systemd[1743]: Startup finished in 151ms. Jan 23 01:06:28.466288 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:06:28.474055 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:06:28.933410 systemd[1]: Started sshd@1-10.0.2.223:22-20.161.92.111:33400.service - OpenSSH per-connection server daemon (20.161.92.111:33400). 
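The first kubelet start fails simply because /var/lib/kubelet/config.yaml does not exist yet; on a node like this the file is normally dropped by the join/provisioning step (the install.sh run later in the log) rather than written by hand. For orientation only, a minimal KubeletConfiguration consistent with the settings the kubelet reports once it does start (systemd cgroup driver, containerd socket) would look roughly like this sketch, not this cluster's actual config:
sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF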
Jan 23 01:06:29.608704 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:29.614697 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:06:29.621702 sshd[1754]: Accepted publickey for core from 20.161.92.111 port 33400 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:06:29.624179 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:29.625307 coreos-metadata[1574]: Jan 23 01:06:29.625 WARN failed to locate config-drive, using the metadata service API instead Jan 23 01:06:29.634759 coreos-metadata[1642]: Jan 23 01:06:29.634 WARN failed to locate config-drive, using the metadata service API instead Jan 23 01:06:29.648764 systemd-logind[1587]: New session 2 of user core. Jan 23 01:06:29.652016 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:06:29.670278 coreos-metadata[1642]: Jan 23 01:06:29.670 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 23 01:06:29.677695 coreos-metadata[1574]: Jan 23 01:06:29.677 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 23 01:06:30.067959 sshd[1761]: Connection closed by 20.161.92.111 port 33400 Jan 23 01:06:30.069053 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:30.078286 systemd-logind[1587]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:06:30.080261 systemd[1]: sshd@1-10.0.2.223:22-20.161.92.111:33400.service: Deactivated successfully. Jan 23 01:06:30.084622 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:06:30.088821 systemd-logind[1587]: Removed session 2. Jan 23 01:06:30.186950 systemd[1]: Started sshd@2-10.0.2.223:22-20.161.92.111:33412.service - OpenSSH per-connection server daemon (20.161.92.111:33412). Jan 23 01:06:30.810823 sshd[1767]: Accepted publickey for core from 20.161.92.111 port 33412 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:06:30.811966 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:30.817211 systemd-logind[1587]: New session 3 of user core. Jan 23 01:06:30.826048 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:06:30.988026 coreos-metadata[1642]: Jan 23 01:06:30.987 INFO Fetch successful Jan 23 01:06:30.988026 coreos-metadata[1642]: Jan 23 01:06:30.987 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 01:06:31.249206 sshd[1770]: Connection closed by 20.161.92.111 port 33412 Jan 23 01:06:31.248903 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:31.258054 systemd[1]: sshd@2-10.0.2.223:22-20.161.92.111:33412.service: Deactivated successfully. Jan 23 01:06:31.262264 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:06:31.265941 systemd-logind[1587]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:06:31.269187 systemd-logind[1587]: Removed session 3. 
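Both metadata agents report that no config-drive is present and fall back to the HTTP metadata service; the endpoints they poll are plain unauthenticated URLs on the link-local address, so the same fetches can be replayed by hand from the node:
curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json | jq .
curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key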
Jan 23 01:06:31.550031 coreos-metadata[1574]: Jan 23 01:06:31.549 INFO Fetch successful Jan 23 01:06:31.551169 coreos-metadata[1574]: Jan 23 01:06:31.551 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 01:06:32.148637 coreos-metadata[1642]: Jan 23 01:06:32.148 INFO Fetch successful Jan 23 01:06:32.152910 unknown[1642]: wrote ssh authorized keys file for user: core Jan 23 01:06:32.156999 coreos-metadata[1574]: Jan 23 01:06:32.156 INFO Fetch successful Jan 23 01:06:32.156999 coreos-metadata[1574]: Jan 23 01:06:32.156 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 23 01:06:32.201008 update-ssh-keys[1776]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:06:32.202407 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 01:06:32.206530 systemd[1]: Finished sshkeys.service. Jan 23 01:06:32.743253 coreos-metadata[1574]: Jan 23 01:06:32.743 INFO Fetch successful Jan 23 01:06:32.743253 coreos-metadata[1574]: Jan 23 01:06:32.743 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 23 01:06:33.345984 coreos-metadata[1574]: Jan 23 01:06:33.345 INFO Fetch successful Jan 23 01:06:33.345984 coreos-metadata[1574]: Jan 23 01:06:33.345 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 23 01:06:33.921462 coreos-metadata[1574]: Jan 23 01:06:33.921 INFO Fetch successful Jan 23 01:06:33.921462 coreos-metadata[1574]: Jan 23 01:06:33.921 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 23 01:06:35.572835 coreos-metadata[1574]: Jan 23 01:06:35.572 INFO Fetch successful Jan 23 01:06:35.623287 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:06:35.624071 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:06:35.624449 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:06:35.624692 systemd[1]: Startup finished in 3.937s (kernel) + 15.636s (initrd) + 16.395s (userspace) = 35.970s. Jan 23 01:06:36.268135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:06:36.273551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:36.434840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:36.441947 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:06:36.970727 kubelet[1792]: E0123 01:06:36.970640 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:06:36.975157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:06:36.975363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:06:36.976047 systemd[1]: kubelet.service: Consumed 194ms CPU time, 108.5M memory peak. Jan 23 01:06:41.362515 systemd[1]: Started sshd@3-10.0.2.223:22-20.161.92.111:34198.service - OpenSSH per-connection server daemon (20.161.92.111:34198). 
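The 35.970s total in the "Startup finished" line above can be broken down further with systemd's own tooling once the machine is up, for example:
systemd-analyze                                      # same kernel/initrd/userspace split as the log line
systemd-analyze blame | head -n 15                   # slowest units first
systemd-analyze critical-chain multi-user.target     # the path that gated multi-user.target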
Jan 23 01:06:42.002706 sshd[1800]: Accepted publickey for core from 20.161.92.111 port 34198 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:06:42.004412 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:42.011537 systemd-logind[1587]: New session 4 of user core. Jan 23 01:06:42.019005 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:06:42.440060 sshd[1803]: Connection closed by 20.161.92.111 port 34198 Jan 23 01:06:42.440987 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:42.447125 systemd-logind[1587]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:06:42.447632 systemd[1]: sshd@3-10.0.2.223:22-20.161.92.111:34198.service: Deactivated successfully. Jan 23 01:06:42.451483 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:06:42.456640 systemd-logind[1587]: Removed session 4. Jan 23 01:06:42.558814 systemd[1]: Started sshd@4-10.0.2.223:22-20.161.92.111:34204.service - OpenSSH per-connection server daemon (20.161.92.111:34204). Jan 23 01:06:43.218659 sshd[1809]: Accepted publickey for core from 20.161.92.111 port 34204 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:06:43.221020 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:43.231466 systemd-logind[1587]: New session 5 of user core. Jan 23 01:06:43.241051 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:06:43.642100 sshd[1812]: Connection closed by 20.161.92.111 port 34204 Jan 23 01:06:43.643356 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:43.654879 systemd-logind[1587]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:06:43.656854 systemd[1]: sshd@4-10.0.2.223:22-20.161.92.111:34204.service: Deactivated successfully. Jan 23 01:06:43.661361 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:06:43.664750 systemd-logind[1587]: Removed session 5. Jan 23 01:06:43.760518 systemd[1]: Started sshd@5-10.0.2.223:22-20.161.92.111:34218.service - OpenSSH per-connection server daemon (20.161.92.111:34218). Jan 23 01:06:44.420739 sshd[1818]: Accepted publickey for core from 20.161.92.111 port 34218 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:06:44.423208 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:44.433036 systemd-logind[1587]: New session 6 of user core. Jan 23 01:06:44.440982 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 01:06:44.851796 sshd[1821]: Connection closed by 20.161.92.111 port 34218 Jan 23 01:06:44.853105 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:44.862327 systemd[1]: sshd@5-10.0.2.223:22-20.161.92.111:34218.service: Deactivated successfully. Jan 23 01:06:44.862352 systemd-logind[1587]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:06:44.865422 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:06:44.869661 systemd-logind[1587]: Removed session 6. Jan 23 01:06:44.971059 systemd[1]: Started sshd@6-10.0.2.223:22-20.161.92.111:34228.service - OpenSSH per-connection server daemon (20.161.92.111:34228). 
Jan 23 01:06:45.640509 sshd[1827]: Accepted publickey for core from 20.161.92.111 port 34228 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:06:45.642530 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:45.648736 systemd-logind[1587]: New session 7 of user core. Jan 23 01:06:45.659897 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:06:46.023937 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:06:46.024556 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:06:46.045804 sudo[1831]: pam_unix(sudo:session): session closed for user root Jan 23 01:06:46.146465 sshd[1830]: Connection closed by 20.161.92.111 port 34228 Jan 23 01:06:46.148805 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:46.158161 systemd[1]: sshd@6-10.0.2.223:22-20.161.92.111:34228.service: Deactivated successfully. Jan 23 01:06:46.162397 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:06:46.165471 systemd-logind[1587]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:06:46.169037 systemd-logind[1587]: Removed session 7. Jan 23 01:06:46.272367 systemd[1]: Started sshd@7-10.0.2.223:22-20.161.92.111:34240.service - OpenSSH per-connection server daemon (20.161.92.111:34240). Jan 23 01:06:46.303786 chronyd[1571]: Selected source PHC0 Jan 23 01:06:46.303824 chronyd[1571]: System clock wrong by 1.585227 seconds Jan 23 01:06:47.889866 systemd-resolved[1469]: Clock change detected. Flushing caches. Jan 23 01:06:47.889091 chronyd[1571]: System clock was stepped by 1.585227 seconds Jan 23 01:06:48.501621 sshd[1837]: Accepted publickey for core from 20.161.92.111 port 34240 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:06:48.504740 sshd-session[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:48.516475 systemd-logind[1587]: New session 8 of user core. Jan 23 01:06:48.537382 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:06:48.678503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 01:06:48.682288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:48.836579 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:06:48.836797 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:06:48.863164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:48.879641 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:06:48.946159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:06:49.206518 kubelet[1852]: E0123 01:06:48.943910 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:06:48.946340 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
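chronyd picked the PHC0 reference clock, found the system clock 1.585227 seconds off and stepped it, which is why the journal timestamps jump here and systemd-resolved flushes its caches. The current synchronisation state can be inspected with chrony's client:
chronyc tracking        # offset, stratum and the currently selected reference (PHC0 above)
chronyc sources -v      # all configured sources with reachability statistics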
Jan 23 01:06:48.947058 systemd[1]: kubelet.service: Consumed 212ms CPU time, 109.4M memory peak. Jan 23 01:06:49.271065 sudo[1847]: pam_unix(sudo:session): session closed for user root Jan 23 01:06:49.286451 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:06:49.287846 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:06:49.315260 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:06:49.404054 augenrules[1878]: No rules Jan 23 01:06:49.406347 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:06:49.406949 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:06:49.410244 sudo[1846]: pam_unix(sudo:session): session closed for user root Jan 23 01:06:49.509549 sshd[1840]: Connection closed by 20.161.92.111 port 34240 Jan 23 01:06:49.510683 sshd-session[1837]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:49.521123 systemd[1]: sshd@7-10.0.2.223:22-20.161.92.111:34240.service: Deactivated successfully. Jan 23 01:06:49.525713 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:06:49.528153 systemd-logind[1587]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:06:49.531934 systemd-logind[1587]: Removed session 8. Jan 23 01:06:49.630369 systemd[1]: Started sshd@8-10.0.2.223:22-20.161.92.111:34244.service - OpenSSH per-connection server daemon (20.161.92.111:34244). Jan 23 01:06:50.301970 sshd[1887]: Accepted publickey for core from 20.161.92.111 port 34244 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:06:50.304063 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:06:50.317006 systemd-logind[1587]: New session 9 of user core. Jan 23 01:06:50.326267 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:06:50.644725 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:06:50.646324 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:06:51.699283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:51.700283 systemd[1]: kubelet.service: Consumed 212ms CPU time, 109.4M memory peak. Jan 23 01:06:51.703205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:51.742924 systemd[1]: Reload requested from client PID 1924 ('systemctl') (unit session-9.scope)... Jan 23 01:06:51.743081 systemd[1]: Reloading... Jan 23 01:06:51.850921 zram_generator::config[1966]: No configuration found. Jan 23 01:06:52.031410 systemd[1]: Reloading finished in 287 ms. Jan 23 01:06:52.087299 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:06:52.087369 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:06:52.087851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:06:52.087926 systemd[1]: kubelet.service: Consumed 108ms CPU time, 98.3M memory peak. Jan 23 01:06:52.089621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:06:52.377356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
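The recurring "Referenced but unset environment variable" notes around each kubelet start (including the one just below) come from the unit expanding $KUBELET_EXTRA_ARGS and $KUBELET_KUBEADM_ARGS, which nothing has defined; they are harmless, but a systemd drop-in is the usual place to set them if extra flags are wanted. A sketch with an illustrative drop-in name and flag value:
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/20-extra-args.conf >/dev/null <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.2.223"
EOF
sudo systemctl daemon-reload && sudo systemctl restart kubelet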
Jan 23 01:06:52.385186 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:06:53.267933 kubelet[2020]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:06:53.267933 kubelet[2020]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:06:53.267933 kubelet[2020]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:06:53.267933 kubelet[2020]: I0123 01:06:53.267093 2020 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:06:53.610992 kubelet[2020]: I0123 01:06:53.610878 2020 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:06:53.611390 kubelet[2020]: I0123 01:06:53.611378 2020 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:06:53.611702 kubelet[2020]: I0123 01:06:53.611693 2020 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:06:53.655259 kubelet[2020]: I0123 01:06:53.654975 2020 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:06:53.675914 kubelet[2020]: I0123 01:06:53.674192 2020 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:06:53.676835 kubelet[2020]: I0123 01:06:53.676820 2020 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:06:53.677122 kubelet[2020]: I0123 01:06:53.677098 2020 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:06:53.677328 kubelet[2020]: I0123 01:06:53.677170 2020 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.2.223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:06:53.678815 kubelet[2020]: I0123 01:06:53.678802 2020 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:06:53.678873 kubelet[2020]: I0123 01:06:53.678868 2020 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:06:53.679019 kubelet[2020]: I0123 01:06:53.679011 2020 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:06:53.684301 kubelet[2020]: I0123 01:06:53.684287 2020 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:06:53.684382 kubelet[2020]: I0123 01:06:53.684375 2020 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:06:53.684437 kubelet[2020]: I0123 01:06:53.684432 2020 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:06:53.684482 kubelet[2020]: I0123 01:06:53.684476 2020 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:06:53.685064 kubelet[2020]: E0123 01:06:53.685017 2020 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:06:53.685136 kubelet[2020]: E0123 01:06:53.685115 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:06:53.690369 kubelet[2020]: I0123 01:06:53.689433 2020 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:06:53.690369 kubelet[2020]: I0123 01:06:53.689808 2020 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 01:06:53.690369 kubelet[2020]: W0123 01:06:53.689854 2020 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:06:53.692163 kubelet[2020]: I0123 01:06:53.692149 2020 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:06:53.692244 kubelet[2020]: I0123 01:06:53.692237 2020 server.go:1287] "Started kubelet" Jan 23 01:06:53.701807 kubelet[2020]: I0123 01:06:53.701697 2020 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:06:53.703911 kubelet[2020]: I0123 01:06:53.703069 2020 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:06:53.703911 kubelet[2020]: I0123 01:06:53.703404 2020 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:06:53.703911 kubelet[2020]: I0123 01:06:53.703428 2020 server.go:479] "Adding debug handlers to kubelet server" Jan 23 01:06:53.705778 kubelet[2020]: I0123 01:06:53.705759 2020 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:06:53.720529 kubelet[2020]: I0123 01:06:53.720509 2020 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:06:53.720821 kubelet[2020]: E0123 01:06:53.720807 2020 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.2.223\" not found" Jan 23 01:06:53.721197 kubelet[2020]: I0123 01:06:53.721187 2020 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:06:53.721301 kubelet[2020]: I0123 01:06:53.721295 2020 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:06:53.721396 kubelet[2020]: I0123 01:06:53.721370 2020 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:06:53.722978 kubelet[2020]: I0123 01:06:53.722959 2020 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:06:53.724055 kubelet[2020]: E0123 01:06:53.724030 2020 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:06:53.724511 kubelet[2020]: I0123 01:06:53.724501 2020 factory.go:221] Registration of the containerd container factory successfully Jan 23 01:06:53.724579 kubelet[2020]: I0123 01:06:53.724574 2020 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:06:53.726451 kubelet[2020]: E0123 01:06:53.726429 2020 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.2.223\" not found" node="10.0.2.223" Jan 23 01:06:53.746886 kubelet[2020]: I0123 01:06:53.746868 2020 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:06:53.747074 kubelet[2020]: I0123 01:06:53.747066 2020 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:06:53.747289 kubelet[2020]: I0123 01:06:53.747122 2020 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:06:53.751288 kubelet[2020]: I0123 01:06:53.751273 2020 policy_none.go:49] "None policy: Start" Jan 23 01:06:53.751375 kubelet[2020]: I0123 01:06:53.751369 2020 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:06:53.751432 kubelet[2020]: I0123 01:06:53.751415 2020 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:06:53.764128 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:06:53.776264 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:06:53.785328 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:06:53.793931 kubelet[2020]: I0123 01:06:53.793886 2020 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:06:53.794460 kubelet[2020]: I0123 01:06:53.794121 2020 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:06:53.794460 kubelet[2020]: I0123 01:06:53.794148 2020 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:06:53.795116 kubelet[2020]: I0123 01:06:53.794587 2020 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:06:53.796639 kubelet[2020]: E0123 01:06:53.796574 2020 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:06:53.796639 kubelet[2020]: E0123 01:06:53.796624 2020 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.2.223\" not found" Jan 23 01:06:53.818979 kubelet[2020]: I0123 01:06:53.818949 2020 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:06:53.820495 kubelet[2020]: I0123 01:06:53.820169 2020 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:06:53.820495 kubelet[2020]: I0123 01:06:53.820188 2020 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:06:53.820495 kubelet[2020]: I0123 01:06:53.820205 2020 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
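The container-manager dump above (systemd cgroup driver, cgroup v2, the default nodefs/imagefs/memory eviction thresholds) is the kubelet's merged configuration; once the node is registered it can also be read back through the API server, assuming a kubeconfig with permission to proxy to the node:
kubectl get --raw "/api/v1/nodes/10.0.2.223/proxy/configz" | jq .kubeletconfig.evictionHard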
Jan 23 01:06:53.820495 kubelet[2020]: I0123 01:06:53.820211 2020 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:06:53.820495 kubelet[2020]: E0123 01:06:53.820328 2020 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 23 01:06:53.896919 kubelet[2020]: I0123 01:06:53.896160 2020 kubelet_node_status.go:75] "Attempting to register node" node="10.0.2.223" Jan 23 01:06:53.901150 kubelet[2020]: I0123 01:06:53.901128 2020 kubelet_node_status.go:78] "Successfully registered node" node="10.0.2.223" Jan 23 01:06:53.912497 kubelet[2020]: I0123 01:06:53.912462 2020 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 23 01:06:53.912835 containerd[1604]: time="2026-01-23T01:06:53.912750860Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:06:53.913190 kubelet[2020]: I0123 01:06:53.913036 2020 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 23 01:06:53.923233 sudo[1891]: pam_unix(sudo:session): session closed for user root Jan 23 01:06:54.019928 sshd[1890]: Connection closed by 20.161.92.111 port 34244 Jan 23 01:06:54.020957 sshd-session[1887]: pam_unix(sshd:session): session closed for user core Jan 23 01:06:54.030424 systemd-logind[1587]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:06:54.031751 systemd[1]: sshd@8-10.0.2.223:22-20.161.92.111:34244.service: Deactivated successfully. Jan 23 01:06:54.037372 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:06:54.037839 systemd[1]: session-9.scope: Consumed 691ms CPU time, 73.1M memory peak. Jan 23 01:06:54.041388 systemd-logind[1587]: Removed session 9. Jan 23 01:06:54.616708 kubelet[2020]: I0123 01:06:54.616176 2020 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 01:06:54.616708 kubelet[2020]: W0123 01:06:54.616543 2020 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 01:06:54.616708 kubelet[2020]: W0123 01:06:54.616608 2020 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 01:06:54.616708 kubelet[2020]: W0123 01:06:54.616662 2020 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 01:06:54.685532 kubelet[2020]: I0123 01:06:54.685446 2020 apiserver.go:52] "Watching apiserver" Jan 23 01:06:54.685756 kubelet[2020]: E0123 01:06:54.685728 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:06:54.697733 systemd[1]: Created slice kubepods-burstable-podb386b6df_9ccb_4071_81bf_293ed9f93b64.slice - libcontainer container kubepods-burstable-podb386b6df_9ccb_4071_81bf_293ed9f93b64.slice. 
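The kubelet has just been handed PodCIDR 192.168.1.0/24 for this node and pushes it to the container runtime. A minimal standard-library sketch of what that range means in practice, checking whether an address falls inside it; the two addresses tested are the node IP from the log and a made-up example pod IP.

    // podcidr_sketch.go: checks example addresses against the PodCIDR from the log above.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	podCIDR := netip.MustParsePrefix("192.168.1.0/24") // newPodCIDR from the kubelet log
    	for _, s := range []string{"192.168.1.17", "10.0.2.223"} { // example pod IP, node IP
    		addr := netip.MustParseAddr(s)
    		fmt.Printf("%s in %s: %v\n", addr, podCIDR, podCIDR.Contains(addr))
    	}
    }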
Jan 23 01:06:54.714710 systemd[1]: Created slice kubepods-besteffort-podd117539c_9ccd_4dc3_ab1d_36a831ea928d.slice - libcontainer container kubepods-besteffort-podd117539c_9ccd_4dc3_ab1d_36a831ea928d.slice. Jan 23 01:06:54.722041 kubelet[2020]: I0123 01:06:54.722010 2020 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:06:54.728071 kubelet[2020]: I0123 01:06:54.727829 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-etc-cni-netd\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728071 kubelet[2020]: I0123 01:06:54.727883 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-xtables-lock\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728071 kubelet[2020]: I0123 01:06:54.727930 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qjkj\" (UniqueName: \"kubernetes.io/projected/d117539c-9ccd-4dc3-ab1d-36a831ea928d-kube-api-access-8qjkj\") pod \"kube-proxy-67cqm\" (UID: \"d117539c-9ccd-4dc3-ab1d-36a831ea928d\") " pod="kube-system/kube-proxy-67cqm" Jan 23 01:06:54.728071 kubelet[2020]: I0123 01:06:54.727953 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-run\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728071 kubelet[2020]: I0123 01:06:54.727972 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cni-path\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728071 kubelet[2020]: I0123 01:06:54.727990 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d117539c-9ccd-4dc3-ab1d-36a831ea928d-kube-proxy\") pod \"kube-proxy-67cqm\" (UID: \"d117539c-9ccd-4dc3-ab1d-36a831ea928d\") " pod="kube-system/kube-proxy-67cqm" Jan 23 01:06:54.728334 kubelet[2020]: I0123 01:06:54.728025 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-cgroup\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728334 kubelet[2020]: I0123 01:06:54.728076 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b386b6df-9ccb-4071-81bf-293ed9f93b64-clustermesh-secrets\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728334 kubelet[2020]: I0123 01:06:54.728094 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-host-proc-sys-kernel\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728334 kubelet[2020]: I0123 01:06:54.728107 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b386b6df-9ccb-4071-81bf-293ed9f93b64-hubble-tls\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728334 kubelet[2020]: I0123 01:06:54.728121 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dlz\" (UniqueName: \"kubernetes.io/projected/b386b6df-9ccb-4071-81bf-293ed9f93b64-kube-api-access-b6dlz\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728496 kubelet[2020]: I0123 01:06:54.728149 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d117539c-9ccd-4dc3-ab1d-36a831ea928d-lib-modules\") pod \"kube-proxy-67cqm\" (UID: \"d117539c-9ccd-4dc3-ab1d-36a831ea928d\") " pod="kube-system/kube-proxy-67cqm" Jan 23 01:06:54.728496 kubelet[2020]: I0123 01:06:54.728166 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-bpf-maps\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728496 kubelet[2020]: I0123 01:06:54.728181 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-lib-modules\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728496 kubelet[2020]: I0123 01:06:54.728195 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-config-path\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728496 kubelet[2020]: I0123 01:06:54.728217 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-host-proc-sys-net\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " pod="kube-system/cilium-6wq9t" Jan 23 01:06:54.728496 kubelet[2020]: I0123 01:06:54.728247 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d117539c-9ccd-4dc3-ab1d-36a831ea928d-xtables-lock\") pod \"kube-proxy-67cqm\" (UID: \"d117539c-9ccd-4dc3-ab1d-36a831ea928d\") " pod="kube-system/kube-proxy-67cqm" Jan 23 01:06:54.728680 kubelet[2020]: I0123 01:06:54.728281 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-hostproc\") pod \"cilium-6wq9t\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " 
pod="kube-system/cilium-6wq9t" Jan 23 01:06:55.014635 containerd[1604]: time="2026-01-23T01:06:55.013784198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6wq9t,Uid:b386b6df-9ccb-4071-81bf-293ed9f93b64,Namespace:kube-system,Attempt:0,}" Jan 23 01:06:55.023638 containerd[1604]: time="2026-01-23T01:06:55.023125654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67cqm,Uid:d117539c-9ccd-4dc3-ab1d-36a831ea928d,Namespace:kube-system,Attempt:0,}" Jan 23 01:06:55.657246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656108333.mount: Deactivated successfully. Jan 23 01:06:55.674344 containerd[1604]: time="2026-01-23T01:06:55.674291720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:06:55.677417 containerd[1604]: time="2026-01-23T01:06:55.677330890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Jan 23 01:06:55.679980 containerd[1604]: time="2026-01-23T01:06:55.679054145Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:06:55.680363 containerd[1604]: time="2026-01-23T01:06:55.680317262Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:06:55.681694 containerd[1604]: time="2026-01-23T01:06:55.681666070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 01:06:55.685260 containerd[1604]: time="2026-01-23T01:06:55.685223751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:06:55.686102 kubelet[2020]: E0123 01:06:55.686060 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:06:55.686555 containerd[1604]: time="2026-01-23T01:06:55.686516782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 667.064006ms" Jan 23 01:06:55.687786 containerd[1604]: time="2026-01-23T01:06:55.687622478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 660.046763ms" Jan 23 01:06:55.730417 containerd[1604]: time="2026-01-23T01:06:55.730369741Z" level=info msg="connecting to shim eddc4a9080365cc26897bac3c70ef0b304dd07ec871d954970463fda5aa45c5c" address="unix:///run/containerd/s/f09d43e2a628161bd24b63b04371b7b9b266c5c14436950c3dc639754574343a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:06:55.730976 containerd[1604]: 
time="2026-01-23T01:06:55.730953669Z" level=info msg="connecting to shim 35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4" address="unix:///run/containerd/s/03cb00807e49f8ac6e795007ea64e5ebf015e3166f04e165b41ba6b9dbcebe14" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:06:55.753061 systemd[1]: Started cri-containerd-35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4.scope - libcontainer container 35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4. Jan 23 01:06:55.757493 systemd[1]: Started cri-containerd-eddc4a9080365cc26897bac3c70ef0b304dd07ec871d954970463fda5aa45c5c.scope - libcontainer container eddc4a9080365cc26897bac3c70ef0b304dd07ec871d954970463fda5aa45c5c. Jan 23 01:06:55.789311 containerd[1604]: time="2026-01-23T01:06:55.789281390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6wq9t,Uid:b386b6df-9ccb-4071-81bf-293ed9f93b64,Namespace:kube-system,Attempt:0,} returns sandbox id \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\"" Jan 23 01:06:55.791448 containerd[1604]: time="2026-01-23T01:06:55.791356735Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 01:06:55.792144 containerd[1604]: time="2026-01-23T01:06:55.792093879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67cqm,Uid:d117539c-9ccd-4dc3-ab1d-36a831ea928d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eddc4a9080365cc26897bac3c70ef0b304dd07ec871d954970463fda5aa45c5c\"" Jan 23 01:06:56.686426 kubelet[2020]: E0123 01:06:56.686310 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:06:57.687033 kubelet[2020]: E0123 01:06:57.686968 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:06:58.688195 kubelet[2020]: E0123 01:06:58.688148 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:06:59.607598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997959736.mount: Deactivated successfully. 
Jan 23 01:06:59.688774 kubelet[2020]: E0123 01:06:59.688737 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:00.689018 kubelet[2020]: E0123 01:07:00.688985 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:01.488463 containerd[1604]: time="2026-01-23T01:07:01.488261513Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:01.489783 containerd[1604]: time="2026-01-23T01:07:01.489625649Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 01:07:01.491038 containerd[1604]: time="2026-01-23T01:07:01.490970733Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:01.492161 containerd[1604]: time="2026-01-23T01:07:01.492135212Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.70042096s" Jan 23 01:07:01.492218 containerd[1604]: time="2026-01-23T01:07:01.492161940Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 01:07:01.493494 containerd[1604]: time="2026-01-23T01:07:01.493280195Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 01:07:01.495491 containerd[1604]: time="2026-01-23T01:07:01.495454223Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:07:01.525020 containerd[1604]: time="2026-01-23T01:07:01.524019031Z" level=info msg="Container aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:01.584075 containerd[1604]: time="2026-01-23T01:07:01.583882272Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\"" Jan 23 01:07:01.584762 containerd[1604]: time="2026-01-23T01:07:01.584730821Z" level=info msg="StartContainer for \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\"" Jan 23 01:07:01.585814 containerd[1604]: time="2026-01-23T01:07:01.585781306Z" level=info msg="connecting to shim aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c" address="unix:///run/containerd/s/03cb00807e49f8ac6e795007ea64e5ebf015e3166f04e165b41ba6b9dbcebe14" protocol=ttrpc version=3 Jan 23 01:07:01.608050 systemd[1]: Started cri-containerd-aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c.scope - libcontainer container 
aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c. Jan 23 01:07:01.643477 containerd[1604]: time="2026-01-23T01:07:01.643449674Z" level=info msg="StartContainer for \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\" returns successfully" Jan 23 01:07:01.650428 systemd[1]: cri-containerd-aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c.scope: Deactivated successfully. Jan 23 01:07:01.653387 containerd[1604]: time="2026-01-23T01:07:01.653290723Z" level=info msg="received container exit event container_id:\"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\" id:\"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\" pid:2201 exited_at:{seconds:1769130421 nanos:652838400}" Jan 23 01:07:01.672206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c-rootfs.mount: Deactivated successfully. Jan 23 01:07:01.689510 kubelet[2020]: E0123 01:07:01.689474 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:02.690070 kubelet[2020]: E0123 01:07:02.690004 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:03.691012 kubelet[2020]: E0123 01:07:03.690923 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:04.691585 kubelet[2020]: E0123 01:07:04.691532 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:04.862738 containerd[1604]: time="2026-01-23T01:07:04.861106415Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:07:04.883406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3897000474.mount: Deactivated successfully. Jan 23 01:07:04.890144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366051504.mount: Deactivated successfully. Jan 23 01:07:04.890525 containerd[1604]: time="2026-01-23T01:07:04.890116161Z" level=info msg="Container ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:04.908533 containerd[1604]: time="2026-01-23T01:07:04.908479633Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\"" Jan 23 01:07:04.909371 containerd[1604]: time="2026-01-23T01:07:04.909332480Z" level=info msg="StartContainer for \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\"" Jan 23 01:07:04.912591 containerd[1604]: time="2026-01-23T01:07:04.912549128Z" level=info msg="connecting to shim ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2" address="unix:///run/containerd/s/03cb00807e49f8ac6e795007ea64e5ebf015e3166f04e165b41ba6b9dbcebe14" protocol=ttrpc version=3 Jan 23 01:07:04.946142 systemd[1]: Started cri-containerd-ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2.scope - libcontainer container ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2. 
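containerd reports container exits as an exited_at {seconds, nanos} pair rather than a formatted time. Converting the pair from the mount-cgroup exit event above shows it lines up with the surrounding journal timestamps:

    // exitedat_sketch.go: converts the exited_at {seconds, nanos} pair from the
    // mount-cgroup exit event above into an RFC 3339 timestamp.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the container exit event in the log.
    	exitedAt := time.Unix(1769130421, 652838400).UTC()
    	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2026-01-23T01:07:01.6528384Z
    }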
Jan 23 01:07:04.987950 containerd[1604]: time="2026-01-23T01:07:04.987762488Z" level=info msg="StartContainer for \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\" returns successfully" Jan 23 01:07:04.998109 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:07:04.998316 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:07:04.998957 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:07:05.001578 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:07:05.003390 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:07:05.008039 systemd[1]: cri-containerd-ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2.scope: Deactivated successfully. Jan 23 01:07:05.008381 containerd[1604]: time="2026-01-23T01:07:05.008354648Z" level=info msg="received container exit event container_id:\"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\" id:\"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\" pid:2248 exited_at:{seconds:1769130425 nanos:7613390}" Jan 23 01:07:05.022420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:07:05.691909 kubelet[2020]: E0123 01:07:05.691793 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:05.807583 containerd[1604]: time="2026-01-23T01:07:05.807112250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:05.808378 containerd[1604]: time="2026-01-23T01:07:05.808363392Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161925" Jan 23 01:07:05.809815 containerd[1604]: time="2026-01-23T01:07:05.809800832Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:05.811966 containerd[1604]: time="2026-01-23T01:07:05.811937153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:05.812350 containerd[1604]: time="2026-01-23T01:07:05.812325578Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 4.31902352s" Jan 23 01:07:05.812398 containerd[1604]: time="2026-01-23T01:07:05.812355647Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 01:07:05.814671 containerd[1604]: time="2026-01-23T01:07:05.814652556Z" level=info msg="CreateContainer within sandbox \"eddc4a9080365cc26897bac3c70ef0b304dd07ec871d954970463fda5aa45c5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:07:05.825217 containerd[1604]: time="2026-01-23T01:07:05.825193001Z" level=info msg="Container 3ee0acc6320329d885a77c6850b8afd968308ec906bba5f5b8fa282256560dab: CDI devices from CRI Config.CDIDevices: []" Jan 23 
01:07:05.841072 containerd[1604]: time="2026-01-23T01:07:05.841047173Z" level=info msg="CreateContainer within sandbox \"eddc4a9080365cc26897bac3c70ef0b304dd07ec871d954970463fda5aa45c5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3ee0acc6320329d885a77c6850b8afd968308ec906bba5f5b8fa282256560dab\"" Jan 23 01:07:05.841651 containerd[1604]: time="2026-01-23T01:07:05.841630420Z" level=info msg="StartContainer for \"3ee0acc6320329d885a77c6850b8afd968308ec906bba5f5b8fa282256560dab\"" Jan 23 01:07:05.842838 containerd[1604]: time="2026-01-23T01:07:05.842818358Z" level=info msg="connecting to shim 3ee0acc6320329d885a77c6850b8afd968308ec906bba5f5b8fa282256560dab" address="unix:///run/containerd/s/f09d43e2a628161bd24b63b04371b7b9b266c5c14436950c3dc639754574343a" protocol=ttrpc version=3 Jan 23 01:07:05.866010 systemd[1]: Started cri-containerd-3ee0acc6320329d885a77c6850b8afd968308ec906bba5f5b8fa282256560dab.scope - libcontainer container 3ee0acc6320329d885a77c6850b8afd968308ec906bba5f5b8fa282256560dab. Jan 23 01:07:05.873505 containerd[1604]: time="2026-01-23T01:07:05.873467542Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:07:05.880632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2-rootfs.mount: Deactivated successfully. Jan 23 01:07:05.888972 containerd[1604]: time="2026-01-23T01:07:05.888946413Z" level=info msg="Container c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:05.907288 containerd[1604]: time="2026-01-23T01:07:05.907102728Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\"" Jan 23 01:07:05.909154 containerd[1604]: time="2026-01-23T01:07:05.908192947Z" level=info msg="StartContainer for \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\"" Jan 23 01:07:05.909932 containerd[1604]: time="2026-01-23T01:07:05.909912333Z" level=info msg="connecting to shim c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9" address="unix:///run/containerd/s/03cb00807e49f8ac6e795007ea64e5ebf015e3166f04e165b41ba6b9dbcebe14" protocol=ttrpc version=3 Jan 23 01:07:05.932125 systemd[1]: Started cri-containerd-c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9.scope - libcontainer container c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9. Jan 23 01:07:05.940446 containerd[1604]: time="2026-01-23T01:07:05.940200779Z" level=info msg="StartContainer for \"3ee0acc6320329d885a77c6850b8afd968308ec906bba5f5b8fa282256560dab\" returns successfully" Jan 23 01:07:05.989755 containerd[1604]: time="2026-01-23T01:07:05.989019124Z" level=info msg="StartContainer for \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\" returns successfully" Jan 23 01:07:05.990732 systemd[1]: cri-containerd-c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9.scope: Deactivated successfully. 
Jan 23 01:07:05.994858 containerd[1604]: time="2026-01-23T01:07:05.994824542Z" level=info msg="received container exit event container_id:\"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\" id:\"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\" pid:2335 exited_at:{seconds:1769130425 nanos:993929733}" Jan 23 01:07:06.015683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9-rootfs.mount: Deactivated successfully. Jan 23 01:07:06.692493 kubelet[2020]: E0123 01:07:06.692447 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:06.881202 containerd[1604]: time="2026-01-23T01:07:06.881164267Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:07:06.886055 kubelet[2020]: I0123 01:07:06.886002 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-67cqm" podStartSLOduration=3.865256872 podStartE2EDuration="13.88598325s" podCreationTimestamp="2026-01-23 01:06:53 +0000 UTC" firstStartedPulling="2026-01-23 01:06:55.792659793 +0000 UTC m=+3.403863854" lastFinishedPulling="2026-01-23 01:07:05.81338617 +0000 UTC m=+13.424590232" observedRunningTime="2026-01-23 01:07:06.885580831 +0000 UTC m=+14.496784936" watchObservedRunningTime="2026-01-23 01:07:06.88598325 +0000 UTC m=+14.497187357" Jan 23 01:07:06.892815 containerd[1604]: time="2026-01-23T01:07:06.892337052Z" level=info msg="Container e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:06.917125 containerd[1604]: time="2026-01-23T01:07:06.917078158Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\"" Jan 23 01:07:06.917816 containerd[1604]: time="2026-01-23T01:07:06.917693669Z" level=info msg="StartContainer for \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\"" Jan 23 01:07:06.918791 containerd[1604]: time="2026-01-23T01:07:06.918765978Z" level=info msg="connecting to shim e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05" address="unix:///run/containerd/s/03cb00807e49f8ac6e795007ea64e5ebf015e3166f04e165b41ba6b9dbcebe14" protocol=ttrpc version=3 Jan 23 01:07:06.947227 systemd[1]: Started cri-containerd-e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05.scope - libcontainer container e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05. Jan 23 01:07:06.979747 systemd[1]: cri-containerd-e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05.scope: Deactivated successfully. 
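The pod_startup_latency_tracker entry above already splits out where kube-proxy-67cqm's startup time went: the SLO duration (~3.87s) excludes the image pull, and the pull window accounts for the other ~10s of the ~13.9s end-to-end figure. A standard-library sketch re-deriving those numbers from the logged timestamps:

    // startup_latency_sketch.go: re-derives the image-pull window from the
    // pod_startup_latency_tracker entry for kube-proxy-67cqm above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's default time.String() layout
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-23 01:06:53 +0000 UTC")
    	first := mustParse("2026-01-23 01:06:55.792659793 +0000 UTC")
    	last := mustParse("2026-01-23 01:07:05.81338617 +0000 UTC")
    	running := mustParse("2026-01-23 01:07:06.885580831 +0000 UTC")

    	fmt.Println("image pull window:", last.Sub(first))      // ~10.02s
    	fmt.Println("end-to-end startup:", running.Sub(created)) // ~13.89s, consistent with podStartE2EDuration
    }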
Jan 23 01:07:06.983037 containerd[1604]: time="2026-01-23T01:07:06.982963590Z" level=info msg="received container exit event container_id:\"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\" id:\"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\" pid:2509 exited_at:{seconds:1769130426 nanos:980678855}" Jan 23 01:07:06.991852 containerd[1604]: time="2026-01-23T01:07:06.991805829Z" level=info msg="StartContainer for \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\" returns successfully" Jan 23 01:07:07.002549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05-rootfs.mount: Deactivated successfully. Jan 23 01:07:07.693199 kubelet[2020]: E0123 01:07:07.693127 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:07.888499 containerd[1604]: time="2026-01-23T01:07:07.888456799Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 01:07:07.908574 containerd[1604]: time="2026-01-23T01:07:07.908518973Z" level=info msg="Container 70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:07.923175 containerd[1604]: time="2026-01-23T01:07:07.923081971Z" level=info msg="CreateContainer within sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\"" Jan 23 01:07:07.924973 containerd[1604]: time="2026-01-23T01:07:07.923983495Z" level=info msg="StartContainer for \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\"" Jan 23 01:07:07.925168 containerd[1604]: time="2026-01-23T01:07:07.925144243Z" level=info msg="connecting to shim 70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb" address="unix:///run/containerd/s/03cb00807e49f8ac6e795007ea64e5ebf015e3166f04e165b41ba6b9dbcebe14" protocol=ttrpc version=3 Jan 23 01:07:07.950250 systemd[1]: Started cri-containerd-70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb.scope - libcontainer container 70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb. Jan 23 01:07:08.000312 containerd[1604]: time="2026-01-23T01:07:08.000211351Z" level=info msg="StartContainer for \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\" returns successfully" Jan 23 01:07:08.064974 kubelet[2020]: I0123 01:07:08.064557 2020 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:07:08.302571 kernel: Initializing XFRM netlink socket Jan 23 01:07:08.695088 kubelet[2020]: E0123 01:07:08.694994 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:09.241476 update_engine[1588]: I20260123 01:07:09.241268 1588 update_attempter.cc:509] Updating boot flags... 
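With the CNI coming up and the sync loop healthy, the kubelet reports the node status is about to flip to Ready. A short sketch, assuming a kubeconfig with read access (the path below is a placeholder), that reads the same node object's conditions through client-go:

    // node_ready_sketch.go: reads the conditions of the node named in the log.
    // The kubeconfig path is a placeholder; any kubeconfig with read access works.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "10.0.2.223", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
    	}
    }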
Jan 23 01:07:09.696733 kubelet[2020]: E0123 01:07:09.696687 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:09.951531 systemd-networkd[1489]: cilium_host: Link UP Jan 23 01:07:09.951734 systemd-networkd[1489]: cilium_net: Link UP Jan 23 01:07:09.953165 systemd-networkd[1489]: cilium_net: Gained carrier Jan 23 01:07:09.953480 systemd-networkd[1489]: cilium_host: Gained carrier Jan 23 01:07:10.067224 systemd-networkd[1489]: cilium_net: Gained IPv6LL Jan 23 01:07:10.096375 systemd-networkd[1489]: cilium_vxlan: Link UP Jan 23 01:07:10.096386 systemd-networkd[1489]: cilium_vxlan: Gained carrier Jan 23 01:07:10.267108 systemd-networkd[1489]: cilium_host: Gained IPv6LL Jan 23 01:07:10.350965 kernel: NET: Registered PF_ALG protocol family Jan 23 01:07:10.697402 kubelet[2020]: E0123 01:07:10.697346 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:10.966787 systemd-networkd[1489]: lxc_health: Link UP Jan 23 01:07:10.974119 systemd-networkd[1489]: lxc_health: Gained carrier Jan 23 01:07:11.476986 systemd-networkd[1489]: cilium_vxlan: Gained IPv6LL Jan 23 01:07:11.698025 kubelet[2020]: E0123 01:07:11.697955 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:11.786613 kubelet[2020]: I0123 01:07:11.786210 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6wq9t" podStartSLOduration=13.084059481 podStartE2EDuration="18.786190347s" podCreationTimestamp="2026-01-23 01:06:53 +0000 UTC" firstStartedPulling="2026-01-23 01:06:55.791034785 +0000 UTC m=+3.402238843" lastFinishedPulling="2026-01-23 01:07:01.493165635 +0000 UTC m=+9.104369709" observedRunningTime="2026-01-23 01:07:08.914833796 +0000 UTC m=+16.526037971" watchObservedRunningTime="2026-01-23 01:07:11.786190347 +0000 UTC m=+19.397394418" Jan 23 01:07:12.243151 systemd-networkd[1489]: lxc_health: Gained IPv6LL Jan 23 01:07:12.698749 kubelet[2020]: E0123 01:07:12.698629 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:13.685238 kubelet[2020]: E0123 01:07:13.684842 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:13.699786 kubelet[2020]: E0123 01:07:13.699696 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:14.700290 kubelet[2020]: E0123 01:07:14.700248 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:15.056099 kubelet[2020]: I0123 01:07:15.055981 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9n8\" (UniqueName: \"kubernetes.io/projected/9c4a8d56-be7a-420e-b9e5-f76cae9231a2-kube-api-access-vn9n8\") pod \"nginx-deployment-7fcdb87857-ssdvl\" (UID: \"9c4a8d56-be7a-420e-b9e5-f76cae9231a2\") " pod="default/nginx-deployment-7fcdb87857-ssdvl" Jan 23 01:07:15.057957 systemd[1]: Created slice kubepods-besteffort-pod9c4a8d56_be7a_420e_b9e5_f76cae9231a2.slice - libcontainer container kubepods-besteffort-pod9c4a8d56_be7a_420e_b9e5_f76cae9231a2.slice. 
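The systemd-networkd messages above track Cilium bringing up its datapath interfaces: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, and the per-endpoint lxc_health link. A standard-library sketch that lists those interfaces on the node, matching by the name prefixes seen in the log:

    // cni_links_sketch.go: lists the Cilium-created interfaces reported above
    // (cilium_host, cilium_net, cilium_vxlan, lxc_health) using only the stdlib.
    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"strings"
    )

    func main() {
    	ifs, err := net.Interfaces()
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, ifc := range ifs {
    		if !strings.HasPrefix(ifc.Name, "cilium") && !strings.HasPrefix(ifc.Name, "lxc") {
    			continue
    		}
    		addrs, _ := ifc.Addrs()
    		fmt.Printf("%-16s flags=%v addrs=%v\n", ifc.Name, ifc.Flags, addrs)
    	}
    }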
Jan 23 01:07:15.361225 containerd[1604]: time="2026-01-23T01:07:15.360771132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ssdvl,Uid:9c4a8d56-be7a-420e-b9e5-f76cae9231a2,Namespace:default,Attempt:0,}" Jan 23 01:07:15.384139 systemd-networkd[1489]: lxc6893361b78e8: Link UP Jan 23 01:07:15.390383 kernel: eth0: renamed from tmp29995 Jan 23 01:07:15.392108 systemd-networkd[1489]: lxc6893361b78e8: Gained carrier Jan 23 01:07:15.514243 containerd[1604]: time="2026-01-23T01:07:15.513860160Z" level=info msg="connecting to shim 29995e08af33fa870cfb13913bc5d3748d11d41381d5cc9326b8218199122d40" address="unix:///run/containerd/s/71d33dc46a4f03cddad252f18fb026c1dbb8efc045583e81595c86721850df16" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:15.542060 systemd[1]: Started cri-containerd-29995e08af33fa870cfb13913bc5d3748d11d41381d5cc9326b8218199122d40.scope - libcontainer container 29995e08af33fa870cfb13913bc5d3748d11d41381d5cc9326b8218199122d40. Jan 23 01:07:15.585156 containerd[1604]: time="2026-01-23T01:07:15.585083482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ssdvl,Uid:9c4a8d56-be7a-420e-b9e5-f76cae9231a2,Namespace:default,Attempt:0,} returns sandbox id \"29995e08af33fa870cfb13913bc5d3748d11d41381d5cc9326b8218199122d40\"" Jan 23 01:07:15.586393 containerd[1604]: time="2026-01-23T01:07:15.586364409Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 01:07:15.700817 kubelet[2020]: E0123 01:07:15.700767 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:16.467092 systemd-networkd[1489]: lxc6893361b78e8: Gained IPv6LL Jan 23 01:07:16.701578 kubelet[2020]: E0123 01:07:16.701363 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:17.701986 kubelet[2020]: E0123 01:07:17.701942 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:17.802627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028885668.mount: Deactivated successfully. 
Jan 23 01:07:18.502956 containerd[1604]: time="2026-01-23T01:07:18.502913418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:18.504387 containerd[1604]: time="2026-01-23T01:07:18.504248757Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 23 01:07:18.505718 containerd[1604]: time="2026-01-23T01:07:18.505698015Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:18.508562 containerd[1604]: time="2026-01-23T01:07:18.508538976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:18.509277 containerd[1604]: time="2026-01-23T01:07:18.509149598Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 2.922734572s" Jan 23 01:07:18.509277 containerd[1604]: time="2026-01-23T01:07:18.509176036Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 01:07:18.511265 containerd[1604]: time="2026-01-23T01:07:18.511244452Z" level=info msg="CreateContainer within sandbox \"29995e08af33fa870cfb13913bc5d3748d11d41381d5cc9326b8218199122d40\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 23 01:07:18.523157 containerd[1604]: time="2026-01-23T01:07:18.522723350Z" level=info msg="Container 155689f432fc0c500e0be382a69f57262c07813e122e3789f82e6422e73337b6: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:18.526681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063114283.mount: Deactivated successfully. Jan 23 01:07:18.533122 containerd[1604]: time="2026-01-23T01:07:18.533045523Z" level=info msg="CreateContainer within sandbox \"29995e08af33fa870cfb13913bc5d3748d11d41381d5cc9326b8218199122d40\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"155689f432fc0c500e0be382a69f57262c07813e122e3789f82e6422e73337b6\"" Jan 23 01:07:18.533594 containerd[1604]: time="2026-01-23T01:07:18.533579058Z" level=info msg="StartContainer for \"155689f432fc0c500e0be382a69f57262c07813e122e3789f82e6422e73337b6\"" Jan 23 01:07:18.534443 containerd[1604]: time="2026-01-23T01:07:18.534389953Z" level=info msg="connecting to shim 155689f432fc0c500e0be382a69f57262c07813e122e3789f82e6422e73337b6" address="unix:///run/containerd/s/71d33dc46a4f03cddad252f18fb026c1dbb8efc045583e81595c86721850df16" protocol=ttrpc version=3 Jan 23 01:07:18.562068 systemd[1]: Started cri-containerd-155689f432fc0c500e0be382a69f57262c07813e122e3789f82e6422e73337b6.scope - libcontainer container 155689f432fc0c500e0be382a69f57262c07813e122e3789f82e6422e73337b6. 
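The nginx image lands in containerd's k8s.io namespace, the same namespace named in the shim connections above, at the 63,836,358-byte size reported by the pull. A sketch listing images and sizes through the containerd Go client; the import path below assumes the 1.x client (github.com/containerd/containerd), while containerd 2.x moved it to github.com/containerd/containerd/v2/client.

    // image_list_sketch.go: lists images in containerd's k8s.io namespace, the
    // namespace the CRI sandboxes above run in. Import path assumes the 1.x Go client.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	imgs, err := client.ListImages(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, img := range imgs {
    		size, err := img.Size(ctx)
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s\t%d bytes\n", img.Name(), size)
    	}
    }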
Jan 23 01:07:18.589821 containerd[1604]: time="2026-01-23T01:07:18.589744623Z" level=info msg="StartContainer for \"155689f432fc0c500e0be382a69f57262c07813e122e3789f82e6422e73337b6\" returns successfully" Jan 23 01:07:18.702397 kubelet[2020]: E0123 01:07:18.702353 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:18.942178 kubelet[2020]: I0123 01:07:18.942009 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-ssdvl" podStartSLOduration=1.017739934 podStartE2EDuration="3.941870317s" podCreationTimestamp="2026-01-23 01:07:15 +0000 UTC" firstStartedPulling="2026-01-23 01:07:15.586004212 +0000 UTC m=+23.197208268" lastFinishedPulling="2026-01-23 01:07:18.510134584 +0000 UTC m=+26.121338651" observedRunningTime="2026-01-23 01:07:18.941292287 +0000 UTC m=+26.552496459" watchObservedRunningTime="2026-01-23 01:07:18.941870317 +0000 UTC m=+26.553074437" Jan 23 01:07:19.703618 kubelet[2020]: E0123 01:07:19.703555 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:20.704775 kubelet[2020]: E0123 01:07:20.704686 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:21.705347 kubelet[2020]: E0123 01:07:21.705257 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:22.705813 kubelet[2020]: E0123 01:07:22.705744 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:23.706639 kubelet[2020]: E0123 01:07:23.706555 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:24.690145 kubelet[2020]: I0123 01:07:24.689720 2020 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:07:24.707049 kubelet[2020]: E0123 01:07:24.706964 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:25.299503 systemd[1]: Created slice kubepods-besteffort-pod5bfa723c_92c6_443b_a516_8de48e68335d.slice - libcontainer container kubepods-besteffort-pod5bfa723c_92c6_443b_a516_8de48e68335d.slice. 
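Each "Created slice" message follows the same visible convention: kubepods-<qos-class>-pod<pod UID with dashes mapped to underscores>.slice. The sketch below reproduces that observed pattern for the pod created here; it mirrors the log, not the kubelet's cgroup code.

    // slice_name_sketch.go: rebuilds the systemd slice name pattern visible in the
    // "Created slice" entries above; this mirrors the observed convention rather
    // than quoting kubelet source.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func sliceName(qos, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	// UID taken from the nfs-server-provisioner-0 volume entries that follow.
    	fmt.Println(sliceName("besteffort", "5bfa723c-92c6-443b-a516-8de48e68335d"))
    	// Prints: kubepods-besteffort-pod5bfa723c_92c6_443b_a516_8de48e68335d.slice
    }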
Jan 23 01:07:25.315592 kubelet[2020]: I0123 01:07:25.315466 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5bfa723c-92c6-443b-a516-8de48e68335d-data\") pod \"nfs-server-provisioner-0\" (UID: \"5bfa723c-92c6-443b-a516-8de48e68335d\") " pod="default/nfs-server-provisioner-0" Jan 23 01:07:25.315592 kubelet[2020]: I0123 01:07:25.315529 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4528\" (UniqueName: \"kubernetes.io/projected/5bfa723c-92c6-443b-a516-8de48e68335d-kube-api-access-g4528\") pod \"nfs-server-provisioner-0\" (UID: \"5bfa723c-92c6-443b-a516-8de48e68335d\") " pod="default/nfs-server-provisioner-0" Jan 23 01:07:25.607387 containerd[1604]: time="2026-01-23T01:07:25.606699164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5bfa723c-92c6-443b-a516-8de48e68335d,Namespace:default,Attempt:0,}" Jan 23 01:07:25.646424 kernel: eth0: renamed from tmp2df15 Jan 23 01:07:25.655026 systemd-networkd[1489]: lxc81e45eb4f382: Link UP Jan 23 01:07:25.656737 systemd-networkd[1489]: lxc81e45eb4f382: Gained carrier Jan 23 01:07:25.707804 kubelet[2020]: E0123 01:07:25.707672 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:25.840565 containerd[1604]: time="2026-01-23T01:07:25.840133687Z" level=info msg="connecting to shim 2df1546e681024a537a0eaadce6b41de7cf03b4b515bfb20b532f11bafea7237" address="unix:///run/containerd/s/7f6c6d2605de394c0ef22e240fa7e3dc26db5cd09106e49a7c6b99ec6231f3ca" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:25.863041 systemd[1]: Started cri-containerd-2df1546e681024a537a0eaadce6b41de7cf03b4b515bfb20b532f11bafea7237.scope - libcontainer container 2df1546e681024a537a0eaadce6b41de7cf03b4b515bfb20b532f11bafea7237. Jan 23 01:07:25.908373 containerd[1604]: time="2026-01-23T01:07:25.908308312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5bfa723c-92c6-443b-a516-8de48e68335d,Namespace:default,Attempt:0,} returns sandbox id \"2df1546e681024a537a0eaadce6b41de7cf03b4b515bfb20b532f11bafea7237\"" Jan 23 01:07:25.909920 containerd[1604]: time="2026-01-23T01:07:25.909867422Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 23 01:07:26.708480 kubelet[2020]: E0123 01:07:26.708431 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:27.283564 systemd-networkd[1489]: lxc81e45eb4f382: Gained IPv6LL Jan 23 01:07:27.709220 kubelet[2020]: E0123 01:07:27.709190 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:27.877913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249531538.mount: Deactivated successfully. 
Jan 23 01:07:28.709971 kubelet[2020]: E0123 01:07:28.709942 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:29.545000 containerd[1604]: time="2026-01-23T01:07:29.544870990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:29.547510 containerd[1604]: time="2026-01-23T01:07:29.547132333Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039474" Jan 23 01:07:29.549295 containerd[1604]: time="2026-01-23T01:07:29.549245126Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:29.554872 containerd[1604]: time="2026-01-23T01:07:29.554818286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:29.557963 containerd[1604]: time="2026-01-23T01:07:29.557317895Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.647414302s" Jan 23 01:07:29.557963 containerd[1604]: time="2026-01-23T01:07:29.557383501Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 23 01:07:29.562395 containerd[1604]: time="2026-01-23T01:07:29.562233820Z" level=info msg="CreateContainer within sandbox \"2df1546e681024a537a0eaadce6b41de7cf03b4b515bfb20b532f11bafea7237\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 23 01:07:29.578504 containerd[1604]: time="2026-01-23T01:07:29.578426194Z" level=info msg="Container be3a8166059a1127065362db264a54962fb4b1d634f6ce4b05eb78a8d4af1377: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:29.598193 containerd[1604]: time="2026-01-23T01:07:29.598124070Z" level=info msg="CreateContainer within sandbox \"2df1546e681024a537a0eaadce6b41de7cf03b4b515bfb20b532f11bafea7237\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"be3a8166059a1127065362db264a54962fb4b1d634f6ce4b05eb78a8d4af1377\"" Jan 23 01:07:29.599354 containerd[1604]: time="2026-01-23T01:07:29.599306348Z" level=info msg="StartContainer for \"be3a8166059a1127065362db264a54962fb4b1d634f6ce4b05eb78a8d4af1377\"" Jan 23 01:07:29.601826 containerd[1604]: time="2026-01-23T01:07:29.601762764Z" level=info msg="connecting to shim be3a8166059a1127065362db264a54962fb4b1d634f6ce4b05eb78a8d4af1377" address="unix:///run/containerd/s/7f6c6d2605de394c0ef22e240fa7e3dc26db5cd09106e49a7c6b99ec6231f3ca" protocol=ttrpc version=3 Jan 23 01:07:29.628067 systemd[1]: Started cri-containerd-be3a8166059a1127065362db264a54962fb4b1d634f6ce4b05eb78a8d4af1377.scope - libcontainer container be3a8166059a1127065362db264a54962fb4b1d634f6ce4b05eb78a8d4af1377. 
Jan 23 01:07:29.660324 containerd[1604]: time="2026-01-23T01:07:29.660284547Z" level=info msg="StartContainer for \"be3a8166059a1127065362db264a54962fb4b1d634f6ce4b05eb78a8d4af1377\" returns successfully" Jan 23 01:07:29.711062 kubelet[2020]: E0123 01:07:29.711019 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:29.974606 kubelet[2020]: I0123 01:07:29.974460 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.324332757 podStartE2EDuration="4.974395633s" podCreationTimestamp="2026-01-23 01:07:25 +0000 UTC" firstStartedPulling="2026-01-23 01:07:25.909533721 +0000 UTC m=+33.520737795" lastFinishedPulling="2026-01-23 01:07:29.559596525 +0000 UTC m=+37.170800671" observedRunningTime="2026-01-23 01:07:29.973705237 +0000 UTC m=+37.584909438" watchObservedRunningTime="2026-01-23 01:07:29.974395633 +0000 UTC m=+37.585599815" Jan 23 01:07:30.712107 kubelet[2020]: E0123 01:07:30.712031 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:31.712559 kubelet[2020]: E0123 01:07:31.712464 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:32.713321 kubelet[2020]: E0123 01:07:32.713231 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:33.685641 kubelet[2020]: E0123 01:07:33.685569 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:33.713480 kubelet[2020]: E0123 01:07:33.713369 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:34.714139 kubelet[2020]: E0123 01:07:34.714070 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:35.715222 kubelet[2020]: E0123 01:07:35.715165 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:36.716207 kubelet[2020]: E0123 01:07:36.716110 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:37.716836 kubelet[2020]: E0123 01:07:37.716726 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:38.717427 kubelet[2020]: E0123 01:07:38.717343 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:39.597073 systemd[1]: Created slice kubepods-besteffort-pod402574e1_5f16_4ee5_a317_0dded13c0975.slice - libcontainer container kubepods-besteffort-pod402574e1_5f16_4ee5_a317_0dded13c0975.slice. 
Jan 23 01:07:39.608500 kubelet[2020]: I0123 01:07:39.608436 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-19159bfd-25ed-44b5-a416-e8c96500800a\" (UniqueName: \"kubernetes.io/nfs/402574e1-5f16-4ee5-a317-0dded13c0975-pvc-19159bfd-25ed-44b5-a416-e8c96500800a\") pod \"test-pod-1\" (UID: \"402574e1-5f16-4ee5-a317-0dded13c0975\") " pod="default/test-pod-1" Jan 23 01:07:39.608500 kubelet[2020]: I0123 01:07:39.608469 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrk67\" (UniqueName: \"kubernetes.io/projected/402574e1-5f16-4ee5-a317-0dded13c0975-kube-api-access-vrk67\") pod \"test-pod-1\" (UID: \"402574e1-5f16-4ee5-a317-0dded13c0975\") " pod="default/test-pod-1" Jan 23 01:07:39.718171 kubelet[2020]: E0123 01:07:39.718119 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:39.796951 kernel: netfs: FS-Cache loaded Jan 23 01:07:39.867237 kernel: RPC: Registered named UNIX socket transport module. Jan 23 01:07:39.867440 kernel: RPC: Registered udp transport module. Jan 23 01:07:39.867496 kernel: RPC: Registered tcp transport module. Jan 23 01:07:39.867544 kernel: RPC: Registered tcp-with-tls transport module. Jan 23 01:07:39.868142 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 23 01:07:40.048725 kernel: NFS: Registering the id_resolver key type Jan 23 01:07:40.048966 kernel: Key type id_resolver registered Jan 23 01:07:40.049033 kernel: Key type id_legacy registered Jan 23 01:07:40.081233 nfsidmap[3354]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 01:07:40.084673 nfsidmap[3354]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 01:07:40.089737 nfsidmap[3355]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 01:07:40.090560 nfsidmap[3355]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 01:07:40.114591 nfsrahead[3357]: setting /var/lib/kubelet/pods/402574e1-5f16-4ee5-a317-0dded13c0975/volumes/kubernetes.io~nfs/pvc-19159bfd-25ed-44b5-a416-e8c96500800a readahead to 128 Jan 23 01:07:40.200179 containerd[1604]: time="2026-01-23T01:07:40.200072986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:402574e1-5f16-4ee5-a317-0dded13c0975,Namespace:default,Attempt:0,}" Jan 23 01:07:40.228621 systemd-networkd[1489]: lxce8cbff68530b: Link UP Jan 23 01:07:40.233915 kernel: eth0: renamed from tmpa3079 Jan 23 01:07:40.235210 systemd-networkd[1489]: lxce8cbff68530b: Gained carrier Jan 23 01:07:40.372153 containerd[1604]: time="2026-01-23T01:07:40.372113205Z" level=info msg="connecting to shim a307925eebf81075d8287c84a52af7ca3d03ea0facb883aa1f7fbed32f7641bf" address="unix:///run/containerd/s/2b7c7022925de953cd8e33163eceb0e3a4bf8d6885470b2d2c25e5d70189b847" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:40.396207 systemd[1]: Started cri-containerd-a307925eebf81075d8287c84a52af7ca3d03ea0facb883aa1f7fbed32f7641bf.scope - libcontainer container a307925eebf81075d8287c84a52af7ca3d03ea0facb883aa1f7fbed32f7641bf. 
Jan 23 01:07:40.445880 containerd[1604]: time="2026-01-23T01:07:40.445839758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:402574e1-5f16-4ee5-a317-0dded13c0975,Namespace:default,Attempt:0,} returns sandbox id \"a307925eebf81075d8287c84a52af7ca3d03ea0facb883aa1f7fbed32f7641bf\"" Jan 23 01:07:40.446968 containerd[1604]: time="2026-01-23T01:07:40.446872796Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 01:07:40.719733 kubelet[2020]: E0123 01:07:40.719683 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:40.841457 containerd[1604]: time="2026-01-23T01:07:40.841371888Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:40.843102 containerd[1604]: time="2026-01-23T01:07:40.843028009Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 23 01:07:40.852118 containerd[1604]: time="2026-01-23T01:07:40.852044226Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 405.035865ms" Jan 23 01:07:40.852118 containerd[1604]: time="2026-01-23T01:07:40.852112293Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 01:07:40.856502 containerd[1604]: time="2026-01-23T01:07:40.856426691Z" level=info msg="CreateContainer within sandbox \"a307925eebf81075d8287c84a52af7ca3d03ea0facb883aa1f7fbed32f7641bf\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 23 01:07:40.873923 containerd[1604]: time="2026-01-23T01:07:40.873085811Z" level=info msg="Container 604e5e66f693ba236ad2d512d9d19331bbf8dc7d4185a0cf671e7e9be9c71d31: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:40.894465 containerd[1604]: time="2026-01-23T01:07:40.894372515Z" level=info msg="CreateContainer within sandbox \"a307925eebf81075d8287c84a52af7ca3d03ea0facb883aa1f7fbed32f7641bf\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"604e5e66f693ba236ad2d512d9d19331bbf8dc7d4185a0cf671e7e9be9c71d31\"" Jan 23 01:07:40.896168 containerd[1604]: time="2026-01-23T01:07:40.896094149Z" level=info msg="StartContainer for \"604e5e66f693ba236ad2d512d9d19331bbf8dc7d4185a0cf671e7e9be9c71d31\"" Jan 23 01:07:40.898419 containerd[1604]: time="2026-01-23T01:07:40.898357860Z" level=info msg="connecting to shim 604e5e66f693ba236ad2d512d9d19331bbf8dc7d4185a0cf671e7e9be9c71d31" address="unix:///run/containerd/s/2b7c7022925de953cd8e33163eceb0e3a4bf8d6885470b2d2c25e5d70189b847" protocol=ttrpc version=3 Jan 23 01:07:40.944185 systemd[1]: Started cri-containerd-604e5e66f693ba236ad2d512d9d19331bbf8dc7d4185a0cf671e7e9be9c71d31.scope - libcontainer container 604e5e66f693ba236ad2d512d9d19331bbf8dc7d4185a0cf671e7e9be9c71d31. 
Jan 23 01:07:40.988399 containerd[1604]: time="2026-01-23T01:07:40.987285389Z" level=info msg="StartContainer for \"604e5e66f693ba236ad2d512d9d19331bbf8dc7d4185a0cf671e7e9be9c71d31\" returns successfully" Jan 23 01:07:41.002090 kubelet[2020]: I0123 01:07:41.002033 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.595018474 podStartE2EDuration="15.002018883s" podCreationTimestamp="2026-01-23 01:07:26 +0000 UTC" firstStartedPulling="2026-01-23 01:07:40.4464965 +0000 UTC m=+48.057700556" lastFinishedPulling="2026-01-23 01:07:40.85349684 +0000 UTC m=+48.464700965" observedRunningTime="2026-01-23 01:07:41.001475303 +0000 UTC m=+48.612679409" watchObservedRunningTime="2026-01-23 01:07:41.002018883 +0000 UTC m=+48.613222977" Jan 23 01:07:41.299246 systemd-networkd[1489]: lxce8cbff68530b: Gained IPv6LL Jan 23 01:07:41.720272 kubelet[2020]: E0123 01:07:41.720220 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:42.720680 kubelet[2020]: E0123 01:07:42.720576 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:43.721358 kubelet[2020]: E0123 01:07:43.721266 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:44.721723 kubelet[2020]: E0123 01:07:44.721681 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:45.722419 kubelet[2020]: E0123 01:07:45.722361 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:46.722858 kubelet[2020]: E0123 01:07:46.722805 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:47.724151 kubelet[2020]: E0123 01:07:47.724056 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:48.724409 kubelet[2020]: E0123 01:07:48.724259 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:48.832580 containerd[1604]: time="2026-01-23T01:07:48.832521296Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:07:48.843122 containerd[1604]: time="2026-01-23T01:07:48.843044781Z" level=info msg="StopContainer for \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\" with timeout 2 (s)" Jan 23 01:07:48.843454 containerd[1604]: time="2026-01-23T01:07:48.843421739Z" level=info msg="Stop container \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\" with signal terminated" Jan 23 01:07:48.861279 systemd-networkd[1489]: lxc_health: Link DOWN Jan 23 01:07:48.861293 systemd-networkd[1489]: lxc_health: Lost carrier Jan 23 01:07:48.884496 systemd[1]: cri-containerd-70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb.scope: Deactivated successfully. 
Jan 23 01:07:48.885477 systemd[1]: cri-containerd-70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb.scope: Consumed 6.219s CPU time, 120.3M memory peak, 104K read from disk, 13.3M written to disk. Jan 23 01:07:48.889649 containerd[1604]: time="2026-01-23T01:07:48.889605421Z" level=info msg="received container exit event container_id:\"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\" id:\"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\" pid:2548 exited_at:{seconds:1769130468 nanos:888626703}" Jan 23 01:07:48.922240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb-rootfs.mount: Deactivated successfully. Jan 23 01:07:49.414338 containerd[1604]: time="2026-01-23T01:07:49.414219142Z" level=info msg="StopContainer for \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\" returns successfully" Jan 23 01:07:49.415114 containerd[1604]: time="2026-01-23T01:07:49.414846613Z" level=info msg="StopPodSandbox for \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\"" Jan 23 01:07:49.415114 containerd[1604]: time="2026-01-23T01:07:49.414915593Z" level=info msg="Container to stop \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:07:49.415114 containerd[1604]: time="2026-01-23T01:07:49.414928097Z" level=info msg="Container to stop \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:07:49.415114 containerd[1604]: time="2026-01-23T01:07:49.414938635Z" level=info msg="Container to stop \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:07:49.415114 containerd[1604]: time="2026-01-23T01:07:49.414947594Z" level=info msg="Container to stop \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:07:49.415114 containerd[1604]: time="2026-01-23T01:07:49.414956116Z" level=info msg="Container to stop \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:07:49.423175 systemd[1]: cri-containerd-35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4.scope: Deactivated successfully. Jan 23 01:07:49.424846 containerd[1604]: time="2026-01-23T01:07:49.424623691Z" level=info msg="received sandbox exit event container_id:\"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" id:\"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" exit_status:137 exited_at:{seconds:1769130469 nanos:424040128}" monitor_name=podsandbox Jan 23 01:07:49.446358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4-rootfs.mount: Deactivated successfully. 
Jan 23 01:07:49.451556 containerd[1604]: time="2026-01-23T01:07:49.451322665Z" level=info msg="shim disconnected" id=35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4 namespace=k8s.io Jan 23 01:07:49.451815 containerd[1604]: time="2026-01-23T01:07:49.451675245Z" level=warning msg="cleaning up after shim disconnected" id=35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4 namespace=k8s.io Jan 23 01:07:49.451815 containerd[1604]: time="2026-01-23T01:07:49.451692033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:07:49.468069 containerd[1604]: time="2026-01-23T01:07:49.468023959Z" level=info msg="TearDown network for sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" successfully" Jan 23 01:07:49.468920 containerd[1604]: time="2026-01-23T01:07:49.468256453Z" level=info msg="received sandbox container exit event sandbox_id:\"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" exit_status:137 exited_at:{seconds:1769130469 nanos:424040128}" monitor_name=criService Jan 23 01:07:49.469012 containerd[1604]: time="2026-01-23T01:07:49.468994972Z" level=info msg="StopPodSandbox for \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" returns successfully" Jan 23 01:07:49.469619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4-shm.mount: Deactivated successfully. Jan 23 01:07:49.570715 kubelet[2020]: I0123 01:07:49.570659 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b386b6df-9ccb-4071-81bf-293ed9f93b64-clustermesh-secrets\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.570974 kubelet[2020]: I0123 01:07:49.570739 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-host-proc-sys-kernel\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.570974 kubelet[2020]: I0123 01:07:49.570778 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-lib-modules\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.570974 kubelet[2020]: I0123 01:07:49.570817 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-cgroup\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.570974 kubelet[2020]: I0123 01:07:49.570850 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-host-proc-sys-net\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.570974 kubelet[2020]: I0123 01:07:49.570919 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-bpf-maps\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: 
\"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.570974 kubelet[2020]: I0123 01:07:49.570961 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6dlz\" (UniqueName: \"kubernetes.io/projected/b386b6df-9ccb-4071-81bf-293ed9f93b64-kube-api-access-b6dlz\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.571241 kubelet[2020]: I0123 01:07:49.570998 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-run\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.571241 kubelet[2020]: I0123 01:07:49.571035 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b386b6df-9ccb-4071-81bf-293ed9f93b64-hubble-tls\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.571241 kubelet[2020]: I0123 01:07:49.571070 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cni-path\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.571241 kubelet[2020]: I0123 01:07:49.571102 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-hostproc\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.571241 kubelet[2020]: I0123 01:07:49.571148 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-config-path\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.571241 kubelet[2020]: I0123 01:07:49.571184 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-etc-cni-netd\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.571558 kubelet[2020]: I0123 01:07:49.571218 2020 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-xtables-lock\") pod \"b386b6df-9ccb-4071-81bf-293ed9f93b64\" (UID: \"b386b6df-9ccb-4071-81bf-293ed9f93b64\") " Jan 23 01:07:49.571558 kubelet[2020]: I0123 01:07:49.571311 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.571558 kubelet[2020]: I0123 01:07:49.571373 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.571558 kubelet[2020]: I0123 01:07:49.571410 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.571558 kubelet[2020]: I0123 01:07:49.571442 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.571777 kubelet[2020]: I0123 01:07:49.571472 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.571777 kubelet[2020]: I0123 01:07:49.571503 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.573377 kubelet[2020]: I0123 01:07:49.573006 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.573377 kubelet[2020]: I0123 01:07:49.573144 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-hostproc" (OuterVolumeSpecName: "hostproc") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.573377 kubelet[2020]: I0123 01:07:49.573225 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cni-path" (OuterVolumeSpecName: "cni-path") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.575221 kubelet[2020]: I0123 01:07:49.575153 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:07:49.579406 kubelet[2020]: I0123 01:07:49.579365 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b386b6df-9ccb-4071-81bf-293ed9f93b64-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:07:49.580415 kubelet[2020]: I0123 01:07:49.580363 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b386b6df-9ccb-4071-81bf-293ed9f93b64-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:07:49.584788 kubelet[2020]: I0123 01:07:49.584717 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:07:49.585107 kubelet[2020]: I0123 01:07:49.585035 2020 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b386b6df-9ccb-4071-81bf-293ed9f93b64-kube-api-access-b6dlz" (OuterVolumeSpecName: "kube-api-access-b6dlz") pod "b386b6df-9ccb-4071-81bf-293ed9f93b64" (UID: "b386b6df-9ccb-4071-81bf-293ed9f93b64"). InnerVolumeSpecName "kube-api-access-b6dlz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:07:49.588451 systemd[1]: var-lib-kubelet-pods-b386b6df\x2d9ccb\x2d4071\x2d81bf\x2d293ed9f93b64-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 01:07:49.588646 systemd[1]: var-lib-kubelet-pods-b386b6df\x2d9ccb\x2d4071\x2d81bf\x2d293ed9f93b64-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 01:07:49.594815 systemd[1]: var-lib-kubelet-pods-b386b6df\x2d9ccb\x2d4071\x2d81bf\x2d293ed9f93b64-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db6dlz.mount: Deactivated successfully. 
Jan 23 01:07:49.672518 kubelet[2020]: I0123 01:07:49.672401 2020 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-run\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.673982 kubelet[2020]: I0123 01:07:49.672972 2020 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b386b6df-9ccb-4071-81bf-293ed9f93b64-hubble-tls\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.673982 kubelet[2020]: I0123 01:07:49.672992 2020 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b6dlz\" (UniqueName: \"kubernetes.io/projected/b386b6df-9ccb-4071-81bf-293ed9f93b64-kube-api-access-b6dlz\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.673982 kubelet[2020]: I0123 01:07:49.673006 2020 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-hostproc\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.673982 kubelet[2020]: I0123 01:07:49.673855 2020 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-config-path\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.673982 kubelet[2020]: I0123 01:07:49.673869 2020 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-etc-cni-netd\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.673982 kubelet[2020]: I0123 01:07:49.673879 2020 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-xtables-lock\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.673982 kubelet[2020]: I0123 01:07:49.673905 2020 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cni-path\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.673982 kubelet[2020]: I0123 01:07:49.673915 2020 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-host-proc-sys-kernel\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.674242 kubelet[2020]: I0123 01:07:49.673924 2020 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-lib-modules\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.674242 kubelet[2020]: I0123 01:07:49.673933 2020 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-cilium-cgroup\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.674242 kubelet[2020]: I0123 01:07:49.673942 2020 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b386b6df-9ccb-4071-81bf-293ed9f93b64-clustermesh-secrets\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.674242 kubelet[2020]: I0123 01:07:49.673951 2020 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-bpf-maps\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.674242 kubelet[2020]: I0123 
01:07:49.673959 2020 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b386b6df-9ccb-4071-81bf-293ed9f93b64-host-proc-sys-net\") on node \"10.0.2.223\" DevicePath \"\"" Jan 23 01:07:49.725186 kubelet[2020]: E0123 01:07:49.725131 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:49.836536 systemd[1]: Removed slice kubepods-burstable-podb386b6df_9ccb_4071_81bf_293ed9f93b64.slice - libcontainer container kubepods-burstable-podb386b6df_9ccb_4071_81bf_293ed9f93b64.slice. Jan 23 01:07:49.837062 systemd[1]: kubepods-burstable-podb386b6df_9ccb_4071_81bf_293ed9f93b64.slice: Consumed 6.309s CPU time, 120.8M memory peak, 104K read from disk, 13.3M written to disk. Jan 23 01:07:50.025022 kubelet[2020]: I0123 01:07:50.024366 2020 scope.go:117] "RemoveContainer" containerID="70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb" Jan 23 01:07:50.031753 containerd[1604]: time="2026-01-23T01:07:50.030767200Z" level=info msg="RemoveContainer for \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\"" Jan 23 01:07:50.041347 containerd[1604]: time="2026-01-23T01:07:50.041209852Z" level=info msg="RemoveContainer for \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\" returns successfully" Jan 23 01:07:50.041965 kubelet[2020]: I0123 01:07:50.041854 2020 scope.go:117] "RemoveContainer" containerID="e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05" Jan 23 01:07:50.044984 containerd[1604]: time="2026-01-23T01:07:50.044755127Z" level=info msg="RemoveContainer for \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\"" Jan 23 01:07:50.052173 containerd[1604]: time="2026-01-23T01:07:50.052127432Z" level=info msg="RemoveContainer for \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\" returns successfully" Jan 23 01:07:50.052605 kubelet[2020]: I0123 01:07:50.052577 2020 scope.go:117] "RemoveContainer" containerID="c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9" Jan 23 01:07:50.057931 containerd[1604]: time="2026-01-23T01:07:50.057627611Z" level=info msg="RemoveContainer for \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\"" Jan 23 01:07:50.064953 containerd[1604]: time="2026-01-23T01:07:50.064816556Z" level=info msg="RemoveContainer for \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\" returns successfully" Jan 23 01:07:50.065403 kubelet[2020]: I0123 01:07:50.065367 2020 scope.go:117] "RemoveContainer" containerID="ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2" Jan 23 01:07:50.068872 containerd[1604]: time="2026-01-23T01:07:50.068803886Z" level=info msg="RemoveContainer for \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\"" Jan 23 01:07:50.074680 containerd[1604]: time="2026-01-23T01:07:50.074621798Z" level=info msg="RemoveContainer for \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\" returns successfully" Jan 23 01:07:50.075107 kubelet[2020]: I0123 01:07:50.074928 2020 scope.go:117] "RemoveContainer" containerID="aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c" Jan 23 01:07:50.078211 containerd[1604]: time="2026-01-23T01:07:50.078060603Z" level=info msg="RemoveContainer for \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\"" Jan 23 01:07:50.095401 containerd[1604]: time="2026-01-23T01:07:50.095360439Z" level=info msg="RemoveContainer for 
\"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\" returns successfully" Jan 23 01:07:50.095693 kubelet[2020]: I0123 01:07:50.095670 2020 scope.go:117] "RemoveContainer" containerID="70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb" Jan 23 01:07:50.096216 containerd[1604]: time="2026-01-23T01:07:50.096155433Z" level=error msg="ContainerStatus for \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\": not found" Jan 23 01:07:50.096505 kubelet[2020]: E0123 01:07:50.096354 2020 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\": not found" containerID="70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb" Jan 23 01:07:50.096505 kubelet[2020]: I0123 01:07:50.096399 2020 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb"} err="failed to get container status \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\": rpc error: code = NotFound desc = an error occurred when try to find container \"70256954906fcfcff159e2de89fe3468b73c9a154e6c1ba73aaa9147cfefdcbb\": not found" Jan 23 01:07:50.096505 kubelet[2020]: I0123 01:07:50.096465 2020 scope.go:117] "RemoveContainer" containerID="e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05" Jan 23 01:07:50.096962 containerd[1604]: time="2026-01-23T01:07:50.096930102Z" level=error msg="ContainerStatus for \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\": not found" Jan 23 01:07:50.097160 kubelet[2020]: E0123 01:07:50.097060 2020 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\": not found" containerID="e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05" Jan 23 01:07:50.097160 kubelet[2020]: I0123 01:07:50.097082 2020 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05"} err="failed to get container status \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\": rpc error: code = NotFound desc = an error occurred when try to find container \"e48f611dde1ae7fdc32bec2d62758ad13719a3c9e785ed6cce4c36b78db66d05\": not found" Jan 23 01:07:50.097160 kubelet[2020]: I0123 01:07:50.097103 2020 scope.go:117] "RemoveContainer" containerID="c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9" Jan 23 01:07:50.097396 containerd[1604]: time="2026-01-23T01:07:50.097369206Z" level=error msg="ContainerStatus for \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\": not found" Jan 23 01:07:50.097570 kubelet[2020]: E0123 01:07:50.097491 2020 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\": not found" containerID="c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9" Jan 23 01:07:50.097570 kubelet[2020]: I0123 01:07:50.097510 2020 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9"} err="failed to get container status \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c418ca9e0dac10fa06a4ae6290624d31e6b71ddce3fd747e138ebfd938443cf9\": not found" Jan 23 01:07:50.097570 kubelet[2020]: I0123 01:07:50.097523 2020 scope.go:117] "RemoveContainer" containerID="ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2" Jan 23 01:07:50.097709 containerd[1604]: time="2026-01-23T01:07:50.097682016Z" level=error msg="ContainerStatus for \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\": not found" Jan 23 01:07:50.097833 kubelet[2020]: E0123 01:07:50.097808 2020 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\": not found" containerID="ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2" Jan 23 01:07:50.097872 kubelet[2020]: I0123 01:07:50.097844 2020 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2"} err="failed to get container status \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce8ff61d3dd6ce3d828e3b03cd5055a58a4113663e17dbfbc33630735c32cba2\": not found" Jan 23 01:07:50.097872 kubelet[2020]: I0123 01:07:50.097868 2020 scope.go:117] "RemoveContainer" containerID="aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c" Jan 23 01:07:50.098123 containerd[1604]: time="2026-01-23T01:07:50.098060187Z" level=error msg="ContainerStatus for \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\": not found" Jan 23 01:07:50.098257 kubelet[2020]: E0123 01:07:50.098222 2020 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\": not found" containerID="aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c" Jan 23 01:07:50.098257 kubelet[2020]: I0123 01:07:50.098240 2020 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c"} err="failed to get container status \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"aaa52c3dddb28019107e71641ed8635d6c19d75dfd974fb96647741020fe4e3c\": not found" Jan 23 
01:07:50.726306 kubelet[2020]: E0123 01:07:50.726221 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:51.727190 kubelet[2020]: E0123 01:07:51.727118 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:51.825364 kubelet[2020]: I0123 01:07:51.825313 2020 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b386b6df-9ccb-4071-81bf-293ed9f93b64" path="/var/lib/kubelet/pods/b386b6df-9ccb-4071-81bf-293ed9f93b64/volumes" Jan 23 01:07:52.473015 kubelet[2020]: I0123 01:07:52.472936 2020 memory_manager.go:355] "RemoveStaleState removing state" podUID="b386b6df-9ccb-4071-81bf-293ed9f93b64" containerName="cilium-agent" Jan 23 01:07:52.490974 systemd[1]: Created slice kubepods-burstable-pod0822e569_9f83_47e0_bb36_75cff97d2ca2.slice - libcontainer container kubepods-burstable-pod0822e569_9f83_47e0_bb36_75cff97d2ca2.slice. Jan 23 01:07:52.521182 systemd[1]: Created slice kubepods-besteffort-pode50b42ed_313c_4ebf_93e3_91ba7a3c4786.slice - libcontainer container kubepods-besteffort-pode50b42ed_313c_4ebf_93e3_91ba7a3c4786.slice. Jan 23 01:07:52.595464 kubelet[2020]: I0123 01:07:52.595389 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-cilium-run\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.595464 kubelet[2020]: I0123 01:07:52.595449 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0822e569-9f83-47e0-bb36-75cff97d2ca2-hubble-tls\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.595464 kubelet[2020]: I0123 01:07:52.595477 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0822e569-9f83-47e0-bb36-75cff97d2ca2-cilium-config-path\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.595942 kubelet[2020]: I0123 01:07:52.595510 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-cilium-cgroup\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.595942 kubelet[2020]: I0123 01:07:52.595535 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-etc-cni-netd\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.595942 kubelet[2020]: I0123 01:07:52.595557 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-cni-path\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.595942 kubelet[2020]: I0123 01:07:52.595578 2020 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0822e569-9f83-47e0-bb36-75cff97d2ca2-clustermesh-secrets\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.595942 kubelet[2020]: I0123 01:07:52.595600 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n76q7\" (UniqueName: \"kubernetes.io/projected/0822e569-9f83-47e0-bb36-75cff97d2ca2-kube-api-access-n76q7\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.595942 kubelet[2020]: I0123 01:07:52.595621 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-bpf-maps\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.596368 kubelet[2020]: I0123 01:07:52.595641 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-hostproc\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.596368 kubelet[2020]: I0123 01:07:52.595661 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0822e569-9f83-47e0-bb36-75cff97d2ca2-cilium-ipsec-secrets\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.596368 kubelet[2020]: I0123 01:07:52.595681 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-host-proc-sys-net\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.596368 kubelet[2020]: I0123 01:07:52.595702 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-host-proc-sys-kernel\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.596368 kubelet[2020]: I0123 01:07:52.595730 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-lib-modules\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.596368 kubelet[2020]: I0123 01:07:52.595751 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0822e569-9f83-47e0-bb36-75cff97d2ca2-xtables-lock\") pod \"cilium-ct5ph\" (UID: \"0822e569-9f83-47e0-bb36-75cff97d2ca2\") " pod="kube-system/cilium-ct5ph" Jan 23 01:07:52.596763 kubelet[2020]: I0123 01:07:52.595774 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5wd\" (UniqueName: 
\"kubernetes.io/projected/e50b42ed-313c-4ebf-93e3-91ba7a3c4786-kube-api-access-hb5wd\") pod \"cilium-operator-6c4d7847fc-xdvnx\" (UID: \"e50b42ed-313c-4ebf-93e3-91ba7a3c4786\") " pod="kube-system/cilium-operator-6c4d7847fc-xdvnx" Jan 23 01:07:52.596763 kubelet[2020]: I0123 01:07:52.595800 2020 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e50b42ed-313c-4ebf-93e3-91ba7a3c4786-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xdvnx\" (UID: \"e50b42ed-313c-4ebf-93e3-91ba7a3c4786\") " pod="kube-system/cilium-operator-6c4d7847fc-xdvnx" Jan 23 01:07:52.728596 kubelet[2020]: E0123 01:07:52.728414 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:52.807978 containerd[1604]: time="2026-01-23T01:07:52.807832565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ct5ph,Uid:0822e569-9f83-47e0-bb36-75cff97d2ca2,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:52.826927 containerd[1604]: time="2026-01-23T01:07:52.826853271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xdvnx,Uid:e50b42ed-313c-4ebf-93e3-91ba7a3c4786,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:52.837159 containerd[1604]: time="2026-01-23T01:07:52.837097960Z" level=info msg="connecting to shim ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12" address="unix:///run/containerd/s/1a45a4c5be7fba3185b149cecb3633d84ff2a9e7c4a1d19e9f71436c7af2b99d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:52.855042 containerd[1604]: time="2026-01-23T01:07:52.854987990Z" level=info msg="connecting to shim dbe27ba0f2ad28793321c39c6f947dc0fad4db246c98ea750eabb4a56fcdf469" address="unix:///run/containerd/s/548631a857924be2a49a1251462915e7f725956e39e2b2f5ccb983356475d9fe" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:52.871188 systemd[1]: Started cri-containerd-ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12.scope - libcontainer container ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12. Jan 23 01:07:52.888077 systemd[1]: Started cri-containerd-dbe27ba0f2ad28793321c39c6f947dc0fad4db246c98ea750eabb4a56fcdf469.scope - libcontainer container dbe27ba0f2ad28793321c39c6f947dc0fad4db246c98ea750eabb4a56fcdf469. 
Jan 23 01:07:52.918359 containerd[1604]: time="2026-01-23T01:07:52.918322576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ct5ph,Uid:0822e569-9f83-47e0-bb36-75cff97d2ca2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\"" Jan 23 01:07:52.922559 containerd[1604]: time="2026-01-23T01:07:52.922530550Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:07:52.931741 containerd[1604]: time="2026-01-23T01:07:52.931712603Z" level=info msg="Container 5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:52.955034 containerd[1604]: time="2026-01-23T01:07:52.954946862Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e\"" Jan 23 01:07:52.955459 containerd[1604]: time="2026-01-23T01:07:52.955412765Z" level=info msg="StartContainer for \"5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e\"" Jan 23 01:07:52.956401 containerd[1604]: time="2026-01-23T01:07:52.956376258Z" level=info msg="connecting to shim 5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e" address="unix:///run/containerd/s/1a45a4c5be7fba3185b149cecb3633d84ff2a9e7c4a1d19e9f71436c7af2b99d" protocol=ttrpc version=3 Jan 23 01:07:52.958626 containerd[1604]: time="2026-01-23T01:07:52.958603931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xdvnx,Uid:e50b42ed-313c-4ebf-93e3-91ba7a3c4786,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbe27ba0f2ad28793321c39c6f947dc0fad4db246c98ea750eabb4a56fcdf469\"" Jan 23 01:07:52.960997 containerd[1604]: time="2026-01-23T01:07:52.960977977Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 01:07:52.976080 systemd[1]: Started cri-containerd-5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e.scope - libcontainer container 5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e. Jan 23 01:07:53.002701 containerd[1604]: time="2026-01-23T01:07:53.002050918Z" level=info msg="StartContainer for \"5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e\" returns successfully" Jan 23 01:07:53.007359 systemd[1]: cri-containerd-5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e.scope: Deactivated successfully. 
Jan 23 01:07:53.010669 containerd[1604]: time="2026-01-23T01:07:53.010618866Z" level=info msg="received container exit event container_id:\"5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e\" id:\"5a4af6c5c7c23553cf0f993c4fcb0996659849f6a3691ceaa52becf227f21f8e\" pid:3665 exited_at:{seconds:1769130473 nanos:10272084}" Jan 23 01:07:53.685172 kubelet[2020]: E0123 01:07:53.685112 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:53.728830 kubelet[2020]: E0123 01:07:53.728597 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:07:53.730047 containerd[1604]: time="2026-01-23T01:07:53.729991244Z" level=info msg="StopPodSandbox for \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\"" Jan 23 01:07:53.730236 containerd[1604]: time="2026-01-23T01:07:53.730196111Z" level=info msg="TearDown network for sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" successfully" Jan 23 01:07:53.730236 containerd[1604]: time="2026-01-23T01:07:53.730226906Z" level=info msg="StopPodSandbox for \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" returns successfully" Jan 23 01:07:53.730850 containerd[1604]: time="2026-01-23T01:07:53.730806509Z" level=info msg="RemovePodSandbox for \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\"" Jan 23 01:07:53.730955 containerd[1604]: time="2026-01-23T01:07:53.730851177Z" level=info msg="Forcibly stopping sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\"" Jan 23 01:07:53.731022 containerd[1604]: time="2026-01-23T01:07:53.730995613Z" level=info msg="TearDown network for sandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" successfully" Jan 23 01:07:53.733081 containerd[1604]: time="2026-01-23T01:07:53.733046974Z" level=info msg="Ensure that sandbox 35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4 in task-service has been cleanup successfully" Jan 23 01:07:53.740146 containerd[1604]: time="2026-01-23T01:07:53.740030054Z" level=info msg="RemovePodSandbox \"35ebc8da8b36de05764302aa5fcd8e9c1ad5d4050097fe2a0b837a39727669b4\" returns successfully" Jan 23 01:07:53.818546 kubelet[2020]: E0123 01:07:53.818421 2020 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 01:07:54.041265 containerd[1604]: time="2026-01-23T01:07:54.041129838Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:07:54.058944 containerd[1604]: time="2026-01-23T01:07:54.057813136Z" level=info msg="Container 5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:54.065426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767680929.mount: Deactivated successfully. 
Jan 23 01:07:54.070528 containerd[1604]: time="2026-01-23T01:07:54.070491833Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9\"" Jan 23 01:07:54.071232 containerd[1604]: time="2026-01-23T01:07:54.071208766Z" level=info msg="StartContainer for \"5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9\"" Jan 23 01:07:54.072222 containerd[1604]: time="2026-01-23T01:07:54.072192628Z" level=info msg="connecting to shim 5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9" address="unix:///run/containerd/s/1a45a4c5be7fba3185b149cecb3633d84ff2a9e7c4a1d19e9f71436c7af2b99d" protocol=ttrpc version=3 Jan 23 01:07:54.101089 systemd[1]: Started cri-containerd-5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9.scope - libcontainer container 5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9. Jan 23 01:07:54.146886 containerd[1604]: time="2026-01-23T01:07:54.146257100Z" level=info msg="StartContainer for \"5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9\" returns successfully" Jan 23 01:07:54.153250 systemd[1]: cri-containerd-5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9.scope: Deactivated successfully. Jan 23 01:07:54.157516 containerd[1604]: time="2026-01-23T01:07:54.157410639Z" level=info msg="received container exit event container_id:\"5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9\" id:\"5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9\" pid:3712 exited_at:{seconds:1769130474 nanos:156380380}" Jan 23 01:07:54.178201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b230a9e61169f9bf3d29cd40a645e0c0212b0c482012e176440d974cbf8cbb9-rootfs.mount: Deactivated successfully. 
Jan 23 01:07:54.729088 kubelet[2020]: E0123 01:07:54.729036 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:07:54.908884 containerd[1604]: time="2026-01-23T01:07:54.908293591Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:54.910054 containerd[1604]: time="2026-01-23T01:07:54.910023704Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 23 01:07:54.911764 containerd[1604]: time="2026-01-23T01:07:54.911744469Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:54.912818 containerd[1604]: time="2026-01-23T01:07:54.912798085Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.951791324s"
Jan 23 01:07:54.912914 containerd[1604]: time="2026-01-23T01:07:54.912901299Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 23 01:07:54.914584 containerd[1604]: time="2026-01-23T01:07:54.914562912Z" level=info msg="CreateContainer within sandbox \"dbe27ba0f2ad28793321c39c6f947dc0fad4db246c98ea750eabb4a56fcdf469\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 23 01:07:54.927192 containerd[1604]: time="2026-01-23T01:07:54.923563455Z" level=info msg="Container 15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:07:54.934670 containerd[1604]: time="2026-01-23T01:07:54.934628594Z" level=info msg="CreateContainer within sandbox \"dbe27ba0f2ad28793321c39c6f947dc0fad4db246c98ea750eabb4a56fcdf469\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648\""
Jan 23 01:07:54.935501 containerd[1604]: time="2026-01-23T01:07:54.935408465Z" level=info msg="StartContainer for \"15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648\""
Jan 23 01:07:54.936433 containerd[1604]: time="2026-01-23T01:07:54.936370238Z" level=info msg="connecting to shim 15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648" address="unix:///run/containerd/s/548631a857924be2a49a1251462915e7f725956e39e2b2f5ccb983356475d9fe" protocol=ttrpc version=3
Jan 23 01:07:54.963133 systemd[1]: Started cri-containerd-15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648.scope - libcontainer container 15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648.
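The pull records above report both the bytes read for the operator-generic image (18904197) and the wall-clock pull time (1.951791324s), so a rough effective pull rate falls out by simple division. A minimal Go sketch of that arithmetic (illustrative only; it ignores any layers that were already cached locally):

```go
package main

import "fmt"

func main() {
	// Figures quoted by containerd above for quay.io/cilium/operator-generic:v1.12.5.
	const bytesRead = 18904197      // "bytes read=18904197"
	const pullSeconds = 1.951791324 // "in 1.951791324s"

	rate := float64(bytesRead) / pullSeconds / (1024 * 1024)
	fmt.Printf("effective pull rate: %.2f MiB/s\n", rate) // ~9.24 MiB/s
}
```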
Jan 23 01:07:54.996016 containerd[1604]: time="2026-01-23T01:07:54.995598948Z" level=info msg="StartContainer for \"15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648\" returns successfully"
Jan 23 01:07:55.054370 containerd[1604]: time="2026-01-23T01:07:55.054239347Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 01:07:55.069517 kubelet[2020]: I0123 01:07:55.069126 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xdvnx" podStartSLOduration=1.116255556 podStartE2EDuration="3.069107037s" podCreationTimestamp="2026-01-23 01:07:52 +0000 UTC" firstStartedPulling="2026-01-23 01:07:52.96063927 +0000 UTC m=+60.571843327" lastFinishedPulling="2026-01-23 01:07:54.91349075 +0000 UTC m=+62.524694808" observedRunningTime="2026-01-23 01:07:55.052536879 +0000 UTC m=+62.663740958" watchObservedRunningTime="2026-01-23 01:07:55.069107037 +0000 UTC m=+62.680311102"
Jan 23 01:07:55.091085 containerd[1604]: time="2026-01-23T01:07:55.091042911Z" level=info msg="Container c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:07:55.103631 containerd[1604]: time="2026-01-23T01:07:55.103580596Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018\""
Jan 23 01:07:55.104159 containerd[1604]: time="2026-01-23T01:07:55.104133648Z" level=info msg="StartContainer for \"c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018\""
Jan 23 01:07:55.105421 containerd[1604]: time="2026-01-23T01:07:55.105388238Z" level=info msg="connecting to shim c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018" address="unix:///run/containerd/s/1a45a4c5be7fba3185b149cecb3633d84ff2a9e7c4a1d19e9f71436c7af2b99d" protocol=ttrpc version=3
Jan 23 01:07:55.125059 systemd[1]: Started cri-containerd-c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018.scope - libcontainer container c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018.
Jan 23 01:07:55.176821 systemd[1]: cri-containerd-c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018.scope: Deactivated successfully.
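The pod_startup_latency_tracker record for cilium-operator-6c4d7847fc-xdvnx quotes absolute timestamps alongside the derived durations, and those durations are simple differences of the quoted values (the monotonic m=+ suffixes are irrelevant here and dropped). A small Go sketch that reproduces the image-pull window and the end-to-end startup time from the record above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps quoted in the kubelet record above (monotonic suffixes removed).
	created := parse("2026-01-23 01:07:52 +0000 UTC")
	firstPull := parse("2026-01-23 01:07:52.96063927 +0000 UTC")
	lastPull := parse("2026-01-23 01:07:54.91349075 +0000 UTC")
	watched := parse("2026-01-23 01:07:55.069107037 +0000 UTC")

	fmt.Println("image pull window:", lastPull.Sub(firstPull)) // ~1.95285148s
	fmt.Println("end-to-end start: ", watched.Sub(created))    // 3.069107037s = podStartE2EDuration
}
```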
Jan 23 01:07:55.179506 containerd[1604]: time="2026-01-23T01:07:55.179452736Z" level=info msg="received container exit event container_id:\"c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018\" id:\"c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018\" pid:3806 exited_at:{seconds:1769130475 nanos:179291269}"
Jan 23 01:07:55.187161 containerd[1604]: time="2026-01-23T01:07:55.187120042Z" level=info msg="StartContainer for \"c8390c3cf5884e1324ee58c02642452c3c2c9a48add2ebe249d2518ca04e8018\" returns successfully"
Jan 23 01:07:55.620828 kubelet[2020]: I0123 01:07:55.620719 2020 setters.go:602] "Node became not ready" node="10.0.2.223" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T01:07:55Z","lastTransitionTime":"2026-01-23T01:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 01:07:55.730136 kubelet[2020]: E0123 01:07:55.730005 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:07:56.063303 containerd[1604]: time="2026-01-23T01:07:56.063201424Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 01:07:56.082062 containerd[1604]: time="2026-01-23T01:07:56.081980272Z" level=info msg="Container 108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:07:56.092972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413664830.mount: Deactivated successfully.
Jan 23 01:07:56.096784 containerd[1604]: time="2026-01-23T01:07:56.096721977Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6\""
Jan 23 01:07:56.097808 containerd[1604]: time="2026-01-23T01:07:56.097753185Z" level=info msg="StartContainer for \"108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6\""
Jan 23 01:07:56.099533 containerd[1604]: time="2026-01-23T01:07:56.099483370Z" level=info msg="connecting to shim 108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6" address="unix:///run/containerd/s/1a45a4c5be7fba3185b149cecb3633d84ff2a9e7c4a1d19e9f71436c7af2b99d" protocol=ttrpc version=3
Jan 23 01:07:56.128127 systemd[1]: Started cri-containerd-108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6.scope - libcontainer container 108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6.
Jan 23 01:07:56.158076 systemd[1]: cri-containerd-108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6.scope: Deactivated successfully.
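The "Node became not ready" record embeds the updated Ready condition as a JSON object. For pulling such conditions out of journal lines programmatically, a minimal Go sketch that unmarshals the exact object quoted above (the struct mirrors only the fields present in the log, not the full Kubernetes NodeCondition type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Subset of the node condition fields that appear in the kubelet record above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition JSON copied verbatim from the setters.go record above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T01:07:55Z","lastTransitionTime":"2026-01-23T01:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```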
Jan 23 01:07:56.159807 containerd[1604]: time="2026-01-23T01:07:56.159765997Z" level=info msg="received container exit event container_id:\"108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6\" id:\"108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6\" pid:3845 exited_at:{seconds:1769130476 nanos:159553036}"
Jan 23 01:07:56.171531 containerd[1604]: time="2026-01-23T01:07:56.171340283Z" level=info msg="StartContainer for \"108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6\" returns successfully"
Jan 23 01:07:56.184103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-108a5dcc819fa21f14682c8f9c7d9e85964b8169e51d971bdb79b7c0c45d17e6-rootfs.mount: Deactivated successfully.
Jan 23 01:07:56.730711 kubelet[2020]: E0123 01:07:56.730612 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:07:57.078654 containerd[1604]: time="2026-01-23T01:07:57.077754164Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 01:07:57.097925 containerd[1604]: time="2026-01-23T01:07:57.097138465Z" level=info msg="Container d0f7b7b94a2ccb6f14e6f6700f0fd14f586603b01caebbcd45aad0ffc9760347: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:07:57.110830 containerd[1604]: time="2026-01-23T01:07:57.110780619Z" level=info msg="CreateContainer within sandbox \"ea4f1d3f8eb0af3e0636db0ace6a78224dc5a08dc46ae5947c3d88147ffa4d12\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0f7b7b94a2ccb6f14e6f6700f0fd14f586603b01caebbcd45aad0ffc9760347\""
Jan 23 01:07:57.111792 containerd[1604]: time="2026-01-23T01:07:57.111757008Z" level=info msg="StartContainer for \"d0f7b7b94a2ccb6f14e6f6700f0fd14f586603b01caebbcd45aad0ffc9760347\""
Jan 23 01:07:57.114309 containerd[1604]: time="2026-01-23T01:07:57.114268123Z" level=info msg="connecting to shim d0f7b7b94a2ccb6f14e6f6700f0fd14f586603b01caebbcd45aad0ffc9760347" address="unix:///run/containerd/s/1a45a4c5be7fba3185b149cecb3633d84ff2a9e7c4a1d19e9f71436c7af2b99d" protocol=ttrpc version=3
Jan 23 01:07:57.146145 systemd[1]: Started cri-containerd-d0f7b7b94a2ccb6f14e6f6700f0fd14f586603b01caebbcd45aad0ffc9760347.scope - libcontainer container d0f7b7b94a2ccb6f14e6f6700f0fd14f586603b01caebbcd45aad0ffc9760347.
Jan 23 01:07:57.194529 containerd[1604]: time="2026-01-23T01:07:57.194474252Z" level=info msg="StartContainer for \"d0f7b7b94a2ccb6f14e6f6700f0fd14f586603b01caebbcd45aad0ffc9760347\" returns successfully"
Jan 23 01:07:57.474920 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256))
Jan 23 01:07:57.731877 kubelet[2020]: E0123 01:07:57.731665 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:07:58.732448 kubelet[2020]: E0123 01:07:58.732341 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:07:59.733291 kubelet[2020]: E0123 01:07:59.733232 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:00.499133 systemd-networkd[1489]: lxc_health: Link UP
Jan 23 01:08:00.501201 systemd-networkd[1489]: lxc_health: Gained carrier
Jan 23 01:08:00.733998 kubelet[2020]: E0123 01:08:00.733947 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:00.825594 kubelet[2020]: I0123 01:08:00.825442 2020 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ct5ph" podStartSLOduration=8.825424665 podStartE2EDuration="8.825424665s" podCreationTimestamp="2026-01-23 01:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:58.092764321 +0000 UTC m=+65.703968462" watchObservedRunningTime="2026-01-23 01:08:00.825424665 +0000 UTC m=+68.436628741"
Jan 23 01:08:01.734176 kubelet[2020]: E0123 01:08:01.734107 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:02.420151 systemd-networkd[1489]: lxc_health: Gained IPv6LL
Jan 23 01:08:02.735038 kubelet[2020]: E0123 01:08:02.734967 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:03.735501 kubelet[2020]: E0123 01:08:03.735406 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:04.735933 kubelet[2020]: E0123 01:08:04.735796 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:05.736171 kubelet[2020]: E0123 01:08:05.736059 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:06.737181 kubelet[2020]: E0123 01:08:06.737112 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:07.738159 kubelet[2020]: E0123 01:08:07.738077 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:08.738636 kubelet[2020]: E0123 01:08:08.738583 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:09.739442 kubelet[2020]: E0123 01:08:09.739369 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:10.740125 kubelet[2020]: E0123 01:08:10.740029 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:11.740879 kubelet[2020]: E0123 01:08:11.740799 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:12.741423 kubelet[2020]: E0123 01:08:12.741317 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:13.684927 kubelet[2020]: E0123 01:08:13.684784 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:13.742329 kubelet[2020]: E0123 01:08:13.742234 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:14.742830 kubelet[2020]: E0123 01:08:14.742777 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:15.743473 kubelet[2020]: E0123 01:08:15.743394 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:16.744512 kubelet[2020]: E0123 01:08:16.744445 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:17.745688 kubelet[2020]: E0123 01:08:17.745579 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:18.746741 kubelet[2020]: E0123 01:08:18.746618 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:19.746933 kubelet[2020]: E0123 01:08:19.746883 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:20.748193 kubelet[2020]: E0123 01:08:20.748068 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:21.749038 kubelet[2020]: E0123 01:08:21.748932 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:22.749495 kubelet[2020]: E0123 01:08:22.749409 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:23.750146 kubelet[2020]: E0123 01:08:23.750047 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:24.751299 kubelet[2020]: E0123 01:08:24.751214 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:25.751788 kubelet[2020]: E0123 01:08:25.751741 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:26.752079 kubelet[2020]: E0123 01:08:26.751969 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:27.753347 kubelet[2020]: E0123 01:08:27.753248 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:28.754650 kubelet[2020]: E0123 01:08:28.754532 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:29.755276 kubelet[2020]: E0123 01:08:29.755162 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:30.755806 kubelet[2020]: E0123 01:08:30.755717 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:31.756728 kubelet[2020]: E0123 01:08:31.756663 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:32.308736 kubelet[2020]: E0123 01:08:32.308601 2020 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.2.217:59438->10.0.2.208:2379: read: connection timed out"
Jan 23 01:08:32.757297 kubelet[2020]: E0123 01:08:32.757205 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:33.107705 systemd[1]: cri-containerd-15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648.scope: Deactivated successfully.
Jan 23 01:08:33.110833 containerd[1604]: time="2026-01-23T01:08:33.110769204Z" level=info msg="received container exit event container_id:\"15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648\" id:\"15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648\" pid:3773 exit_status:1 exited_at:{seconds:1769130513 nanos:110475890}"
Jan 23 01:08:33.144128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648-rootfs.mount: Deactivated successfully.
Jan 23 01:08:33.179331 kubelet[2020]: I0123 01:08:33.179235 2020 scope.go:117] "RemoveContainer" containerID="15152ae2133a8696f0f9de8447ae74f266e732b0e42ae3c51fcd65758985e648"
Jan 23 01:08:33.182150 containerd[1604]: time="2026-01-23T01:08:33.182079986Z" level=info msg="CreateContainer within sandbox \"dbe27ba0f2ad28793321c39c6f947dc0fad4db246c98ea750eabb4a56fcdf469\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Jan 23 01:08:33.199016 containerd[1604]: time="2026-01-23T01:08:33.198498007Z" level=info msg="Container b8cbcd3b0a6262dc990f92a2d621534bc0a91d52c9c56d73d61fb15b522bbe53: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:08:33.216320 containerd[1604]: time="2026-01-23T01:08:33.216269913Z" level=info msg="CreateContainer within sandbox \"dbe27ba0f2ad28793321c39c6f947dc0fad4db246c98ea750eabb4a56fcdf469\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"b8cbcd3b0a6262dc990f92a2d621534bc0a91d52c9c56d73d61fb15b522bbe53\""
Jan 23 01:08:33.217142 containerd[1604]: time="2026-01-23T01:08:33.217087202Z" level=info msg="StartContainer for \"b8cbcd3b0a6262dc990f92a2d621534bc0a91d52c9c56d73d61fb15b522bbe53\""
Jan 23 01:08:33.218434 containerd[1604]: time="2026-01-23T01:08:33.218312408Z" level=info msg="connecting to shim b8cbcd3b0a6262dc990f92a2d621534bc0a91d52c9c56d73d61fb15b522bbe53" address="unix:///run/containerd/s/548631a857924be2a49a1251462915e7f725956e39e2b2f5ccb983356475d9fe" protocol=ttrpc version=3
Jan 23 01:08:33.246079 systemd[1]: Started cri-containerd-b8cbcd3b0a6262dc990f92a2d621534bc0a91d52c9c56d73d61fb15b522bbe53.scope - libcontainer container b8cbcd3b0a6262dc990f92a2d621534bc0a91d52c9c56d73d61fb15b522bbe53.
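The sequence above shows the cilium-operator container exiting with exit_status:1, kubelet logging "RemoveContainer", and containerd recreating it as Attempt:1, i.e. a container restart. One way to confirm such restarts from the API side is to read the pod's container statuses; a hedged client-go sketch, assuming a reachable API server and a kubeconfig at a placeholder path:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the environment at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Pod name taken from the log; the operator was restarted after exit_status:1.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"cilium-operator-6c4d7847fc-xdvnx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("%s restarts=%d\n", st.Name, st.RestartCount)
		if t := st.LastTerminationState.Terminated; t != nil {
			fmt.Printf("  last exit code=%d finished at %s\n", t.ExitCode, t.FinishedAt)
		}
	}
}
```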
Jan 23 01:08:33.287121 containerd[1604]: time="2026-01-23T01:08:33.287060292Z" level=info msg="StartContainer for \"b8cbcd3b0a6262dc990f92a2d621534bc0a91d52c9c56d73d61fb15b522bbe53\" returns successfully"
Jan 23 01:08:33.685263 kubelet[2020]: E0123 01:08:33.685155 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:33.758502 kubelet[2020]: E0123 01:08:33.758393 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:34.424965 kubelet[2020]: E0123 01:08:34.419792 2020 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.2.217:59254->10.0.2.208:2379: read: connection timed out" event="&Event{ObjectMeta:{cilium-operator-6c4d7847fc-xdvnx.188d36d4176c137f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:cilium-operator-6c4d7847fc-xdvnx,UID:e50b42ed-313c-4ebf-93e3-91ba7a3c4786,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{cilium-operator},},Reason:Pulled,Message:Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine,Source:EventSource{Component:kubelet,Host:10.0.2.223,},FirstTimestamp:2026-01-23 01:08:33.180398463 +0000 UTC m=+100.791602542,LastTimestamp:2026-01-23 01:08:33.180398463 +0000 UTC m=+100.791602542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.2.223,}"
Jan 23 01:08:34.759170 kubelet[2020]: E0123 01:08:34.759125 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:35.759605 kubelet[2020]: E0123 01:08:35.759524 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:36.330209 kubelet[2020]: E0123 01:08:36.330099 2020 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T01:08:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T01:08:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T01:08:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T01:08:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":63836358},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\\\",\\\"registry.k8s.io/kube-proxy:v1.32.11\\\"],\\\"sizeBytes\\\":31160918},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"10.0.2.223\": Patch \"https://10.0.2.217:6443/api/v1/nodes/10.0.2.223/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 01:08:36.760804 kubelet[2020]: E0123 01:08:36.760704 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:36.787746 kubelet[2020]: E0123 01:08:36.787462 2020 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.2.223\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.2.217:59364->10.0.2.208:2379: read: connection timed out"
Jan 23 01:08:37.761670 kubelet[2020]: E0123 01:08:37.761538 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:38.763430 kubelet[2020]: E0123 01:08:38.763295 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:39.764985 kubelet[2020]: E0123 01:08:39.764885 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:40.765170 kubelet[2020]: E0123 01:08:40.765114 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:41.766254 kubelet[2020]: E0123 01:08:41.766169 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:42.309945 kubelet[2020]: E0123 01:08:42.309836 2020 controller.go:195] "Failed to update lease" err="Put \"https://10.0.2.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.2.223?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 01:08:42.418917 kubelet[2020]: I0123 01:08:42.418791 2020 status_manager.go:890] "Failed to get status for pod" podUID="e50b42ed-313c-4ebf-93e3-91ba7a3c4786" pod="kube-system/cilium-operator-6c4d7847fc-xdvnx" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.2.217:59378->10.0.2.208:2379: read: connection timed out"
Jan 23 01:08:42.766740 kubelet[2020]: E0123 01:08:42.766626 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:43.768004 kubelet[2020]: E0123 01:08:43.767808 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:44.769008 kubelet[2020]: E0123 01:08:44.768933 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:45.769382 kubelet[2020]: E0123 01:08:45.769294 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:46.770067 kubelet[2020]: E0123 01:08:46.769947 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:46.788579 kubelet[2020]: E0123 01:08:46.788499 2020 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.2.223\": Get \"https://10.0.2.217:6443/api/v1/nodes/10.0.2.223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 01:08:47.770849 kubelet[2020]: E0123 01:08:47.770755 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:48.771445 kubelet[2020]: E0123 01:08:48.771340 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:49.772222 kubelet[2020]: E0123 01:08:49.772154 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:50.773023 kubelet[2020]: E0123 01:08:50.772913 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:51.773675 kubelet[2020]: E0123 01:08:51.773614 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:52.310596 kubelet[2020]: E0123 01:08:52.310465 2020 controller.go:195] "Failed to update lease" err="Put \"https://10.0.2.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.2.223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 01:08:52.774279 kubelet[2020]: E0123 01:08:52.774095 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:53.685523 kubelet[2020]: E0123 01:08:53.685416 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:53.775118 kubelet[2020]: E0123 01:08:53.775037 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:54.775276 kubelet[2020]: E0123 01:08:54.775189 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:55.775702 kubelet[2020]: E0123 01:08:55.775586 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:56.776218 kubelet[2020]: E0123 01:08:56.776130 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:56.789752 kubelet[2020]: E0123 01:08:56.789554 2020 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.2.223\": Get \"https://10.0.2.217:6443/api/v1/nodes/10.0.2.223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 01:08:57.776610 kubelet[2020]: E0123 01:08:57.776415 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:58.777071 kubelet[2020]: E0123 01:08:58.776969 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:08:59.777384 kubelet[2020]: E0123 01:08:59.777282 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:00.778146 kubelet[2020]: E0123 01:09:00.778055 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:01.778592 kubelet[2020]: E0123 01:09:01.778519 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:02.311984 kubelet[2020]: E0123 01:09:02.311706 2020 controller.go:195] "Failed to update lease" err="Put \"https://10.0.2.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.2.223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 01:09:02.779762 kubelet[2020]: E0123 01:09:02.779654 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:03.780647 kubelet[2020]: E0123 01:09:03.780496 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:04.781492 kubelet[2020]: E0123 01:09:04.781428 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:05.782426 kubelet[2020]: E0123 01:09:05.782357 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:06.782862 kubelet[2020]: E0123 01:09:06.782713 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:06.790583 kubelet[2020]: E0123 01:09:06.790457 2020 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.2.223\": Get \"https://10.0.2.217:6443/api/v1/nodes/10.0.2.223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 01:09:06.790583 kubelet[2020]: E0123 01:09:06.790512 2020 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count"
Jan 23 01:09:07.783197 kubelet[2020]: E0123 01:09:07.783069 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:08.784309 kubelet[2020]: E0123 01:09:08.784217 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:09.785603 kubelet[2020]: E0123 01:09:09.785451 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:09:10.786152 kubelet[2020]: E0123 01:09:10.786082 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
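The repeated "Failed to update lease" and "Error updating node status" records point at one symptom: the kubelet's requests to the API server (and the etcd endpoint behind it) time out, so the node Lease in kube-node-lease stops being renewed and status updates exhaust their retries. A hedged client-go sketch for checking how stale that Lease is, with the node name and namespace taken from the failing Put URL and a placeholder kubeconfig path:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the environment at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Namespace and name from the URL in the failing lease updates:
	// .../namespaces/kube-node-lease/leases/10.0.2.223
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(),
		"10.0.2.223", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.RenewTime != nil {
		fmt.Printf("lease last renewed %s ago\n",
			time.Since(lease.Spec.RenewTime.Time).Round(time.Second))
	}
}
```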