Jan 23 18:58:04.881490 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 18:58:04.881533 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:58:04.881548 kernel: BIOS-provided physical RAM map:
Jan 23 18:58:04.881558 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:58:04.881568 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 18:58:04.881577 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 18:58:04.881591 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 18:58:04.881601 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 18:58:04.881610 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 18:58:04.881619 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 18:58:04.881629 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable
Jan 23 18:58:04.881638 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Jan 23 18:58:04.881648 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Jan 23 18:58:04.881657 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Jan 23 18:58:04.881671 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Jan 23 18:58:04.881682 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 18:58:04.881692 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 18:58:04.881702 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 18:58:04.881712 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable
Jan 23 18:58:04.881722 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Jan 23 18:58:04.881732 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Jan 23 18:58:04.881744 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable
Jan 23 18:58:04.881754 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Jan 23 18:58:04.881764 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Jan 23 18:58:04.881774 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 18:58:04.881783 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:58:04.881793 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 18:58:04.881803 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 23 18:58:04.881813 kernel: NX (Execute Disable) protection: active
Jan 23 18:58:04.881823 kernel: APIC: Static calls initialized
Jan 23 18:58:04.881833 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable
Jan 23 18:58:04.881843 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable
Jan 23 18:58:04.881853 kernel: extended physical RAM map:
Jan 23 18:58:04.881866 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:58:04.881876 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 18:58:04.881886 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 18:58:04.881896 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 18:58:04.881906 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 18:58:04.881916 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 18:58:04.881926 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 18:58:04.881941 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable
Jan 23 18:58:04.881954 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable
Jan 23 18:58:04.881965 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable
Jan 23 18:58:04.881975 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable
Jan 23 18:58:04.881986 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable
Jan 23 18:58:04.881996 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Jan 23 18:58:04.882007 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Jan 23 18:58:04.882017 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Jan 23 18:58:04.882030 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Jan 23 18:58:04.882040 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 18:58:04.882051 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 18:58:04.882062 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 18:58:04.882072 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable
Jan 23 18:58:04.882083 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Jan 23 18:58:04.882093 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Jan 23 18:58:04.882138 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable
Jan 23 18:58:04.882149 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Jan 23 18:58:04.882159 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Jan 23 18:58:04.882170 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 18:58:04.882183 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:58:04.882194 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 18:58:04.882219 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 23 18:58:04.882230 kernel: efi: EFI v2.7 by EDK II
Jan 23 18:58:04.882240 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018
Jan 23 18:58:04.882251 kernel: random: crng init done
Jan 23 18:58:04.882262 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 18:58:04.882272 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 18:58:04.882283 kernel: secureboot: Secure boot disabled
Jan 23 18:58:04.882293 kernel: SMBIOS 2.8 present.
Jan 23 18:58:04.882304 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 23 18:58:04.882315 kernel: DMI: Memory slots populated: 1/1
Jan 23 18:58:04.882327 kernel: Hypervisor detected: KVM
Jan 23 18:58:04.882338 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Jan 23 18:58:04.882348 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 18:58:04.882359 kernel: kvm-clock: using sched offset of 6849321489 cycles
Jan 23 18:58:04.882370 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 18:58:04.882382 kernel: tsc: Detected 2294.594 MHz processor
Jan 23 18:58:04.882393 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 18:58:04.882404 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 18:58:04.882415 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000
Jan 23 18:58:04.882426 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 18:58:04.882440 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 18:58:04.882451 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Jan 23 18:58:04.882461 kernel: Using GB pages for direct mapping
Jan 23 18:58:04.882472 kernel: ACPI: Early table checksum verification disabled
Jan 23 18:58:04.882484 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 23 18:58:04.882495 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Jan 23 18:58:04.882506 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:58:04.882517 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:58:04.882527 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 23 18:58:04.882541 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:58:04.882552 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:58:04.882563 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:58:04.882573 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 18:58:04.882584 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3]
Jan 23 18:58:04.882595 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b]
Jan 23 18:58:04.882606 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 23 18:58:04.882617 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f]
Jan 23 18:58:04.882628 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b]
Jan 23 18:58:04.882640 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027]
Jan 23 18:58:04.882652 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037]
Jan 23 18:58:04.882662 kernel: No NUMA configuration found
Jan 23 18:58:04.882674 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 23 18:58:04.882684 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jan 23 18:58:04.882695 kernel: Zone ranges:
Jan 23 18:58:04.882706 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 18:58:04.882717 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 18:58:04.882728 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 18:58:04.882741 kernel: Device empty
Jan 23 18:58:04.882752 kernel: Movable zone start for each node
Jan 23 18:58:04.882762 kernel: Early memory node ranges
Jan 23 18:58:04.882772 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 18:58:04.882783 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 23 18:58:04.882793 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 23 18:58:04.882803 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 23 18:58:04.882813 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff]
Jan 23 18:58:04.882823 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff]
Jan 23 18:58:04.882834 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff]
Jan 23 18:58:04.882854 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff]
Jan 23 18:58:04.882865 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff]
Jan 23 18:58:04.882876 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 18:58:04.882889 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 23 18:58:04.882901 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:58:04.882912 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 18:58:04.882923 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 23 18:58:04.882934 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:58:04.882947 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 23 18:58:04.882959 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 18:58:04.882970 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges
Jan 23 18:58:04.882981 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 18:58:04.882992 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 23 18:58:04.883003 kernel: On node 0, zone Normal: 276 pages in unavailable ranges
Jan 23 18:58:04.883014 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 18:58:04.883026 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 18:58:04.883037 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 18:58:04.883051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 18:58:04.883062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 18:58:04.883073 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 18:58:04.883084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 18:58:04.883095 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 18:58:04.883121 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 18:58:04.883132 kernel: TSC deadline timer available
Jan 23 18:58:04.883143 kernel: CPU topo: Max. logical packages: 2
Jan 23 18:58:04.883154 kernel: CPU topo: Max. logical dies: 2
Jan 23 18:58:04.883168 kernel: CPU topo: Max. dies per package: 1
Jan 23 18:58:04.883179 kernel: CPU topo: Max. threads per core: 1
Jan 23 18:58:04.883190 kernel: CPU topo: Num. cores per package: 1
Jan 23 18:58:04.883201 kernel: CPU topo: Num. threads per package: 1
Jan 23 18:58:04.883213 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 18:58:04.883223 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 18:58:04.883235 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 18:58:04.883246 kernel: kvm-guest: setup PV sched yield
Jan 23 18:58:04.883257 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 23 18:58:04.883270 kernel: Booting paravirtualized kernel on KVM
Jan 23 18:58:04.883282 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 18:58:04.883294 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 18:58:04.883305 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 18:58:04.883316 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 18:58:04.883327 kernel: pcpu-alloc: [0] 0 1
Jan 23 18:58:04.883339 kernel: kvm-guest: PV spinlocks enabled
Jan 23 18:58:04.883350 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 18:58:04.883363 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:58:04.883377 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 18:58:04.883388 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 18:58:04.883400 kernel: Fallback order for Node 0: 0
Jan 23 18:58:04.883411 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694
Jan 23 18:58:04.883422 kernel: Policy zone: Normal
Jan 23 18:58:04.883433 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:58:04.883445 kernel: software IO TLB: area num 2.
Jan 23 18:58:04.883456 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 18:58:04.883469 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:58:04.883481 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:58:04.883492 kernel: Dynamic Preempt: voluntary
Jan 23 18:58:04.883503 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:58:04.883515 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:58:04.883527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 18:58:04.883539 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:58:04.883550 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:58:04.883561 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:58:04.883572 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:58:04.883586 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 18:58:04.883598 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:58:04.883609 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:58:04.883621 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:58:04.883632 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 18:58:04.883643 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:58:04.883654 kernel: Console: colour dummy device 80x25
Jan 23 18:58:04.883666 kernel: printk: legacy console [tty0] enabled
Jan 23 18:58:04.883677 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:58:04.883690 kernel: ACPI: Core revision 20240827
Jan 23 18:58:04.883702 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:58:04.883713 kernel: x2apic enabled
Jan 23 18:58:04.883724 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:58:04.883736 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 18:58:04.883747 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 18:58:04.883758 kernel: kvm-guest: setup PV IPIs
Jan 23 18:58:04.883770 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134287020, max_idle_ns: 440795320515 ns
Jan 23 18:58:04.883781 kernel: Calibrating delay loop (skipped) preset value.. 4589.18 BogoMIPS (lpj=2294594)
Jan 23 18:58:04.883795 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:58:04.883806 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 18:58:04.883817 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 18:58:04.883828 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:58:04.883839 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jan 23 18:58:04.883849 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 23 18:58:04.883860 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 23 18:58:04.883872 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 18:58:04.883883 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 18:58:04.883893 kernel: TAA: Mitigation: Clear CPU buffers
Jan 23 18:58:04.883906 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 23 18:58:04.883917 kernel: active return thunk: its_return_thunk
Jan 23 18:58:04.883928 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 18:58:04.883939 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:58:04.883950 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:58:04.883961 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:58:04.883972 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 18:58:04.883983 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 18:58:04.883994 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 18:58:04.884005 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 18:58:04.884015 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 18:58:04.884028 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 18:58:04.884039 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 18:58:04.884050 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 18:58:04.884061 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 23 18:58:04.884071 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 23 18:58:04.884082 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:58:04.884093 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:58:04.884124 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:58:04.884135 kernel: landlock: Up and running.
Jan 23 18:58:04.884146 kernel: SELinux: Initializing.
Jan 23 18:58:04.884157 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:58:04.884168 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:58:04.884182 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Jan 23 18:58:04.884194 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver.
Jan 23 18:58:04.884205 kernel: ... version: 2
Jan 23 18:58:04.884216 kernel: ... bit width: 48
Jan 23 18:58:04.884228 kernel: ... generic registers: 8
Jan 23 18:58:04.884239 kernel: ... value mask: 0000ffffffffffff
Jan 23 18:58:04.884250 kernel: ... max period: 00007fffffffffff
Jan 23 18:58:04.884261 kernel: ... fixed-purpose events: 3
Jan 23 18:58:04.884273 kernel: ... event mask: 00000007000000ff
Jan 23 18:58:04.884286 kernel: signal: max sigframe size: 3632
Jan 23 18:58:04.884297 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:58:04.884308 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 18:58:04.884320 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:58:04.884331 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:58:04.884343 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:58:04.884354 kernel: .... node #0, CPUs: #1
Jan 23 18:58:04.884365 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 18:58:04.884377 kernel: smpboot: Total of 2 processors activated (9178.37 BogoMIPS)
Jan 23 18:58:04.884391 kernel: Memory: 3945192K/4186776K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 236704K reserved, 0K cma-reserved)
Jan 23 18:58:04.884402 kernel: devtmpfs: initialized
Jan 23 18:58:04.884414 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:58:04.884425 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 23 18:58:04.884436 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 23 18:58:04.884448 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 23 18:58:04.884459 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 23 18:58:04.884470 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes)
Jan 23 18:58:04.884482 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes)
Jan 23 18:58:04.884496 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:58:04.884507 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 18:58:04.884518 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:58:04.884529 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:58:04.884541 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:58:04.884552 kernel: audit: type=2000 audit(1769194680.905:1): state=initialized audit_enabled=0 res=1
Jan 23 18:58:04.884563 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:58:04.884574 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:58:04.884585 kernel: cpuidle: using governor menu
Jan 23 18:58:04.884599 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:58:04.884610 kernel: dca service started, version 1.12.1
Jan 23 18:58:04.884622 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 23 18:58:04.884633 kernel: PCI: Using configuration type 1 for base access
Jan 23 18:58:04.884645 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:58:04.884656 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 18:58:04.884668 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 18:58:04.884679 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 18:58:04.884690 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 18:58:04.884704 kernel: ACPI: Added _OSI(Module Device)
Jan 23 18:58:04.884715 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 18:58:04.884726 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 18:58:04.884738 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 18:58:04.884749 kernel: ACPI: Interpreter enabled
Jan 23 18:58:04.884760 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 18:58:04.884772 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 18:58:04.884783 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 18:58:04.884794 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 18:58:04.884808 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 18:58:04.884819 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 18:58:04.885004 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 18:58:04.885134 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 18:58:04.885242 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 18:58:04.885256 kernel: PCI host bridge to bus 0000:00
Jan 23 18:58:04.885360 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 18:58:04.885461 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 18:58:04.885555 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 18:58:04.885648 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 23 18:58:04.885740 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 18:58:04.885832 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window]
Jan 23 18:58:04.885925 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 18:58:04.886047 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 18:58:04.886827 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 23 18:58:04.886944 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Jan 23 18:58:04.887047 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref]
Jan 23 18:58:04.887167 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff]
Jan 23 18:58:04.887269 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 18:58:04.887369 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 18:58:04.887482 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.887585 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff]
Jan 23 18:58:04.887685 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 18:58:04.887786 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff]
Jan 23 18:58:04.887885 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff]
Jan 23 18:58:04.887984 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Jan 23 18:58:04.888092 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.888212 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff]
Jan 23 18:58:04.888312 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 18:58:04.888411 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff]
Jan 23 18:58:04.888510 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref]
Jan 23 18:58:04.888618 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.888718 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff]
Jan 23 18:58:04.888820 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 18:58:04.888919 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff]
Jan 23 18:58:04.889017 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref]
Jan 23 18:58:04.889324 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.889433 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff]
Jan 23 18:58:04.889533 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 18:58:04.889633 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff]
Jan 23 18:58:04.889732 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref]
Jan 23 18:58:04.889841 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.889943 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff]
Jan 23 18:58:04.890043 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 18:58:04.890170 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff]
Jan 23 18:58:04.890285 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref]
Jan 23 18:58:04.890392 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.890485 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff]
Jan 23 18:58:04.890579 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 18:58:04.890668 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff]
Jan 23 18:58:04.890756 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref]
Jan 23 18:58:04.890852 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.890942 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff]
Jan 23 18:58:04.891032 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 18:58:04.891138 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff]
Jan 23 18:58:04.891229 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref]
Jan 23 18:58:04.891329 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.891420 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff]
Jan 23 18:58:04.891509 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 18:58:04.891599 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff]
Jan 23 18:58:04.891687 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref]
Jan 23 18:58:04.891788 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.891878 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff]
Jan 23 18:58:04.891967 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Jan 23 18:58:04.892057 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff]
Jan 23 18:58:04.892155 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref]
Jan 23 18:58:04.892250 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.892357 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff]
Jan 23 18:58:04.892452 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Jan 23 18:58:04.892541 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff]
Jan 23 18:58:04.892630 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref]
Jan 23 18:58:04.892727 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.892816 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff]
Jan 23 18:58:04.892906 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Jan 23 18:58:04.892995 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff]
Jan 23 18:58:04.893087 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref]
Jan 23 18:58:04.893193 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.893283 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff]
Jan 23 18:58:04.893375 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Jan 23 18:58:04.893464 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff]
Jan 23 18:58:04.893556 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref]
Jan 23 18:58:04.893653 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.893743 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff]
Jan 23 18:58:04.893833 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Jan 23 18:58:04.893923 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff]
Jan 23 18:58:04.894012 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref]
Jan 23 18:58:04.894118 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.896003 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff]
Jan 23 18:58:04.896093 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Jan 23 18:58:04.896232 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff]
Jan 23 18:58:04.896318 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref]
Jan 23 18:58:04.896411 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.896497 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff]
Jan 23 18:58:04.896585 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Jan 23 18:58:04.896676 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff]
Jan 23 18:58:04.896761 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref]
Jan 23 18:58:04.896852 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.896939 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff]
Jan 23 18:58:04.897024 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Jan 23 18:58:04.898623 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff]
Jan 23 18:58:04.898734 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref]
Jan 23 18:58:04.898827 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.898916 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff]
Jan 23 18:58:04.898999 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Jan 23 18:58:04.899081 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff]
Jan 23 18:58:04.899177 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref]
Jan 23 18:58:04.899266 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.899350 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff]
Jan 23 18:58:04.899436 kernel: pci 0000:00:04.1: PCI bridge to [bus 13]
Jan 23 18:58:04.899517 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff]
Jan 23 18:58:04.899599 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref]
Jan 23 18:58:04.899699 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:58:04.899782 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff]
Jan 23 18:58:04.899864 kernel: pci 0000:00:04.2: PCI bridge to [bus 14]
Jan 23 18:58:04.899946 kernel: pci 0000:00:04.2: bridge window [mem
0x81c00000-0x81dfffff] Jan 23 18:58:04.900029 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 18:58:04.900142 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.901223 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 23 18:58:04.901311 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 18:58:04.901394 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 23 18:58:04.901476 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 18:58:04.901565 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.901648 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 23 18:58:04.901735 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 18:58:04.901818 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 23 18:58:04.901899 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 18:58:04.901988 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.902071 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 23 18:58:04.902168 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 18:58:04.902265 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 23 18:58:04.902352 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 18:58:04.902437 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.902519 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 23 18:58:04.902600 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 18:58:04.902682 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 23 18:58:04.902761 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 18:58:04.902846 kernel: pci 
0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.902926 kernel: pci 0000:00:04.7: BAR 0 [mem 0x84386000-0x84386fff] Jan 23 18:58:04.903006 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 18:58:04.903085 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 23 18:58:04.905216 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 18:58:04.905381 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.905461 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 23 18:58:04.905541 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 18:58:04.905620 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 23 18:58:04.905698 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 18:58:04.907657 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.907751 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 23 18:58:04.907833 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 18:58:04.907907 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 23 18:58:04.907979 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 18:58:04.908059 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.908144 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 23 18:58:04.908218 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 18:58:04.908290 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 18:58:04.908365 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 18:58:04.908444 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.908518 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 23 18:58:04.908590 kernel: 
pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 18:58:04.908661 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 18:58:04.908733 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 18:58:04.908811 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:58:04.908886 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 23 18:58:04.908958 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 18:58:04.909029 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 23 18:58:04.911137 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 18:58:04.911262 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 18:58:04.911343 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 18:58:04.911426 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 18:58:04.911505 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 23 18:58:04.911579 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 23 18:58:04.911660 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 18:58:04.911734 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 23 18:58:04.911817 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 23 18:58:04.911896 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 23 18:58:04.911972 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 18:58:04.912050 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 23 18:58:04.912136 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 23 18:58:04.912211 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:58:04.912287 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 
18:58:04.912374 kernel: pci_bus 0000:02: extended config space not accessible Jan 23 18:58:04.912386 kernel: acpiphp: Slot [1] registered Jan 23 18:58:04.912395 kernel: acpiphp: Slot [0] registered Jan 23 18:58:04.912403 kernel: acpiphp: Slot [2] registered Jan 23 18:58:04.912413 kernel: acpiphp: Slot [3] registered Jan 23 18:58:04.912421 kernel: acpiphp: Slot [4] registered Jan 23 18:58:04.912429 kernel: acpiphp: Slot [5] registered Jan 23 18:58:04.912437 kernel: acpiphp: Slot [6] registered Jan 23 18:58:04.912446 kernel: acpiphp: Slot [7] registered Jan 23 18:58:04.912454 kernel: acpiphp: Slot [8] registered Jan 23 18:58:04.912462 kernel: acpiphp: Slot [9] registered Jan 23 18:58:04.912470 kernel: acpiphp: Slot [10] registered Jan 23 18:58:04.912478 kernel: acpiphp: Slot [11] registered Jan 23 18:58:04.912488 kernel: acpiphp: Slot [12] registered Jan 23 18:58:04.912496 kernel: acpiphp: Slot [13] registered Jan 23 18:58:04.912505 kernel: acpiphp: Slot [14] registered Jan 23 18:58:04.912513 kernel: acpiphp: Slot [15] registered Jan 23 18:58:04.912521 kernel: acpiphp: Slot [16] registered Jan 23 18:58:04.912529 kernel: acpiphp: Slot [17] registered Jan 23 18:58:04.912537 kernel: acpiphp: Slot [18] registered Jan 23 18:58:04.912545 kernel: acpiphp: Slot [19] registered Jan 23 18:58:04.912553 kernel: acpiphp: Slot [20] registered Jan 23 18:58:04.912561 kernel: acpiphp: Slot [21] registered Jan 23 18:58:04.912572 kernel: acpiphp: Slot [22] registered Jan 23 18:58:04.912580 kernel: acpiphp: Slot [23] registered Jan 23 18:58:04.912589 kernel: acpiphp: Slot [24] registered Jan 23 18:58:04.912597 kernel: acpiphp: Slot [25] registered Jan 23 18:58:04.912605 kernel: acpiphp: Slot [26] registered Jan 23 18:58:04.912613 kernel: acpiphp: Slot [27] registered Jan 23 18:58:04.912621 kernel: acpiphp: Slot [28] registered Jan 23 18:58:04.912629 kernel: acpiphp: Slot [29] registered Jan 23 18:58:04.912637 kernel: acpiphp: Slot [30] registered Jan 23 18:58:04.912647 kernel: acpiphp: 
Slot [31] registered Jan 23 18:58:04.912730 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 23 18:58:04.912810 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 23 18:58:04.912917 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 18:58:04.912928 kernel: acpiphp: Slot [0-2] registered Jan 23 18:58:04.913034 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 23 18:58:04.913126 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 23 18:58:04.913204 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 23 18:58:04.913284 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 23 18:58:04.913358 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 18:58:04.913369 kernel: acpiphp: Slot [0-3] registered Jan 23 18:58:04.913449 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 23 18:58:04.913526 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 23 18:58:04.913601 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 23 18:58:04.913674 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 18:58:04.913687 kernel: acpiphp: Slot [0-4] registered Jan 23 18:58:04.913767 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 23 18:58:04.913845 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 23 18:58:04.913919 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 18:58:04.913930 kernel: acpiphp: Slot [0-5] registered Jan 23 18:58:04.914020 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 23 18:58:04.914184 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 23 18:58:04.916260 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 23 18:58:04.916343 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 18:58:04.916356 kernel: acpiphp: Slot [0-6] 
registered Jan 23 18:58:04.916432 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 18:58:04.916443 kernel: acpiphp: Slot [0-7] registered Jan 23 18:58:04.916516 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 18:58:04.916527 kernel: acpiphp: Slot [0-8] registered Jan 23 18:58:04.916606 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 18:58:04.916617 kernel: acpiphp: Slot [0-9] registered Jan 23 18:58:04.916693 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 18:58:04.916703 kernel: acpiphp: Slot [0-10] registered Jan 23 18:58:04.917145 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 18:58:04.917159 kernel: acpiphp: Slot [0-11] registered Jan 23 18:58:04.917238 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 18:58:04.917250 kernel: acpiphp: Slot [0-12] registered Jan 23 18:58:04.917330 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 18:58:04.917341 kernel: acpiphp: Slot [0-13] registered Jan 23 18:58:04.917415 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 18:58:04.917426 kernel: acpiphp: Slot [0-14] registered Jan 23 18:58:04.917498 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 18:58:04.917509 kernel: acpiphp: Slot [0-15] registered Jan 23 18:58:04.917582 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 18:58:04.917593 kernel: acpiphp: Slot [0-16] registered Jan 23 18:58:04.917667 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 18:58:04.917678 kernel: acpiphp: Slot [0-17] registered Jan 23 18:58:04.917751 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 18:58:04.917762 kernel: acpiphp: Slot [0-18] registered Jan 23 18:58:04.917834 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 18:58:04.917845 kernel: acpiphp: Slot [0-19] registered Jan 23 18:58:04.917918 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 18:58:04.917929 kernel: acpiphp: Slot [0-20] registered Jan 23 18:58:04.918003 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 18:58:04.918014 
kernel: acpiphp: Slot [0-21] registered Jan 23 18:58:04.918087 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 18:58:04.918109 kernel: acpiphp: Slot [0-22] registered Jan 23 18:58:04.918187 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 18:58:04.918198 kernel: acpiphp: Slot [0-23] registered Jan 23 18:58:04.918285 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 18:58:04.918296 kernel: acpiphp: Slot [0-24] registered Jan 23 18:58:04.918372 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 18:58:04.918382 kernel: acpiphp: Slot [0-25] registered Jan 23 18:58:04.918455 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 18:58:04.918467 kernel: acpiphp: Slot [0-26] registered Jan 23 18:58:04.918538 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 18:58:04.918549 kernel: acpiphp: Slot [0-27] registered Jan 23 18:58:04.918623 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 18:58:04.918633 kernel: acpiphp: Slot [0-28] registered Jan 23 18:58:04.918709 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 18:58:04.918720 kernel: acpiphp: Slot [0-29] registered Jan 23 18:58:04.918793 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 18:58:04.918804 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 18:58:04.918812 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 18:58:04.918820 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 18:58:04.918828 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 18:58:04.918837 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 18:58:04.918847 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 18:58:04.918855 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 18:58:04.918863 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 18:58:04.918871 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 18:58:04.918880 kernel: ACPI: PCI: Interrupt 
link GSIB configured for IRQ 17 Jan 23 18:58:04.918888 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 18:58:04.918896 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 18:58:04.918904 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 18:58:04.918913 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 18:58:04.918922 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 18:58:04.918931 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 18:58:04.918939 kernel: iommu: Default domain type: Translated Jan 23 18:58:04.918947 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:58:04.918955 kernel: efivars: Registered efivars operations Jan 23 18:58:04.918963 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:58:04.918972 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 18:58:04.918980 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 23 18:58:04.918988 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 23 18:58:04.918997 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 23 18:58:04.919005 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 23 18:58:04.919013 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 23 18:58:04.919021 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 23 18:58:04.919029 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 23 18:58:04.919037 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 23 18:58:04.919046 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 23 18:58:04.919332 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 18:58:04.919415 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 18:58:04.919493 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 18:58:04.919504 kernel: vgaarb: loaded Jan 23 18:58:04.919512 
kernel: clocksource: Switched to clocksource kvm-clock Jan 23 18:58:04.919521 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 18:58:04.919529 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:58:04.919538 kernel: pnp: PnP ACPI init Jan 23 18:58:04.919620 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Jan 23 18:58:04.919632 kernel: pnp: PnP ACPI: found 5 devices Jan 23 18:58:04.919643 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:58:04.919652 kernel: NET: Registered PF_INET protocol family Jan 23 18:58:04.919660 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:58:04.919668 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 18:58:04.919677 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:58:04.919685 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 18:58:04.919693 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 18:58:04.919702 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 18:58:04.919710 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:58:04.919720 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:58:04.919728 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:58:04.919737 kernel: NET: Registered PF_XDP protocol family Jan 23 18:58:04.919814 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 23 18:58:04.919889 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 23 18:58:04.919965 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 23 18:58:04.920041 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] 
add_size 1000 Jan 23 18:58:04.920129 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 23 18:58:04.920217 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 23 18:58:04.920292 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 23 18:58:04.920367 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 23 18:58:04.920442 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 23 18:58:04.921113 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 23 18:58:04.921199 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 23 18:58:04.921276 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 23 18:58:04.921352 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 23 18:58:04.921430 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 23 18:58:04.921505 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 23 18:58:04.921579 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 23 18:58:04.921653 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 23 18:58:04.921728 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 23 18:58:04.921802 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 23 18:58:04.921877 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 23 18:58:04.921953 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 23 18:58:04.922027 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 23 18:58:04.922651 kernel: pci 
0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 23 18:58:04.922733 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 23 18:58:04.924182 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 23 18:58:04.924274 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 23 18:58:04.924352 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 23 18:58:04.924428 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 23 18:58:04.924507 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 23 18:58:04.924581 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 23 18:58:04.924655 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 23 18:58:04.924730 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 23 18:58:04.924804 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 23 18:58:04.924879 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 23 18:58:04.924953 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 23 18:58:04.925027 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 23 18:58:04.925121 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 23 18:58:04.925196 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 23 18:58:04.925269 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 23 18:58:04.925342 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 23 18:58:04.925416 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 23 18:58:04.925489 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 23 18:58:04.925563 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: 
can't assign; no space Jan 23 18:58:04.925636 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.925712 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.925785 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.925859 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.925932 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.926006 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.926079 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.926167 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.926253 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.926330 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.926403 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.926476 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.926549 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.926623 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.926696 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.926769 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.926842 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.926917 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.926989 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.927062 kernel: pci 0000:00:05.0: bridge 
window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.927159 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.927233 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.927306 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.927379 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.927455 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.927528 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.927602 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.927675 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:58:04.927748 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:58:04.927820 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 23 18:58:04.927895 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 23 18:58:04.927968 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 23 18:58:04.928043 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 23 18:58:04.928126 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 23 18:58:04.928199 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 23 18:58:04.928272 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 23 18:58:04.928344 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 23 18:58:04.928417 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 23 18:58:04.928490 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 23 18:58:04.928562 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 23 18:58:04.928637 kernel: 
pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned
Jan 23 18:58:04.929257 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned
Jan 23 18:58:04.929340 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.929413 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.929488 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.929566 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.929639 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.929711 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.929785 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.929863 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.929938 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.930010 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.930083 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.930645 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.930730 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.930805 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.930879 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.930956 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.931031 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.931118 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.931193 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.931266 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.931339 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.931412 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.931485 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.931560 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.931633 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.931706 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.931778 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.931851 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.931925 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space
Jan 23 18:58:04.931998 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign
Jan 23 18:58:04.932080 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 18:58:04.932952 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff]
Jan 23 18:58:04.933038 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff]
Jan 23 18:58:04.933134 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Jan 23 18:58:04.933211 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 18:58:04.933285 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff]
Jan 23 18:58:04.933615 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff]
Jan 23 18:58:04.933696 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Jan 23 18:58:04.933776 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned
Jan 23 18:58:04.933850 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 18:58:04.933929 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff]
Jan 23 18:58:04.934002 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref]
Jan 23 18:58:04.934076 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 18:58:04.934168 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff]
Jan 23 18:58:04.934252 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref]
Jan 23 18:58:04.934325 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 18:58:04.934399 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff]
Jan 23 18:58:04.934472 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref]
Jan 23 18:58:04.934545 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 18:58:04.934621 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff]
Jan 23 18:58:04.934694 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref]
Jan 23 18:58:04.934767 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 18:58:04.934839 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff]
Jan 23 18:58:04.934913 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref]
Jan 23 18:58:04.934988 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 18:58:04.935061 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff]
Jan 23 18:58:04.935156 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref]
Jan 23 18:58:04.935230 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 18:58:04.935305 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff]
Jan 23 18:58:04.935378 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref]
Jan 23 18:58:04.935452 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Jan 23 18:58:04.935524 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff]
Jan 23 18:58:04.935596 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref]
Jan 23 18:58:04.935669 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Jan 23 18:58:04.935741 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff]
Jan 23 18:58:04.935813 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref]
Jan 23 18:58:04.935889 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Jan 23 18:58:04.935962 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff]
Jan 23 18:58:04.936035 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref]
Jan 23 18:58:04.936123 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Jan 23 18:58:04.936197 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff]
Jan 23 18:58:04.936270 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref]
Jan 23 18:58:04.936343 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Jan 23 18:58:04.936415 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff]
Jan 23 18:58:04.936488 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref]
Jan 23 18:58:04.936563 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Jan 23 18:58:04.936637 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff]
Jan 23 18:58:04.936709 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref]
Jan 23 18:58:04.936782 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Jan 23 18:58:04.936854 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff]
Jan 23 18:58:04.937393 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref]
Jan 23 18:58:04.937474 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Jan 23 18:58:04.937547 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff]
Jan 23 18:58:04.937621 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref]
Jan 23 18:58:04.937699 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Jan 23 18:58:04.937772 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]
Jan 23 18:58:04.937845 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff]
Jan 23 18:58:04.937918 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref]
Jan 23 18:58:04.937991 kernel: pci 0000:00:04.1: PCI bridge to [bus 13]
Jan 23 18:58:04.938064 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]
Jan 23 18:58:04.938150 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff]
Jan 23 18:58:04.938238 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref]
Jan 23 18:58:04.938313 kernel: pci 0000:00:04.2: PCI bridge to [bus 14]
Jan 23 18:58:04.938385 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]
Jan 23 18:58:04.938459 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff]
Jan 23 18:58:04.938533 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref]
Jan 23 18:58:04.938606 kernel: pci 0000:00:04.3: PCI bridge to [bus 15]
Jan 23 18:58:04.938678 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]
Jan 23 18:58:04.938751 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff]
Jan 23 18:58:04.938826 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref]
Jan 23 18:58:04.938899 kernel: pci 0000:00:04.4: PCI bridge to [bus 16]
Jan 23 18:58:04.938972 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]
Jan 23 18:58:04.939045 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff]
Jan 23 18:58:04.939323 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref]
Jan 23 18:58:04.939403 kernel: pci 0000:00:04.5: PCI bridge to [bus 17]
Jan 23 18:58:04.939476 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]
Jan 23 18:58:04.939552 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff]
Jan 23 18:58:04.939625 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref]
Jan 23 18:58:04.939952 kernel: pci 0000:00:04.6: PCI bridge to [bus 18]
Jan 23 18:58:04.940033 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]
Jan 23 18:58:04.942202 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff]
Jan 23 18:58:04.942305 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref]
Jan 23 18:58:04.942383 kernel: pci 0000:00:04.7: PCI bridge to [bus 19]
Jan 23 18:58:04.942462 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]
Jan 23 18:58:04.942536 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff]
Jan 23 18:58:04.942611 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref]
Jan 23 18:58:04.942686 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a]
Jan 23 18:58:04.942760 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]
Jan 23 18:58:04.942833 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff]
Jan 23 18:58:04.942907 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref]
Jan 23 18:58:04.942983 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b]
Jan 23 18:58:04.943062 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]
Jan 23 18:58:04.943146 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff]
Jan 23 18:58:04.943220 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref]
Jan 23 18:58:04.943296 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c]
Jan 23 18:58:04.943369 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]
Jan 23 18:58:04.943443 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff]
Jan 23 18:58:04.943515 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref]
Jan 23 18:58:04.943594 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d]
Jan 23 18:58:04.943667 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]
Jan 23 18:58:04.943740 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff]
Jan 23 18:58:04.943815 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref]
Jan 23 18:58:04.943890 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e]
Jan 23 18:58:04.943963 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]
Jan 23 18:58:04.944037 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff]
Jan 23 18:58:04.944119 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref]
Jan 23 18:58:04.944195 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 18:58:04.944262 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 18:58:04.944328 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 18:58:04.944393 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Jan 23 18:58:04.944457 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 23 18:58:04.944522 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window]
Jan 23 18:58:04.944598 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff]
Jan 23 18:58:04.944670 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff]
Jan 23 18:58:04.944738 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref]
Jan 23 18:58:04.944812 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff]
Jan 23 18:58:04.944884 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff]
Jan 23 18:58:04.944954 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref]
Jan 23 18:58:04.945028 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff]
Jan 23 18:58:04.945097 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref]
Jan 23 18:58:04.947002 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff]
Jan 23 18:58:04.947072 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref]
Jan 23 18:58:04.947214 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff]
Jan 23 18:58:04.947282 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref]
Jan 23 18:58:04.947351 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff]
Jan 23 18:58:04.947416 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref]
Jan 23 18:58:04.947489 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff]
Jan 23 18:58:04.947553 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref]
Jan 23 18:58:04.947624 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff]
Jan 23 18:58:04.947688 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref]
Jan 23 18:58:04.947756 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff]
Jan 23 18:58:04.947820 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref]
Jan 23 18:58:04.947890 kernel: pci_bus 0000:0a: resource 1 [mem 0x83000000-0x831fffff]
Jan 23 18:58:04.947953 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref]
Jan 23 18:58:04.948021 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff]
Jan 23 18:58:04.948085 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref]
Jan 23 18:58:04.948162 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff]
Jan 23 18:58:04.948225 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref]
Jan 23 18:58:04.948298 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff]
Jan 23 18:58:04.948362 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref]
Jan 23 18:58:04.948429 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff]
Jan 23 18:58:04.948493 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref]
Jan 23 18:58:04.948561 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff]
Jan 23 18:58:04.948628 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref]
Jan 23 18:58:04.948697 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff]
Jan 23 18:58:04.948761 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref]
Jan 23 18:58:04.948830 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff]
Jan 23 18:58:04.948893 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref]
Jan 23 18:58:04.948962 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff]
Jan 23 18:58:04.949026 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff]
Jan 23 18:58:04.949091 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref]
Jan 23 18:58:04.949549 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff]
Jan 23 18:58:04.949616 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff]
Jan 23 18:58:04.949680 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref]
Jan 23 18:58:04.949749 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff]
Jan 23 18:58:04.949813 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff]
Jan 23 18:58:04.949881 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref]
Jan 23 18:58:04.949952 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff]
Jan 23 18:58:04.950016 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff]
Jan 23 18:58:04.950079 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref]
Jan 23 18:58:04.950203 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff]
Jan 23 18:58:04.950280 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff]
Jan 23 18:58:04.950344 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref]
Jan 23 18:58:04.950414 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff]
Jan 23 18:58:04.950478 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff]
Jan 23 18:58:04.950541 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref]
Jan 23 18:58:04.950608 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff]
Jan 23 18:58:04.950671 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff]
Jan 23 18:58:04.950735 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref]
Jan 23 18:58:04.950801 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff]
Jan 23 18:58:04.950867 kernel: pci_bus 0000:19: resource 1 [mem 0x81200000-0x813fffff]
Jan 23 18:58:04.950931 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref]
Jan 23 18:58:04.951000 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff]
Jan 23 18:58:04.951064 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff]
Jan 23 18:58:04.951139 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref]
Jan 23 18:58:04.951206 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff]
Jan 23 18:58:04.951273 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff]
Jan 23 18:58:04.951336 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref]
Jan 23 18:58:04.951403 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff]
Jan 23 18:58:04.951466 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff]
Jan 23 18:58:04.951529 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref]
Jan 23 18:58:04.951596 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff]
Jan 23 18:58:04.951660 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff]
Jan 23 18:58:04.951726 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref]
Jan 23 18:58:04.951796 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff]
Jan 23 18:58:04.951860 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff]
Jan 23 18:58:04.951923 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref]
Jan 23 18:58:04.951934 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 18:58:04.951943 kernel: PCI: CLS 0 bytes, default 64
Jan 23 18:58:04.951951 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 18:58:04.951959 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB)
Jan 23 18:58:04.951969 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 23 18:58:04.951977 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134287020, max_idle_ns: 440795320515 ns
Jan 23 18:58:04.951985 kernel: Initialise system trusted keyrings
Jan 23 18:58:04.951993 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 18:58:04.952001 kernel: Key type asymmetric registered
Jan 23 18:58:04.952008 kernel: Asymmetric key parser 'x509' registered
Jan 23 18:58:04.952016 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 18:58:04.952024 kernel: io scheduler mq-deadline registered
Jan 23 18:58:04.952032 kernel: io scheduler kyber registered
Jan 23 18:58:04.952041 kernel: io scheduler bfq registered
Jan 23 18:58:04.952124 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 23 18:58:04.952197 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 23 18:58:04.952268 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 23 18:58:04.952338 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 23 18:58:04.952409 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 23 18:58:04.952478 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 23 18:58:04.952550 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 23 18:58:04.952619 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 23 18:58:04.952688 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 23 18:58:04.952757 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 23 18:58:04.952828 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 23 18:58:04.952898 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 23 18:58:04.952968 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 23 18:58:04.953036 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 23 18:58:04.953113 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 23 18:58:04.953183 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 23 18:58:04.953193 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 18:58:04.953263 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 23 18:58:04.953333 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 23 18:58:04.953401 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Jan 23 18:58:04.953469 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Jan 23 18:58:04.953537 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Jan 23 18:58:04.953605 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Jan 23 18:58:04.953678 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Jan 23 18:58:04.953746 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Jan 23 18:58:04.953815 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Jan 23 18:58:04.953882 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Jan 23 18:58:04.953952 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Jan 23 18:58:04.954023 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Jan 23 18:58:04.954092 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Jan 23 18:58:04.954177 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Jan 23 18:58:04.954264 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Jan 23 18:58:04.954334 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Jan 23 18:58:04.954345 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 18:58:04.954413 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Jan 23 18:58:04.954481 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Jan 23 18:58:04.954553 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41
Jan 23 18:58:04.954621 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41
Jan 23 18:58:04.954689 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42
Jan 23 18:58:04.954758 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42
Jan 23 18:58:04.954827 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43
Jan 23 18:58:04.954895 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43
Jan 23 18:58:04.954964 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44
Jan 23 18:58:04.955032 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44
Jan 23 18:58:04.955123 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45
Jan 23 18:58:04.955193 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45
Jan 23 18:58:04.955261 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46
Jan 23 18:58:04.955329 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46
Jan 23 18:58:04.955399 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47
Jan 23 18:58:04.955467 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47
Jan 23 18:58:04.955477 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 23 18:58:04.955544 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48
Jan 23 18:58:04.955612 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48
Jan 23 18:58:04.955683 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49
Jan 23 18:58:04.955752 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49
Jan 23 18:58:04.955820 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50
Jan 23 18:58:04.955888 kernel: pcieport 0000:00:05.2: AER: enabled with IRQ 50
Jan 23 18:58:04.955956 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51
Jan 23 18:58:04.956024 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51
Jan 23 18:58:04.956093 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52
Jan 23 18:58:04.956168 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52
Jan 23 18:58:04.956181 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 18:58:04.956189 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 18:58:04.956197 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:58:04.956205 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 18:58:04.956213 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 18:58:04.956221 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 18:58:04.956293 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 23 18:58:04.956305 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 18:58:04.956370 kernel: rtc_cmos 00:03: registered as rtc0
Jan 23 18:58:04.956433 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T18:58:04 UTC (1769194684)
Jan 23 18:58:04.956496 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 23 18:58:04.956505 kernel: intel_pstate: CPU model not supported
Jan 23 18:58:04.956513 kernel: efifb: probing for efifb
Jan 23 18:58:04.956521 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k
Jan 23 18:58:04.956529 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 23 18:58:04.956536 kernel: efifb: scrolling: redraw
Jan 23 18:58:04.956544 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 18:58:04.956554 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 18:58:04.956561 kernel: fb0: EFI VGA frame buffer device
Jan 23 18:58:04.956569 kernel: pstore: Using crash dump compression: deflate
Jan 23 18:58:04.956577 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 18:58:04.956584 kernel: NET: Registered PF_INET6 protocol family
Jan 23 18:58:04.956592 kernel: Segment Routing with IPv6
Jan 23 18:58:04.956600 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 18:58:04.956608 kernel: NET: Registered PF_PACKET protocol family
Jan 23 18:58:04.956615 kernel: Key type dns_resolver registered
Jan 23 18:58:04.956625 kernel: IPI shorthand broadcast: enabled
Jan 23 18:58:04.956633 kernel: sched_clock: Marking stable (3956144164, 153612837)->(4230139959, -120382958)
Jan 23 18:58:04.956641 kernel: registered taskstats version 1
Jan 23 18:58:04.956648 kernel: Loading compiled-in X.509 certificates
Jan 23 18:58:04.956656 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 18:58:04.956664 kernel: Demotion targets for Node 0: null
Jan 23 18:58:04.956672 kernel: Key type .fscrypt registered
Jan 23 18:58:04.956679 kernel: Key type fscrypt-provisioning registered
Jan 23 18:58:04.956687 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 18:58:04.956696 kernel: ima: Allocated hash algorithm: sha1
Jan 23 18:58:04.956704 kernel: ima: No architecture policies found
Jan 23 18:58:04.956711 kernel: clk: Disabling unused clocks
Jan 23 18:58:04.956719 kernel: Warning: unable to open an initial console.
Jan 23 18:58:04.956727 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 18:58:04.956735 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 18:58:04.956742 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 18:58:04.956750 kernel: Run /init as init process
Jan 23 18:58:04.956758 kernel: with arguments:
Jan 23 18:58:04.956767 kernel: /init
Jan 23 18:58:04.956775 kernel: with environment:
Jan 23 18:58:04.956782 kernel: HOME=/
Jan 23 18:58:04.956789 kernel: TERM=linux
Jan 23 18:58:04.956799 systemd[1]: Successfully made /usr/ read-only.
Jan 23 18:58:04.956809 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:58:04.956818 systemd[1]: Detected virtualization kvm.
Jan 23 18:58:04.956826 systemd[1]: Detected architecture x86-64.
Jan 23 18:58:04.956835 systemd[1]: Running in initrd.
Jan 23 18:58:04.956843 systemd[1]: No hostname configured, using default hostname.
Jan 23 18:58:04.956851 systemd[1]: Hostname set to .
Jan 23 18:58:04.956860 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 18:58:04.956877 systemd[1]: Queued start job for default target initrd.target.
Jan 23 18:58:04.956886 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:58:04.956894 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:58:04.956903 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 18:58:04.956911 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:58:04.956920 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 18:58:04.956930 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 18:58:04.956939 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 18:58:04.956948 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 18:58:04.956956 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:58:04.956964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:58:04.956972 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:58:04.956980 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:58:04.956990 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:58:04.956998 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:58:04.957006 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:58:04.957014 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:58:04.957023 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 18:58:04.957031 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 18:58:04.957039 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:58:04.957047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:58:04.957057 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:58:04.957068 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:58:04.957076 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 18:58:04.957084 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:58:04.957092 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 18:58:04.957107 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 18:58:04.957116 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 18:58:04.957124 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:58:04.957132 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:58:04.957142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:58:04.957150 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 18:58:04.957159 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:58:04.957185 systemd-journald[223]: Collecting audit messages is disabled.
Jan 23 18:58:04.957209 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 18:58:04.957217 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:58:04.957226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:58:04.957238 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 18:58:04.957246 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 18:58:04.957255 kernel: Bridge firewalling registered
Jan 23 18:58:04.957263 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:58:04.957271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:58:04.957280 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:58:04.957288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:58:04.957297 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:58:04.957307 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 18:58:04.957315 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:58:04.957325 systemd-journald[223]: Journal started
Jan 23 18:58:04.957344 systemd-journald[223]: Runtime Journal (/run/log/journal/1e971e41df424dfaa9501e0cf6ba5336) is 8M, max 78M, 70M free.
Jan 23 18:58:04.961130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:58:04.866665 systemd-modules-load[225]: Inserted module 'overlay'
Jan 23 18:58:04.962868 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:58:04.914001 systemd-modules-load[225]: Inserted module 'br_netfilter'
Jan 23 18:58:04.965210 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:58:04.973617 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:58:04.981911 systemd-tmpfiles[256]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 18:58:04.986815 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:58:04.988917 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:58:05.025381 systemd-resolved[296]: Positive Trust Anchors:
Jan 23 18:58:05.026022 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:58:05.026055 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:58:05.030264 systemd-resolved[296]: Defaulting to hostname 'linux'.
Jan 23 18:58:05.031016 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:58:05.032233 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:58:05.048119 kernel: SCSI subsystem initialized
Jan 23 18:58:05.058115 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 18:58:05.068117 kernel: iscsi: registered transport (tcp)
Jan 23 18:58:05.089238 kernel: iscsi: registered transport (qla4xxx)
Jan 23 18:58:05.089285 kernel: QLogic iSCSI HBA Driver
Jan 23 18:58:05.104899 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:58:05.121374 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:58:05.122200 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:58:05.163351 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:58:05.165553 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 18:58:05.218154 kernel: raid6: avx512x4 gen() 43094 MB/s Jan 23 18:58:05.234146 kernel: raid6: avx512x2 gen() 46548 MB/s Jan 23 18:58:05.251156 kernel: raid6: avx512x1 gen() 44399 MB/s Jan 23 18:58:05.268151 kernel: raid6: avx2x4 gen() 34461 MB/s Jan 23 18:58:05.285158 kernel: raid6: avx2x2 gen() 33999 MB/s Jan 23 18:58:05.302490 kernel: raid6: avx2x1 gen() 26622 MB/s Jan 23 18:58:05.302588 kernel: raid6: using algorithm avx512x2 gen() 46548 MB/s Jan 23 18:58:05.320480 kernel: raid6: .... xor() 26784 MB/s, rmw enabled Jan 23 18:58:05.320558 kernel: raid6: using avx512x2 recovery algorithm Jan 23 18:58:05.340149 kernel: xor: automatically using best checksumming function avx Jan 23 18:58:05.477157 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 18:58:05.487367 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:58:05.492734 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:58:05.515993 systemd-udevd[474]: Using default interface naming scheme 'v255'. Jan 23 18:58:05.520608 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:58:05.527328 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:58:05.551031 dracut-pre-trigger[488]: rd.md=0: removing MD RAID activation Jan 23 18:58:05.588538 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:58:05.594012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:58:05.668002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:58:05.670456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 23 18:58:05.732121 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 23 18:58:05.761137 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:58:05.772129 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 23 18:58:05.778125 kernel: ACPI: bus type USB registered Jan 23 18:58:05.784876 kernel: usbcore: registered new interface driver usbfs Jan 23 18:58:05.784921 kernel: usbcore: registered new interface driver hub Jan 23 18:58:05.784932 kernel: usbcore: registered new device driver usb Jan 23 18:58:05.788149 kernel: AES CTR mode by8 optimization enabled Jan 23 18:58:05.792178 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 18:58:05.792209 kernel: GPT:17805311 != 104857599 Jan 23 18:58:05.792220 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 18:58:05.793133 kernel: GPT:17805311 != 104857599 Jan 23 18:58:05.794669 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 18:58:05.794695 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:58:05.822689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:58:05.824227 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:58:05.828197 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 18:58:05.826513 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:58:05.830340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:58:05.835112 kernel: libata version 3.00 loaded. Jan 23 18:58:05.839757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:58:05.839853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:58:05.842058 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 18:58:05.850024 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 23 18:58:05.850391 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 23 18:58:05.850492 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 23 18:58:05.850587 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 23 18:58:05.850678 kernel: hub 1-0:1.0: USB hub found Jan 23 18:58:05.850782 kernel: hub 1-0:1.0: 2 ports detected Jan 23 18:58:05.850046 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:58:05.855439 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 18:58:05.855581 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 18:58:05.858404 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 18:58:05.858545 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 18:58:05.860174 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 18:58:05.862112 kernel: scsi host0: ahci Jan 23 18:58:05.863119 kernel: scsi host1: ahci Jan 23 18:58:05.865499 kernel: scsi host2: ahci Jan 23 18:58:05.865633 kernel: scsi host3: ahci Jan 23 18:58:05.870127 kernel: scsi host4: ahci Jan 23 18:58:05.872246 kernel: scsi host5: ahci Jan 23 18:58:05.875117 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 61 lpm-pol 1 Jan 23 18:58:05.875146 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 61 lpm-pol 1 Jan 23 18:58:05.876827 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 61 lpm-pol 1 Jan 23 18:58:05.876866 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 61 lpm-pol 1 Jan 23 18:58:05.879752 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 61 lpm-pol 1 Jan 23 18:58:05.879780 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 61 lpm-pol 1 Jan 23 18:58:05.880973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:58:05.900511 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 18:58:05.919070 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 23 18:58:05.919478 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 18:58:05.927054 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 18:58:05.934173 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:58:05.935238 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:58:05.958116 disk-uuid[671]: Primary Header is updated. Jan 23 18:58:05.958116 disk-uuid[671]: Secondary Entries is updated. Jan 23 18:58:05.958116 disk-uuid[671]: Secondary Header is updated. Jan 23 18:58:05.967113 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:58:06.068141 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 23 18:58:06.200557 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 18:58:06.200711 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 18:58:06.200754 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 18:58:06.206319 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 18:58:06.207858 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 18:58:06.214180 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 18:58:06.280177 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 18:58:06.293226 kernel: usbcore: registered new interface driver usbhid Jan 23 18:58:06.293310 kernel: usbhid: USB HID core driver Jan 23 18:58:06.306818 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 23 18:58:06.306892 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Jan 23 18:58:06.656587 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:58:06.658881 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:58:06.660590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:58:06.662741 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:58:06.667000 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:58:06.713577 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:58:06.986197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:58:06.989557 disk-uuid[672]: The operation has completed successfully. Jan 23 18:58:07.090283 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:58:07.090417 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:58:07.128272 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 18:58:07.165547 sh[698]: Success Jan 23 18:58:07.200758 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 18:58:07.200863 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:58:07.202327 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:58:07.230147 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 18:58:07.321483 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:58:07.329289 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 18:58:07.344817 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 18:58:07.375174 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (710) Jan 23 18:58:07.380203 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841 Jan 23 18:58:07.380281 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:58:07.403361 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:58:07.403450 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:58:07.406191 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 18:58:07.407696 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:58:07.409191 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:58:07.410452 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:58:07.417365 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 18:58:07.468138 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (747) Jan 23 18:58:07.476120 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:58:07.476171 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:58:07.490896 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:58:07.490951 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:58:07.502356 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:58:07.503875 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:58:07.507286 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 23 18:58:07.546477 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:58:07.549988 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:58:07.585146 systemd-networkd[880]: lo: Link UP Jan 23 18:58:07.585153 systemd-networkd[880]: lo: Gained carrier Jan 23 18:58:07.586187 systemd-networkd[880]: Enumeration completed Jan 23 18:58:07.586268 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:58:07.586867 systemd-networkd[880]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:58:07.586873 systemd-networkd[880]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:58:07.586879 systemd[1]: Reached target network.target - Network. Jan 23 18:58:07.587781 systemd-networkd[880]: eth0: Link UP Jan 23 18:58:07.589233 systemd-networkd[880]: eth0: Gained carrier Jan 23 18:58:07.589250 systemd-networkd[880]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:58:07.623228 systemd-networkd[880]: eth0: DHCPv4 address 10.0.5.167/25, gateway 10.0.5.129 acquired from 10.0.5.129 Jan 23 18:58:07.983482 ignition[836]: Ignition 2.22.0 Jan 23 18:58:07.983494 ignition[836]: Stage: fetch-offline Jan 23 18:58:07.983532 ignition[836]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:58:07.983541 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:58:07.986171 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 23 18:58:07.983627 ignition[836]: parsed url from cmdline: "" Jan 23 18:58:07.983630 ignition[836]: no config URL provided Jan 23 18:58:07.983635 ignition[836]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:58:07.983642 ignition[836]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:58:07.983647 ignition[836]: failed to fetch config: resource requires networking Jan 23 18:58:07.989218 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 18:58:07.983792 ignition[836]: Ignition finished successfully Jan 23 18:58:08.018083 ignition[890]: Ignition 2.22.0 Jan 23 18:58:08.018815 ignition[890]: Stage: fetch Jan 23 18:58:08.018956 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:58:08.018965 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:58:08.019036 ignition[890]: parsed url from cmdline: "" Jan 23 18:58:08.019040 ignition[890]: no config URL provided Jan 23 18:58:08.019044 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:58:08.019051 ignition[890]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:58:08.019148 ignition[890]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 23 18:58:08.019161 ignition[890]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 23 18:58:08.019180 ignition[890]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jan 23 18:58:08.923876 ignition[890]: GET result: OK Jan 23 18:58:08.924031 ignition[890]: parsing config with SHA512: 4d4028423ea2a7b843eace97bed9faac475b930062b07202a6ae16b5edf5dda71d5297a801ea36df262fb53ecbaac5143af50cbe241a64c4053920d1111fb7ab Jan 23 18:58:08.930350 unknown[890]: fetched base config from "system" Jan 23 18:58:08.930373 unknown[890]: fetched base config from "system" Jan 23 18:58:08.930386 unknown[890]: fetched user config from "openstack" Jan 23 18:58:08.932428 ignition[890]: fetch: fetch complete Jan 23 18:58:08.932443 ignition[890]: fetch: fetch passed Jan 23 18:58:08.932561 ignition[890]: Ignition finished successfully Jan 23 18:58:08.938743 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 18:58:08.942562 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 18:58:08.969512 systemd-networkd[880]: eth0: Gained IPv6LL Jan 23 18:58:09.010978 ignition[896]: Ignition 2.22.0 Jan 23 18:58:09.011004 ignition[896]: Stage: kargs Jan 23 18:58:09.012441 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:58:09.012466 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:58:09.013740 ignition[896]: kargs: kargs passed Jan 23 18:58:09.017467 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:58:09.013825 ignition[896]: Ignition finished successfully Jan 23 18:58:09.022176 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:58:09.057991 ignition[902]: Ignition 2.22.0 Jan 23 18:58:09.058005 ignition[902]: Stage: disks Jan 23 18:58:09.058167 ignition[902]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:58:09.058176 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:58:09.058757 ignition[902]: disks: disks passed Jan 23 18:58:09.058794 ignition[902]: Ignition finished successfully Jan 23 18:58:09.062607 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 23 18:58:09.064904 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:58:09.066793 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:58:09.067914 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:58:09.069378 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:58:09.070794 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:58:09.075359 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:58:09.114026 systemd-fsck[911]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 18:58:09.121268 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:58:09.125337 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:58:09.297133 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 18:58:09.298391 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:58:09.300541 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:58:09.304499 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:58:09.307559 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:58:09.309714 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 18:58:09.317238 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 23 18:58:09.318233 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:58:09.319090 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 23 18:58:09.325491 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:58:09.327826 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:58:09.344131 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919) Jan 23 18:58:09.348340 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:58:09.348435 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:58:09.359868 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:58:09.359957 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:58:09.362965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:58:09.660147 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:58:09.751630 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 18:58:09.765996 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory Jan 23 18:58:09.777634 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 18:58:09.787012 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 18:58:10.006890 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:58:10.011044 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:58:10.014328 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:58:10.048648 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:58:10.053771 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:58:10.089446 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 18:58:10.101121 ignition[1036]: INFO : Ignition 2.22.0 Jan 23 18:58:10.101121 ignition[1036]: INFO : Stage: mount Jan 23 18:58:10.101121 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:58:10.101121 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:58:10.105196 ignition[1036]: INFO : mount: mount passed Jan 23 18:58:10.105196 ignition[1036]: INFO : Ignition finished successfully Jan 23 18:58:10.106510 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:58:10.729137 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:58:12.745163 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:58:16.757124 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:58:16.768610 coreos-metadata[921]: Jan 23 18:58:16.768 WARN failed to locate config-drive, using the metadata service API instead Jan 23 18:58:16.793393 coreos-metadata[921]: Jan 23 18:58:16.793 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 18:58:17.421336 coreos-metadata[921]: Jan 23 18:58:17.421 INFO Fetch successful Jan 23 18:58:17.421336 coreos-metadata[921]: Jan 23 18:58:17.421 INFO wrote hostname ci-4459-2-3-4-27da11ba20 to /sysroot/etc/hostname Jan 23 18:58:17.425493 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 23 18:58:17.425754 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 23 18:58:17.429258 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:58:17.473492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 23 18:58:17.511166 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1054) Jan 23 18:58:17.516677 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:58:17.516748 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:58:17.528584 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:58:17.528664 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:58:17.535014 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:58:17.572856 ignition[1072]: INFO : Ignition 2.22.0 Jan 23 18:58:17.572856 ignition[1072]: INFO : Stage: files Jan 23 18:58:17.574266 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:58:17.574266 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:58:17.574266 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:58:17.575853 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:58:17.575853 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:58:17.579783 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:58:17.580329 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:58:17.581148 unknown[1072]: wrote ssh authorized keys file for user: core Jan 23 18:58:17.581825 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:58:17.586935 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:58:17.586935 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:58:17.588678 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 18:58:17.589206 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:58:17.589206 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:58:17.590523 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:58:17.590523 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:58:17.590523 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 18:58:17.877149 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 23 18:58:18.689123 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:58:18.693494 ignition[1072]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:58:18.693494 ignition[1072]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:58:18.693494 ignition[1072]: INFO : files: files passed Jan 23 18:58:18.693494 ignition[1072]: INFO : Ignition finished successfully Jan 23 18:58:18.696622 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 18:58:18.699143 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:58:18.701310 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:58:18.715171 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:58:18.715832 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:58:18.724781 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:58:18.726013 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:58:18.726571 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:58:18.728686 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:58:18.729733 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:58:18.731398 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:58:18.794314 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:58:18.794491 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:58:18.796343 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:58:18.797596 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:58:18.799168 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:58:18.800321 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:58:18.835556 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:58:18.837727 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 23 18:58:18.871544 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:58:18.872781 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:58:18.874587 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:58:18.876212 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:58:18.876402 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:58:18.878561 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:58:18.880155 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:58:18.881729 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:58:18.883208 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:58:18.884550 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:58:18.886085 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:58:18.887581 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:58:18.889201 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:58:18.890782 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:58:18.892312 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:58:18.893899 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:58:18.895393 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:58:18.895606 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:58:18.897606 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:58:18.899351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:58:18.900764 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 23 18:58:18.900941 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:58:18.902355 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:58:18.902563 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:58:18.904616 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:58:18.904861 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:58:18.906451 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:58:18.906668 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:58:18.909278 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:58:18.914280 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:58:18.917408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:58:18.917597 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:58:18.919429 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:58:18.920223 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:58:18.928651 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:58:18.931152 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:58:18.950475 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:58:18.955030 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:58:18.955161 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 23 18:58:18.958280 ignition[1125]: INFO : Ignition 2.22.0
Jan 23 18:58:18.960015 ignition[1125]: INFO : Stage: umount
Jan 23 18:58:18.960015 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:58:18.960015 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 18:58:18.960015 ignition[1125]: INFO : umount: umount passed
Jan 23 18:58:18.960015 ignition[1125]: INFO : Ignition finished successfully
Jan 23 18:58:18.962512 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 18:58:18.962631 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 18:58:18.963735 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 18:58:18.963777 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 18:58:18.964487 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 18:58:18.964529 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 18:58:18.965346 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 18:58:18.965383 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 18:58:18.966214 systemd[1]: Stopped target network.target - Network.
Jan 23 18:58:18.967064 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 18:58:18.967123 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 18:58:18.967968 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 18:58:18.968853 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 18:58:18.972147 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:58:18.972633 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 18:58:18.973485 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 18:58:18.974406 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 18:58:18.974440 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:58:18.975353 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 18:58:18.975384 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:58:18.976459 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 18:58:18.976503 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 18:58:18.977341 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 18:58:18.977377 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 18:58:18.978202 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 18:58:18.978243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 18:58:18.979137 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 18:58:18.980086 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 18:58:18.985775 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 18:58:18.985882 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 18:58:18.989515 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 18:58:18.989764 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 18:58:18.989867 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 18:58:18.992257 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 18:58:18.992719 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 18:58:18.993784 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 18:58:18.993834 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:58:18.995536 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 18:58:18.995998 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 18:58:18.996044 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 18:58:18.996547 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 18:58:18.996583 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:58:18.997081 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 18:58:18.997166 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:58:18.998025 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 18:58:18.998066 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:58:18.998952 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:58:19.001538 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 18:58:19.001595 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:58:19.011359 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 18:58:19.011508 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:58:19.012351 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 18:58:19.012409 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:58:19.013805 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 18:58:19.013835 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:58:19.014789 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 18:58:19.014834 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:58:19.016180 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 18:58:19.016220 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:58:19.017559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 18:58:19.017605 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:58:19.021660 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 18:58:19.023545 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 18:58:19.023598 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:58:19.025213 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 18:58:19.025265 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:58:19.026639 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 18:58:19.026694 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:58:19.028990 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 18:58:19.029039 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:58:19.030635 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:58:19.030679 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:58:19.034553 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 18:58:19.034612 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 23 18:58:19.034653 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 18:58:19.034694 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:58:19.035073 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 18:58:19.036222 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 18:58:19.036980 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 18:58:19.037208 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 18:58:19.039786 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 18:58:19.042647 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 18:58:19.060840 systemd[1]: Switching root.
Jan 23 18:58:19.117219 systemd-journald[223]: Journal stopped
Jan 23 18:58:20.638640 systemd-journald[223]: Received SIGTERM from PID 1 (systemd).
Jan 23 18:58:20.639310 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 18:58:20.639329 kernel: SELinux: policy capability open_perms=1
Jan 23 18:58:20.639342 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 18:58:20.639355 kernel: SELinux: policy capability always_check_network=0
Jan 23 18:58:20.639365 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 18:58:20.639375 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 18:58:20.639384 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 18:58:20.639394 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 18:58:20.639404 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 18:58:20.639421 kernel: audit: type=1403 audit(1769194699.615:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 18:58:20.639432 systemd[1]: Successfully loaded SELinux policy in 108.894ms.
Jan 23 18:58:20.639447 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.192ms.
Jan 23 18:58:20.639459 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:58:20.639473 systemd[1]: Detected virtualization kvm.
Jan 23 18:58:20.639483 systemd[1]: Detected architecture x86-64.
Jan 23 18:58:20.639496 systemd[1]: Detected first boot.
Jan 23 18:58:20.639507 systemd[1]: Hostname set to .
Jan 23 18:58:20.639519 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 18:58:20.639531 zram_generator::config[1171]: No configuration found.
Jan 23 18:58:20.639542 kernel: Guest personality initialized and is inactive
Jan 23 18:58:20.639552 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 18:58:20.639567 kernel: Initialized host personality
Jan 23 18:58:20.639576 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 18:58:20.639586 systemd[1]: Populated /etc with preset unit settings.
Jan 23 18:58:20.639597 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 18:58:20.639608 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 18:58:20.639619 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 18:58:20.639629 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 18:58:20.639640 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 18:58:20.639651 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 18:58:20.639661 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 18:58:20.639671 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 18:58:20.639681 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 18:58:20.639691 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 18:58:20.639703 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 18:58:20.639715 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 18:58:20.639725 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:58:20.639735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:58:20.639746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 18:58:20.639757 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 18:58:20.639768 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 18:58:20.639780 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:58:20.639791 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 18:58:20.639800 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:58:20.639813 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:58:20.639823 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 18:58:20.639833 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 18:58:20.639843 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 18:58:20.639853 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 18:58:20.639863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:58:20.639875 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:58:20.639885 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:58:20.639895 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:58:20.639906 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 18:58:20.639916 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 18:58:20.639926 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 18:58:20.639936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:58:20.639950 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:58:20.639960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:58:20.639972 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 18:58:20.639983 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 18:58:20.639996 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 18:58:20.640006 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 18:58:20.640016 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:20.640026 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 18:58:20.640036 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 18:58:20.640046 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 18:58:20.640057 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 18:58:20.640069 systemd[1]: Reached target machines.target - Containers.
Jan 23 18:58:20.640079 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 18:58:20.640089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:58:20.640113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:58:20.640123 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 18:58:20.640132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:58:20.640142 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 18:58:20.640152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:58:20.640164 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 18:58:20.640175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:58:20.640186 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 18:58:20.640196 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 18:58:20.640208 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 18:58:20.640220 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 18:58:20.640230 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 18:58:20.640242 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:58:20.640252 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:58:20.640262 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:58:20.640274 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:58:20.640285 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 18:58:20.640295 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 18:58:20.640305 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:58:20.640315 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 18:58:20.640325 systemd[1]: Stopped verity-setup.service.
Jan 23 18:58:20.640336 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:20.640346 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 18:58:20.640356 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 18:58:20.640370 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 18:58:20.640380 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 18:58:20.640390 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 18:58:20.640400 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 18:58:20.640410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:58:20.640420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:58:20.640431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:58:20.640441 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 18:58:20.640452 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 18:58:20.640462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:58:20.640472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:58:20.640482 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:58:20.640520 systemd-journald[1238]: Collecting audit messages is disabled.
Jan 23 18:58:20.640548 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 18:58:20.640558 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:58:20.640568 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:58:20.640581 systemd-journald[1238]: Journal started
Jan 23 18:58:20.640603 systemd-journald[1238]: Runtime Journal (/run/log/journal/1e971e41df424dfaa9501e0cf6ba5336) is 8M, max 78M, 70M free.
Jan 23 18:58:20.375179 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 18:58:20.394018 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 18:58:20.394422 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 18:58:20.644170 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:58:20.645410 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:58:20.646414 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 18:58:20.647309 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 18:58:20.648114 kernel: loop: module loaded
Jan 23 18:58:20.649652 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:58:20.649796 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:58:20.663152 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:58:20.663712 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 18:58:20.663739 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 18:58:20.665088 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 18:58:20.672257 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 18:58:20.672854 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:58:20.675246 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 18:58:20.681274 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 18:58:20.682412 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 18:58:20.686255 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 18:58:20.687184 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 18:58:20.691180 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 18:58:20.693827 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 18:58:20.698158 kernel: fuse: init (API version 7.41)
Jan 23 18:58:20.701240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:58:20.704371 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 18:58:20.704706 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 18:58:20.722347 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 18:58:20.728845 systemd-journald[1238]: Time spent on flushing to /var/log/journal/1e971e41df424dfaa9501e0cf6ba5336 is 41.882ms for 1694 entries.
Jan 23 18:58:20.728845 systemd-journald[1238]: System Journal (/var/log/journal/1e971e41df424dfaa9501e0cf6ba5336) is 8M, max 584.8M, 576.8M free.
Jan 23 18:58:20.777348 systemd-journald[1238]: Received client request to flush runtime journal.
Jan 23 18:58:20.777390 kernel: loop0: detected capacity change from 0 to 110984
Jan 23 18:58:20.777410 kernel: ACPI: bus type drm_connector registered
Jan 23 18:58:20.728803 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 18:58:20.734864 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 18:58:20.740037 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 18:58:20.740662 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 23 18:58:20.740673 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 23 18:58:20.750506 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:58:20.752979 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 18:58:20.753289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 18:58:20.759069 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 18:58:20.784579 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 18:58:20.794795 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 18:58:20.805083 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 18:58:20.827121 kernel: loop1: detected capacity change from 0 to 128560
Jan 23 18:58:20.840341 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 18:58:20.845778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:58:20.873609 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:58:20.877113 kernel: loop2: detected capacity change from 0 to 229808
Jan 23 18:58:20.877670 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jan 23 18:58:20.877900 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jan 23 18:58:20.883451 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:58:20.929135 kernel: loop3: detected capacity change from 0 to 1640
Jan 23 18:58:20.959872 kernel: loop4: detected capacity change from 0 to 110984
Jan 23 18:58:20.997218 kernel: loop5: detected capacity change from 0 to 128560
Jan 23 18:58:21.023156 kernel: loop6: detected capacity change from 0 to 229808
Jan 23 18:58:21.063140 kernel: loop7: detected capacity change from 0 to 1640
Jan 23 18:58:21.070819 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-stackit'.
Jan 23 18:58:21.071256 (sd-merge)[1323]: Merged extensions into '/usr'.
Jan 23 18:58:21.078298 systemd[1]: Reload requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 18:58:21.078312 systemd[1]: Reloading...
Jan 23 18:58:21.144116 zram_generator::config[1348]: No configuration found.
Jan 23 18:58:21.307119 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 18:58:21.307286 systemd[1]: Reloading finished in 228 ms.
Jan 23 18:58:21.324960 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 18:58:21.328216 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 18:58:21.335004 systemd[1]: Starting ensure-sysext.service...
Jan 23 18:58:21.336272 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:58:21.338343 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:58:21.361243 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 18:58:21.361725 systemd[1]: Reload requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)...
Jan 23 18:58:21.361786 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 18:58:21.362008 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 18:58:21.362225 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 18:58:21.362883 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 18:58:21.363080 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
Jan 23 18:58:21.363136 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
Jan 23 18:58:21.363261 systemd[1]: Reloading...
Jan 23 18:58:21.374227 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:58:21.374237 systemd-tmpfiles[1393]: Skipping /boot
Jan 23 18:58:21.385964 systemd-udevd[1394]: Using default interface naming scheme 'v255'.
Jan 23 18:58:21.387195 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:58:21.387204 systemd-tmpfiles[1393]: Skipping /boot
Jan 23 18:58:21.462126 zram_generator::config[1418]: No configuration found.
Jan 23 18:58:21.633120 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 18:58:21.654226 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 23 18:58:21.666116 kernel: ACPI: button: Power Button [PWRF]
Jan 23 18:58:21.718797 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 18:58:21.720463 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 18:58:21.720829 systemd[1]: Reloading finished in 357 ms.
Jan 23 18:58:21.732792 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:58:21.741392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:58:21.746140 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 23 18:58:21.750603 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 18:58:21.750779 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 18:58:21.776156 systemd[1]: Finished ensure-sysext.service.
Jan 23 18:58:21.784614 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:21.787282 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 18:58:21.790274 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 18:58:21.791340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:58:21.792305 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:58:21.799292 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 18:58:21.807702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:58:21.823874 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 18:58:21.831479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:58:21.835010 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm...
Jan 23 18:58:21.836325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:58:21.839765 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 18:58:21.840331 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:58:21.856193 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 18:58:21.861268 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 18:58:21.872524 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:58:21.873976 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 18:58:21.878825 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 18:58:21.880227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:21.881472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:58:21.882123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:58:21.882820 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 18:58:21.883458 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 18:58:21.884570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:58:21.885163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:58:21.885753 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:58:21.886336 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:58:21.887334 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:58:21.897936 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:58:21.901549 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:58:21.907257 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 18:58:21.907317 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 18:58:21.907173 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:58:21.908023 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:58:21.913279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:58:21.915633 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:58:21.921119 kernel: PTP clock support registered Jan 23 18:58:21.927718 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 23 18:58:21.927913 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 23 18:58:21.941525 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:58:21.949330 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:58:21.956137 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:58:21.959073 augenrules[1561]: No rules Jan 23 18:58:21.961186 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:58:21.961557 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 23 18:58:22.017840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:58:22.021129 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 23 18:58:22.029117 kernel: Console: switching to colour dummy device 80x25 Jan 23 18:58:22.033513 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 23 18:58:22.033763 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 18:58:22.033783 kernel: [drm] features: -context_init Jan 23 18:58:22.035807 kernel: [drm] number of scanouts: 1 Jan 23 18:58:22.038494 kernel: [drm] number of cap sets: 0 Jan 23 18:58:22.038178 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:58:22.038466 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:58:22.041259 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 18:58:22.048427 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 23 18:58:22.054002 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 18:58:22.061406 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 18:58:22.092245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:58:22.092707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:58:22.098159 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:58:22.104240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:58:22.121487 ldconfig[1282]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:58:22.121224 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 23 18:58:22.131201 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:58:22.137233 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:58:22.194378 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:58:22.239911 systemd-networkd[1539]: lo: Link UP Jan 23 18:58:22.239915 systemd-networkd[1539]: lo: Gained carrier Jan 23 18:58:22.239952 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:58:22.241986 systemd-networkd[1539]: Enumeration completed Jan 23 18:58:22.243362 systemd-resolved[1543]: Positive Trust Anchors: Jan 23 18:58:22.243667 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:58:22.243898 systemd-networkd[1539]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:58:22.243902 systemd-networkd[1539]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:58:22.244359 systemd-networkd[1539]: eth0: Link UP Jan 23 18:58:22.244456 systemd-networkd[1539]: eth0: Gained carrier Jan 23 18:58:22.244471 systemd-networkd[1539]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:58:22.246153 systemd-resolved[1543]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:58:22.246291 systemd-resolved[1543]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:58:22.247222 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:58:22.248283 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:58:22.254913 systemd-resolved[1543]: Using system hostname 'ci-4459-2-3-4-27da11ba20'. Jan 23 18:58:22.256225 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:58:22.257475 systemd[1]: Reached target network.target - Network. Jan 23 18:58:22.257544 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:58:22.257604 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:58:22.257753 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:58:22.257831 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:58:22.257901 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:58:22.258087 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:58:22.258227 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 23 18:58:22.258290 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:58:22.258361 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:58:22.258384 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:58:22.258432 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:58:22.261191 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:58:22.261314 systemd-networkd[1539]: eth0: DHCPv4 address 10.0.5.167/25, gateway 10.0.5.129 acquired from 10.0.5.129 Jan 23 18:58:22.263362 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:58:22.267255 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:58:22.268176 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:58:22.270039 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:58:22.280803 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:58:22.281725 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:58:22.283207 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:58:22.285861 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:58:22.287731 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:58:22.288574 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:58:22.289617 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:58:22.289654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 23 18:58:22.292685 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 18:58:22.297185 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:58:22.302092 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 18:58:22.307223 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:58:22.310198 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:58:22.315946 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:58:22.320136 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:58:22.321070 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:58:22.322115 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:58:22.323560 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:58:22.328213 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:58:22.334361 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:58:22.337758 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:58:22.352706 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:58:22.355317 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:58:22.355803 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:58:22.357702 extend-filesystems[1604]: Found /dev/vda6 Jan 23 18:58:22.360304 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 23 18:58:22.364588 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:58:22.369905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:58:22.373122 extend-filesystems[1604]: Found /dev/vda9 Jan 23 18:58:22.372997 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:58:22.374239 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:58:22.378669 jq[1601]: false Jan 23 18:58:22.377860 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:58:22.379067 extend-filesystems[1604]: Checking size of /dev/vda9 Jan 23 18:58:22.379238 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:58:22.382700 oslogin_cache_refresh[1605]: Refreshing passwd entry cache Jan 23 18:58:22.385324 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing passwd entry cache Jan 23 18:58:22.398042 jq[1615]: true Jan 23 18:58:22.405548 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting users, quitting Jan 23 18:58:22.405641 oslogin_cache_refresh[1605]: Failure getting users, quitting Jan 23 18:58:22.410126 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:58:22.410054 oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 23 18:58:22.410606 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing group entry cache Jan 23 18:58:22.410229 oslogin_cache_refresh[1605]: Refreshing group entry cache Jan 23 18:58:22.415812 extend-filesystems[1604]: Resized partition /dev/vda9 Jan 23 18:58:22.425859 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting groups, quitting Jan 23 18:58:22.425859 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:58:22.422719 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:58:22.419427 oslogin_cache_refresh[1605]: Failure getting groups, quitting Jan 23 18:58:22.422932 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:58:22.419441 oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:58:22.420874 chronyd[1596]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 18:58:22.427916 extend-filesystems[1639]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:58:22.428417 chronyd[1596]: Loaded seccomp filter (level 2) Jan 23 18:58:22.430395 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 18:58:22.433401 (ntainerd)[1628]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 18:58:22.440512 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 12499963 blocks Jan 23 18:58:22.450789 dbus-daemon[1599]: [system] SELinux support is enabled Jan 23 18:58:22.452426 update_engine[1613]: I20260123 18:58:22.451232 1613 main.cc:92] Flatcar Update Engine starting Jan 23 18:58:22.452289 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 23 18:58:22.455533 jq[1631]: true Jan 23 18:58:22.461464 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:58:22.461511 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:58:22.463700 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:58:22.463723 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:58:22.465038 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:58:22.466176 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:58:22.470899 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:58:22.478416 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:58:22.486856 update_engine[1613]: I20260123 18:58:22.477209 1613 update_check_scheduler.cc:74] Next update check in 11m28s Jan 23 18:58:22.525671 systemd-logind[1612]: New seat seat0. Jan 23 18:58:22.603421 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:58:22.898656 systemd-logind[1612]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 18:58:22.898674 systemd-logind[1612]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:58:22.899225 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 23 18:58:22.930280 containerd[1628]: time="2026-01-23T18:58:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:58:22.947369 containerd[1628]: time="2026-01-23T18:58:22.947343980Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 18:58:22.948577 sshd_keygen[1640]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:58:22.956878 containerd[1628]: time="2026-01-23T18:58:22.956794763Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.942µs" Jan 23 18:58:22.956878 containerd[1628]: time="2026-01-23T18:58:22.956823128Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:58:22.956878 containerd[1628]: time="2026-01-23T18:58:22.956839828Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967551184Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967577437Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967597940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967641962Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967652495Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967837379Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967849030Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967857815Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967864987Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:58:22.967968 containerd[1628]: time="2026-01-23T18:58:22.967924219Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:58:22.968220 bash[1659]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:58:22.968993 containerd[1628]: time="2026-01-23T18:58:22.968974735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:58:22.969077 containerd[1628]: time="2026-01-23T18:58:22.969065752Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:58:22.969134 containerd[1628]: time="2026-01-23T18:58:22.969126183Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:58:22.969234 containerd[1628]: 
time="2026-01-23T18:58:22.969183749Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:58:22.969532 containerd[1628]: time="2026-01-23T18:58:22.969522365Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:58:22.969649 containerd[1628]: time="2026-01-23T18:58:22.969617615Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:58:22.971232 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:58:22.978093 systemd[1]: Starting sshkeys.service... Jan 23 18:58:22.979830 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:58:22.984295 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:58:22.995960 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:58:23.025752 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:58:22.996154 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:58:23.002534 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 18:58:23.009351 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 18:58:23.011416 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:58:23.029610 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:58:23.032479 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:58:23.034352 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:58:23.035762 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 23 18:58:23.052562 containerd[1628]: time="2026-01-23T18:58:23.052513954Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:58:23.052639 containerd[1628]: time="2026-01-23T18:58:23.052581098Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:58:23.052639 containerd[1628]: time="2026-01-23T18:58:23.052602202Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:58:23.052639 containerd[1628]: time="2026-01-23T18:58:23.052623417Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:58:23.052639 containerd[1628]: time="2026-01-23T18:58:23.052636280Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:58:23.052639 containerd[1628]: time="2026-01-23T18:58:23.052645923Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:58:23.052780 containerd[1628]: time="2026-01-23T18:58:23.052659091Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:58:23.052780 containerd[1628]: time="2026-01-23T18:58:23.052671655Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:58:23.052780 containerd[1628]: time="2026-01-23T18:58:23.052688667Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:58:23.052780 containerd[1628]: time="2026-01-23T18:58:23.052698544Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:58:23.052780 containerd[1628]: time="2026-01-23T18:58:23.052707670Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 
18:58:23.052780 containerd[1628]: time="2026-01-23T18:58:23.052720779Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:58:23.052889 containerd[1628]: time="2026-01-23T18:58:23.052850372Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:58:23.052889 containerd[1628]: time="2026-01-23T18:58:23.052868440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:58:23.052889 containerd[1628]: time="2026-01-23T18:58:23.052887042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:58:23.052940 containerd[1628]: time="2026-01-23T18:58:23.052899845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:58:23.052940 containerd[1628]: time="2026-01-23T18:58:23.052910969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:58:23.052940 containerd[1628]: time="2026-01-23T18:58:23.052923198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:58:23.052940 containerd[1628]: time="2026-01-23T18:58:23.052932791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:58:23.053051 containerd[1628]: time="2026-01-23T18:58:23.052959645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:58:23.053051 containerd[1628]: time="2026-01-23T18:58:23.052974221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:58:23.053051 containerd[1628]: time="2026-01-23T18:58:23.052984270Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:58:23.053051 containerd[1628]: time="2026-01-23T18:58:23.052994216Z" 
level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:58:23.053051 containerd[1628]: time="2026-01-23T18:58:23.053038613Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:58:23.053051 containerd[1628]: time="2026-01-23T18:58:23.053050408Z" level=info msg="Start snapshots syncer" Jan 23 18:58:23.053166 containerd[1628]: time="2026-01-23T18:58:23.053073570Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:58:23.053408 containerd[1628]: time="2026-01-23T18:58:23.053367031Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 18:58:23.053521 containerd[1628]: time="2026-01-23T18:58:23.053423067Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.054789834Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.054945506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.054979245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.054993212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055004055Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055017237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055027737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055039265Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055071635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055082879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055109802Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055143806Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055162738Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 18:58:23.056123 containerd[1628]: time="2026-01-23T18:58:23.055171940Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055181393Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055190426Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055201209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055219039Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055237394Z" level=info msg="runtime interface created"
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055242935Z" level=info msg="created NRI interface"
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055251321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055267959Z" level=info msg="Connect containerd service"
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055288643Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 18:58:23.056391 containerd[1628]: time="2026-01-23T18:58:23.055944897Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 18:58:23.300313 kernel: EXT4-fs (vda9): resized filesystem to 12499963
Jan 23 18:58:23.332472 extend-filesystems[1639]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 23 18:58:23.332472 extend-filesystems[1639]: old_desc_blocks = 1, new_desc_blocks = 6
Jan 23 18:58:23.332472 extend-filesystems[1639]: The filesystem on /dev/vda9 is now 12499963 (4k) blocks long.
Jan 23 18:58:23.341816 extend-filesystems[1604]: Resized filesystem in /dev/vda9
Jan 23 18:58:23.339638 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 18:58:23.340030 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 18:58:23.409171 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 18:58:23.453039 containerd[1628]: time="2026-01-23T18:58:23.452859879Z" level=info msg="Start subscribing containerd event"
Jan 23 18:58:23.453597 containerd[1628]: time="2026-01-23T18:58:23.452961323Z" level=info msg="Start recovering state"
Jan 23 18:58:23.453728 containerd[1628]: time="2026-01-23T18:58:23.453692696Z" level=info msg="Start event monitor"
Jan 23 18:58:23.453806 containerd[1628]: time="2026-01-23T18:58:23.453731247Z" level=info msg="Start cni network conf syncer for default"
Jan 23 18:58:23.453806 containerd[1628]: time="2026-01-23T18:58:23.453758253Z" level=info msg="Start streaming server"
Jan 23 18:58:23.453806 containerd[1628]: time="2026-01-23T18:58:23.453782398Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 23 18:58:23.454028 containerd[1628]: time="2026-01-23T18:58:23.453799154Z" level=info msg="runtime interface starting up..."
Jan 23 18:58:23.454028 containerd[1628]: time="2026-01-23T18:58:23.453821358Z" level=info msg="starting plugins..."
Jan 23 18:58:23.454223 containerd[1628]: time="2026-01-23T18:58:23.453845443Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 18:58:23.454644 containerd[1628]: time="2026-01-23T18:58:23.454598970Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 18:58:23.454742 containerd[1628]: time="2026-01-23T18:58:23.454713218Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 18:58:23.456290 containerd[1628]: time="2026-01-23T18:58:23.456257929Z" level=info msg="containerd successfully booted in 0.526289s"
Jan 23 18:58:23.456483 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 18:58:23.561445 systemd-networkd[1539]: eth0: Gained IPv6LL
Jan 23 18:58:23.568754 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 18:58:23.574287 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 18:58:23.580653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:58:23.587669 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 18:58:23.661315 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 18:58:24.042131 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 18:58:25.159799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:25.174648 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:58:25.426136 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 18:58:26.056138 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 18:58:26.061902 kubelet[1725]: E0123 18:58:26.061863 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 18:58:26.067507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:58:26.067727 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 18:58:26.069911 systemd[1]: kubelet.service: Consumed 1.422s CPU time, 267.4M memory peak.
Jan 23 18:58:29.433291 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 18:58:29.442827 coreos-metadata[1598]: Jan 23 18:58:29.442 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 18:58:29.481474 coreos-metadata[1598]: Jan 23 18:58:29.481 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 23 18:58:30.079186 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 18:58:30.095758 coreos-metadata[1685]: Jan 23 18:58:30.095 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 18:58:30.127461 coreos-metadata[1685]: Jan 23 18:58:30.127 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 23 18:58:30.197073 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 18:58:30.200209 systemd[1]: Started sshd@0-10.0.5.167:22-20.161.92.111:43892.service - OpenSSH per-connection server daemon (20.161.92.111:43892).
Jan 23 18:58:30.876096 sshd[1743]: Accepted publickey for core from 20.161.92.111 port 43892 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs
Jan 23 18:58:30.880060 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:30.896567 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 18:58:30.898923 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 18:58:30.919854 systemd-logind[1612]: New session 1 of user core.
Jan 23 18:58:30.942709 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 18:58:30.950222 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 18:58:30.968808 (systemd)[1748]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 18:58:30.973778 systemd-logind[1612]: New session c1 of user core.
Jan 23 18:58:31.143731 systemd[1748]: Queued start job for default target default.target.
Jan 23 18:58:31.151283 systemd[1748]: Created slice app.slice - User Application Slice.
Jan 23 18:58:31.151309 systemd[1748]: Reached target paths.target - Paths.
Jan 23 18:58:31.151342 systemd[1748]: Reached target timers.target - Timers.
Jan 23 18:58:31.152435 systemd[1748]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 18:58:31.196613 systemd[1748]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 18:58:31.196948 systemd[1748]: Reached target sockets.target - Sockets.
Jan 23 18:58:31.197077 systemd[1748]: Reached target basic.target - Basic System.
Jan 23 18:58:31.197210 systemd[1748]: Reached target default.target - Main User Target.
Jan 23 18:58:31.197326 systemd[1748]: Startup finished in 210ms.
Jan 23 18:58:31.197332 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 18:58:31.209389 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 18:58:31.624257 coreos-metadata[1598]: Jan 23 18:58:31.624 INFO Fetch successful
Jan 23 18:58:31.624257 coreos-metadata[1598]: Jan 23 18:58:31.624 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 23 18:58:31.663590 systemd[1]: Started sshd@1-10.0.5.167:22-20.161.92.111:43904.service - OpenSSH per-connection server daemon (20.161.92.111:43904).
Jan 23 18:58:32.225680 coreos-metadata[1685]: Jan 23 18:58:32.225 INFO Fetch successful
Jan 23 18:58:32.225680 coreos-metadata[1685]: Jan 23 18:58:32.225 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 23 18:58:32.334858 sshd[1759]: Accepted publickey for core from 20.161.92.111 port 43904 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs
Jan 23 18:58:32.337419 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:32.345823 systemd-logind[1612]: New session 2 of user core.
Jan 23 18:58:32.355631 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 18:58:32.770467 coreos-metadata[1598]: Jan 23 18:58:32.770 INFO Fetch successful
Jan 23 18:58:32.770467 coreos-metadata[1598]: Jan 23 18:58:32.770 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 23 18:58:32.782843 sshd[1762]: Connection closed by 20.161.92.111 port 43904
Jan 23 18:58:32.783824 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:32.792611 systemd[1]: sshd@1-10.0.5.167:22-20.161.92.111:43904.service: Deactivated successfully.
Jan 23 18:58:32.797045 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 18:58:32.799086 systemd-logind[1612]: Session 2 logged out. Waiting for processes to exit.
Jan 23 18:58:32.802370 systemd-logind[1612]: Removed session 2.
Jan 23 18:58:32.902713 systemd[1]: Started sshd@2-10.0.5.167:22-20.161.92.111:57660.service - OpenSSH per-connection server daemon (20.161.92.111:57660).
Jan 23 18:58:33.370015 coreos-metadata[1685]: Jan 23 18:58:33.369 INFO Fetch successful
Jan 23 18:58:33.373705 unknown[1685]: wrote ssh authorized keys file for user: core
Jan 23 18:58:33.377481 coreos-metadata[1598]: Jan 23 18:58:33.377 INFO Fetch successful
Jan 23 18:58:33.377481 coreos-metadata[1598]: Jan 23 18:58:33.377 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 23 18:58:33.414271 update-ssh-keys[1772]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 18:58:33.415842 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 18:58:33.419517 systemd[1]: Finished sshkeys.service.
Jan 23 18:58:33.545815 sshd[1768]: Accepted publickey for core from 20.161.92.111 port 57660 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs
Jan 23 18:58:33.548852 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:33.560121 systemd-logind[1612]: New session 3 of user core.
Jan 23 18:58:33.573426 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 18:58:33.959343 coreos-metadata[1598]: Jan 23 18:58:33.959 INFO Fetch successful
Jan 23 18:58:33.959343 coreos-metadata[1598]: Jan 23 18:58:33.959 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 23 18:58:33.979327 sshd[1775]: Connection closed by 20.161.92.111 port 57660
Jan 23 18:58:33.980459 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:33.990020 systemd[1]: sshd@2-10.0.5.167:22-20.161.92.111:57660.service: Deactivated successfully.
Jan 23 18:58:33.993984 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 18:58:33.996979 systemd-logind[1612]: Session 3 logged out. Waiting for processes to exit.
Jan 23 18:58:34.000585 systemd-logind[1612]: Removed session 3.
Jan 23 18:58:34.541922 coreos-metadata[1598]: Jan 23 18:58:34.541 INFO Fetch successful
Jan 23 18:58:34.541922 coreos-metadata[1598]: Jan 23 18:58:34.541 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 23 18:58:35.145565 coreos-metadata[1598]: Jan 23 18:58:35.145 INFO Fetch successful
Jan 23 18:58:35.189823 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 18:58:35.192058 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 18:58:35.192408 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 18:58:35.193151 systemd[1]: Startup finished in 4.056s (kernel) + 14.669s (initrd) + 15.907s (userspace) = 34.634s.
Jan 23 18:58:36.231727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 18:58:36.236453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:58:36.405953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:36.415456 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:58:36.448982 kubelet[1792]: E0123 18:58:36.448939 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 18:58:36.452350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:58:36.452480 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 18:58:36.452911 systemd[1]: kubelet.service: Consumed 159ms CPU time, 110.3M memory peak.
Jan 23 18:58:44.093083 systemd[1]: Started sshd@3-10.0.5.167:22-20.161.92.111:33558.service - OpenSSH per-connection server daemon (20.161.92.111:33558).
Jan 23 18:58:44.747738 sshd[1801]: Accepted publickey for core from 20.161.92.111 port 33558 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs
Jan 23 18:58:44.750457 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:44.763193 systemd-logind[1612]: New session 4 of user core.
Jan 23 18:58:44.770428 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 18:58:45.182201 sshd[1804]: Connection closed by 20.161.92.111 port 33558
Jan 23 18:58:45.183523 sshd-session[1801]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:45.192850 systemd[1]: sshd@3-10.0.5.167:22-20.161.92.111:33558.service: Deactivated successfully.
Jan 23 18:58:45.197813 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 18:58:45.202316 systemd-logind[1612]: Session 4 logged out. Waiting for processes to exit.
Jan 23 18:58:45.204197 systemd-logind[1612]: Removed session 4.
Jan 23 18:58:45.291281 systemd[1]: Started sshd@4-10.0.5.167:22-20.161.92.111:33560.service - OpenSSH per-connection server daemon (20.161.92.111:33560).
Jan 23 18:58:45.936683 sshd[1810]: Accepted publickey for core from 20.161.92.111 port 33560 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs
Jan 23 18:58:45.939610 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:45.950900 systemd-logind[1612]: New session 5 of user core.
Jan 23 18:58:45.960400 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 18:58:46.223403 chronyd[1596]: Selected source PHC0
Jan 23 18:58:46.376285 sshd[1813]: Connection closed by 20.161.92.111 port 33560
Jan 23 18:58:46.377518 sshd-session[1810]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:46.387792 systemd[1]: sshd@4-10.0.5.167:22-20.161.92.111:33560.service: Deactivated successfully.
Jan 23 18:58:46.393260 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 18:58:46.396535 systemd-logind[1612]: Session 5 logged out. Waiting for processes to exit.
Jan 23 18:58:46.400201 systemd-logind[1612]: Removed session 5.
Jan 23 18:58:46.481979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 18:58:46.505722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:58:46.508649 systemd[1]: Started sshd@5-10.0.5.167:22-20.161.92.111:33566.service - OpenSSH per-connection server daemon (20.161.92.111:33566).
Jan 23 18:58:46.947330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:46.977228 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:58:47.068194 kubelet[1830]: E0123 18:58:47.068099 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 18:58:47.072413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:58:47.072783 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 18:58:47.073541 systemd[1]: kubelet.service: Consumed 257ms CPU time, 110.2M memory peak.
Jan 23 18:58:47.205329 sshd[1820]: Accepted publickey for core from 20.161.92.111 port 33566 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs
Jan 23 18:58:47.209161 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:47.225279 systemd-logind[1612]: New session 6 of user core.
Jan 23 18:58:47.230505 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 18:58:47.688560 sshd[1837]: Connection closed by 20.161.92.111 port 33566
Jan 23 18:58:47.689142 sshd-session[1820]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:47.692528 systemd[1]: sshd@5-10.0.5.167:22-20.161.92.111:33566.service: Deactivated successfully.
Jan 23 18:58:47.693890 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 18:58:47.694582 systemd-logind[1612]: Session 6 logged out. Waiting for processes to exit.
Jan 23 18:58:47.696263 systemd-logind[1612]: Removed session 6.
Jan 23 18:58:47.822774 systemd[1]: Started sshd@6-10.0.5.167:22-20.161.92.111:33568.service - OpenSSH per-connection server daemon (20.161.92.111:33568).
Jan 23 18:58:48.505958 sshd[1843]: Accepted publickey for core from 20.161.92.111 port 33568 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs
Jan 23 18:58:48.508921 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:48.521287 systemd-logind[1612]: New session 7 of user core.
Jan 23 18:58:48.531493 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 18:58:48.898418 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 18:58:48.898864 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:58:48.918080 sudo[1847]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:49.022399 sshd[1846]: Connection closed by 20.161.92.111 port 33568
Jan 23 18:58:49.023919 sshd-session[1843]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:49.035779 systemd[1]: sshd@6-10.0.5.167:22-20.161.92.111:33568.service: Deactivated successfully.
Jan 23 18:58:49.040061 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 18:58:49.042478 systemd-logind[1612]: Session 7 logged out. Waiting for processes to exit.
Jan 23 18:58:49.046047 systemd-logind[1612]: Removed session 7.
Jan 23 18:58:49.148170 systemd[1]: Started sshd@7-10.0.5.167:22-20.161.92.111:33580.service - OpenSSH per-connection server daemon (20.161.92.111:33580).
Jan 23 18:58:49.860154 sshd[1853]: Accepted publickey for core from 20.161.92.111 port 33580 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs
Jan 23 18:58:49.863839 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:49.870200 systemd-logind[1612]: New session 8 of user core.
Jan 23 18:58:49.884732 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 18:58:50.233575 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 18:58:50.234831 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:58:50.246144 sudo[1858]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:50.260796 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 18:58:50.261566 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:58:50.281810 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 18:58:50.335570 augenrules[1880]: No rules
Jan 23 18:58:50.336273 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 18:58:50.336651 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 18:58:50.337972 sudo[1857]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:50.442855 sshd[1856]: Connection closed by 20.161.92.111 port 33580
Jan 23 18:58:50.442727 sshd-session[1853]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:50.449601 systemd[1]: sshd@7-10.0.5.167:22-20.161.92.111:33580.service: Deactivated successfully.
Jan 23 18:58:50.452550 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 18:58:50.455503 systemd-logind[1612]: Session 8 logged out. Waiting for processes to exit.
Jan 23 18:58:50.457073 systemd-logind[1612]: Removed session 8.
Jan 23 18:58:50.564741 systemd[1]: Started sshd@8-10.0.5.167:22-20.161.92.111:33590.service - OpenSSH per-connection server daemon (20.161.92.111:33590).
Jan 23 18:58:51.239206 sshd[1889]: Accepted publickey for core from 20.161.92.111 port 33590 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs
Jan 23 18:58:51.241415 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:51.254240 systemd-logind[1612]: New session 9 of user core.
Jan 23 18:58:51.264514 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 18:58:51.608960 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 18:58:51.609871 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:58:52.807659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:52.808750 systemd[1]: kubelet.service: Consumed 257ms CPU time, 110.2M memory peak.
Jan 23 18:58:52.813964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:58:52.865607 systemd[1]: Reload requested from client PID 1929 ('systemctl') (unit session-9.scope)...
Jan 23 18:58:52.865627 systemd[1]: Reloading...
Jan 23 18:58:52.974206 zram_generator::config[1970]: No configuration found.
Jan 23 18:58:53.171324 systemd[1]: Reloading finished in 305 ms.
Jan 23 18:58:53.219529 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 18:58:53.219740 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 18:58:53.220141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:53.220254 systemd[1]: kubelet.service: Consumed 110ms CPU time, 98.3M memory peak.
Jan 23 18:58:53.222289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:58:54.513514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:54.531722 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 18:58:54.588053 kubelet[2023]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 18:58:54.588053 kubelet[2023]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 18:58:54.588053 kubelet[2023]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 18:58:54.588416 kubelet[2023]: I0123 18:58:54.588149 2023 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 18:58:55.160452 kubelet[2023]: I0123 18:58:55.159767 2023 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 23 18:58:55.160452 kubelet[2023]: I0123 18:58:55.159788 2023 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 18:58:55.160452 kubelet[2023]: I0123 18:58:55.160014 2023 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 18:58:55.215892 kubelet[2023]: I0123 18:58:55.215791 2023 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 18:58:55.226123 kubelet[2023]: I0123 18:58:55.226047 2023 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 18:58:55.229430 kubelet[2023]: I0123 18:58:55.229397 2023 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 18:58:55.229574 kubelet[2023]: I0123 18:58:55.229555 2023 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 18:58:55.229989 kubelet[2023]: I0123 18:58:55.229570 2023 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.5.167","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 18:58:55.229989 kubelet[2023]: I0123 18:58:55.229723 2023 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 18:58:55.229989 kubelet[2023]: I0123 18:58:55.229730 2023 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 18:58:55.231092 kubelet[2023]: I0123 18:58:55.231020 2023 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:58:55.235365 kubelet[2023]: I0123 18:58:55.235315 2023 kubelet.go:480] "Attempting to sync node with API server"
Jan 23 18:58:55.235365 kubelet[2023]: I0123 18:58:55.235329 2023 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 18:58:55.235563 kubelet[2023]: I0123 18:58:55.235423 2023 kubelet.go:386] "Adding apiserver pod source"
Jan 23 18:58:55.235563 kubelet[2023]: I0123 18:58:55.235433 2023 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 18:58:55.239203 kubelet[2023]: E0123 18:58:55.239119 2023 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:58:55.239203 kubelet[2023]: E0123 18:58:55.239166 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:58:55.247030 kubelet[2023]: I0123 18:58:55.246993 2023 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 18:58:55.248722 kubelet[2023]: I0123 18:58:55.248688 2023 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 18:58:55.251288 kubelet[2023]: W0123 18:58:55.251038 2023 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 18:58:55.252080 kubelet[2023]: E0123 18:58:55.252027 2023 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 18:58:55.253131 kubelet[2023]: E0123 18:58:55.252753 2023 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.5.167\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 18:58:55.265844 kubelet[2023]: I0123 18:58:55.265801 2023 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 18:58:55.266261 kubelet[2023]: I0123 18:58:55.265926 2023 server.go:1289] "Started kubelet"
Jan 23 18:58:55.273223 kubelet[2023]: I0123 18:58:55.273202 2023 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 18:58:55.274910 kubelet[2023]: I0123 18:58:55.274898 2023 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 18:58:55.276942 kubelet[2023]: I0123 18:58:55.276842 2023 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 18:58:55.277458 kubelet[2023]: I0123 18:58:55.277431 2023 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 18:58:55.279990 kubelet[2023]: I0123 18:58:55.273225 2023 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 18:58:55.281990 kubelet[2023]: I0123 18:58:55.281976 2023 server.go:317] "Adding debug handlers to kubelet server"
Jan 23 18:58:55.282697 kubelet[2023]: I0123 18:58:55.282686 2023 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 18:58:55.282852 kubelet[2023]: E0123 18:58:55.282841 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:55.283408 kubelet[2023]: I0123 18:58:55.283398 2023 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 18:58:55.283514 kubelet[2023]: I0123 18:58:55.283509 2023 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 18:58:55.288765 kubelet[2023]: I0123 18:58:55.288621 2023 factory.go:223] Registration of the systemd container factory successfully
Jan 23 18:58:55.288927 kubelet[2023]: I0123 18:58:55.288910 2023 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 18:58:55.291088 kubelet[2023]: I0123 18:58:55.291077 2023 factory.go:223] Registration of the containerd container factory successfully
Jan 23 18:58:55.309789 kubelet[2023]: E0123 18:58:55.309739 2023 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.5.167\" not found" node="10.0.5.167"
Jan 23 18:58:55.313761 kubelet[2023]: I0123 18:58:55.311364 2023 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 18:58:55.313761 kubelet[2023]: I0123 18:58:55.311401 2023 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 18:58:55.313761 kubelet[2023]: I0123 18:58:55.311427 2023 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:58:55.315121 kubelet[2023]: I0123 18:58:55.314779 2023 policy_none.go:49] "None policy: Start"
Jan 23 18:58:55.315121 kubelet[2023]: I0123 18:58:55.314809 2023 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 18:58:55.315121 kubelet[2023]: I0123 18:58:55.314825 2023 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 18:58:55.327888 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 18:58:55.347436 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 18:58:55.353632 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 18:58:55.362063 kubelet[2023]: E0123 18:58:55.362040 2023 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 18:58:55.365463 kubelet[2023]: I0123 18:58:55.365449 2023 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 18:58:55.366122 kubelet[2023]: I0123 18:58:55.365875 2023 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 18:58:55.367894 kubelet[2023]: I0123 18:58:55.367135 2023 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 18:58:55.368323 kubelet[2023]: E0123 18:58:55.368306 2023 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 18:58:55.368433 kubelet[2023]: E0123 18:58:55.368423 2023 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.5.167\" not found"
Jan 23 18:58:55.373816 kubelet[2023]: I0123 18:58:55.373785 2023 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 23 18:58:55.375723 kubelet[2023]: I0123 18:58:55.375707 2023 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 23 18:58:55.375836 kubelet[2023]: I0123 18:58:55.375828 2023 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 23 18:58:55.375933 kubelet[2023]: I0123 18:58:55.375925 2023 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 18:58:55.376142 kubelet[2023]: I0123 18:58:55.376134 2023 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 23 18:58:55.376294 kubelet[2023]: E0123 18:58:55.376283 2023 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 23 18:58:55.471375 kubelet[2023]: I0123 18:58:55.471217 2023 kubelet_node_status.go:75] "Attempting to register node" node="10.0.5.167"
Jan 23 18:58:55.480235 kubelet[2023]: I0123 18:58:55.480186 2023 kubelet_node_status.go:78] "Successfully registered node" node="10.0.5.167"
Jan 23 18:58:55.480235 kubelet[2023]: E0123 18:58:55.480224 2023 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.5.167\": node \"10.0.5.167\" not found"
Jan 23 18:58:55.489136 kubelet[2023]: E0123 18:58:55.488934 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:55.564555 sudo[1893]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:55.589716 kubelet[2023]: E0123 18:58:55.589649 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:55.662752 sshd[1892]: Connection closed by 20.161.92.111 port 33590
Jan 23 18:58:55.662517 sshd-session[1889]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:55.668987 systemd[1]: sshd@8-10.0.5.167:22-20.161.92.111:33590.service: Deactivated successfully.
Jan 23 18:58:55.672744 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 18:58:55.673286 systemd[1]: session-9.scope: Consumed 711ms CPU time, 75.6M memory peak.
Jan 23 18:58:55.675336 systemd-logind[1612]: Session 9 logged out. Waiting for processes to exit.
Jan 23 18:58:55.677432 systemd-logind[1612]: Removed session 9.
Jan 23 18:58:55.691727 kubelet[2023]: E0123 18:58:55.691652 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:55.792261 kubelet[2023]: E0123 18:58:55.792191 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:55.893274 kubelet[2023]: E0123 18:58:55.893198 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:55.994380 kubelet[2023]: E0123 18:58:55.994287 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:56.095446 kubelet[2023]: E0123 18:58:56.095225 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:56.161601 kubelet[2023]: I0123 18:58:56.161537 2023 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 23 18:58:56.161777 kubelet[2023]: I0123 18:58:56.161709 2023 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 23 18:58:56.161777 kubelet[2023]: I0123 18:58:56.161745 2023 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 23 18:58:56.196307 kubelet[2023]: E0123 18:58:56.196241 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:56.239892 kubelet[2023]: E0123 18:58:56.239827 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:58:56.297239 kubelet[2023]: E0123 18:58:56.297182 2023 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.5.167\" not found"
Jan 23 18:58:56.399733 kubelet[2023]: I0123 18:58:56.399542 2023 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 23 18:58:56.400899 containerd[1628]: time="2026-01-23T18:58:56.400811809Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 18:58:56.401529 kubelet[2023]: I0123 18:58:56.401474 2023 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 23 18:58:57.241775 kubelet[2023]: E0123 18:58:57.241689 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:58:57.242559 kubelet[2023]: I0123 18:58:57.242237 2023 apiserver.go:52] "Watching apiserver"
Jan 23 18:58:57.271790 systemd[1]: Created slice kubepods-besteffort-pod8e9ec645_d532_4851_8195_83ef29fdce69.slice - libcontainer container kubepods-besteffort-pod8e9ec645_d532_4851_8195_83ef29fdce69.slice.
Jan 23 18:58:57.284373 kubelet[2023]: I0123 18:58:57.284334 2023 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 18:58:57.290677 systemd[1]: Created slice kubepods-burstable-pod8ac315e3_d97e_4113_bc9a_097f2adf7bc7.slice - libcontainer container kubepods-burstable-pod8ac315e3_d97e_4113_bc9a_097f2adf7bc7.slice.
Jan 23 18:58:57.296368 kubelet[2023]: I0123 18:58:57.296325 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-run\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296368 kubelet[2023]: I0123 18:58:57.296366 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-etc-cni-netd\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296511 kubelet[2023]: I0123 18:58:57.296396 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-clustermesh-secrets\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296511 kubelet[2023]: I0123 18:58:57.296417 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-host-proc-sys-net\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296511 kubelet[2023]: I0123 18:58:57.296444 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-host-proc-sys-kernel\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296511 kubelet[2023]: I0123 18:58:57.296466 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e9ec645-d532-4851-8195-83ef29fdce69-xtables-lock\") pod \"kube-proxy-p9msv\" (UID: \"8e9ec645-d532-4851-8195-83ef29fdce69\") " pod="kube-system/kube-proxy-p9msv"
Jan 23 18:58:57.296511 kubelet[2023]: I0123 18:58:57.296487 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e9ec645-d532-4851-8195-83ef29fdce69-lib-modules\") pod \"kube-proxy-p9msv\" (UID: \"8e9ec645-d532-4851-8195-83ef29fdce69\") " pod="kube-system/kube-proxy-p9msv"
Jan 23 18:58:57.296687 kubelet[2023]: I0123 18:58:57.296506 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-bpf-maps\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296687 kubelet[2023]: I0123 18:58:57.296530 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-cgroup\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296687 kubelet[2023]: I0123 18:58:57.296550 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-xtables-lock\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296687 kubelet[2023]: I0123 18:58:57.296571 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-hubble-tls\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296687 kubelet[2023]: I0123 18:58:57.296592 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cni-path\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296687 kubelet[2023]: I0123 18:58:57.296642 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-hostproc\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296881 kubelet[2023]: I0123 18:58:57.296666 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-lib-modules\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296881 kubelet[2023]: I0123 18:58:57.296688 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-config-path\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296881 kubelet[2023]: I0123 18:58:57.296710 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftx5\" (UniqueName: \"kubernetes.io/projected/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-kube-api-access-tftx5\") pod \"cilium-xc7n4\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " pod="kube-system/cilium-xc7n4"
Jan 23 18:58:57.296881 kubelet[2023]: I0123 18:58:57.296733 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e9ec645-d532-4851-8195-83ef29fdce69-kube-proxy\") pod \"kube-proxy-p9msv\" (UID: \"8e9ec645-d532-4851-8195-83ef29fdce69\") " pod="kube-system/kube-proxy-p9msv"
Jan 23 18:58:57.296881 kubelet[2023]: I0123 18:58:57.296754 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dplpz\" (UniqueName: \"kubernetes.io/projected/8e9ec645-d532-4851-8195-83ef29fdce69-kube-api-access-dplpz\") pod \"kube-proxy-p9msv\" (UID: \"8e9ec645-d532-4851-8195-83ef29fdce69\") " pod="kube-system/kube-proxy-p9msv"
Jan 23 18:58:57.589851 containerd[1628]: time="2026-01-23T18:58:57.589695193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9msv,Uid:8e9ec645-d532-4851-8195-83ef29fdce69,Namespace:kube-system,Attempt:0,}"
Jan 23 18:58:57.601798 containerd[1628]: time="2026-01-23T18:58:57.601728087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xc7n4,Uid:8ac315e3-d97e-4113-bc9a-097f2adf7bc7,Namespace:kube-system,Attempt:0,}"
Jan 23 18:58:58.242479 kubelet[2023]: E0123 18:58:58.242212 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:58:58.264617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2398248899.mount: Deactivated successfully.
Jan 23 18:58:58.289152 containerd[1628]: time="2026-01-23T18:58:58.288539165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 18:58:58.291325 containerd[1628]: time="2026-01-23T18:58:58.291193220Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 18:58:58.293265 containerd[1628]: time="2026-01-23T18:58:58.293199312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 23 18:58:58.294681 containerd[1628]: time="2026-01-23T18:58:58.294626280Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158"
Jan 23 18:58:58.296351 containerd[1628]: time="2026-01-23T18:58:58.296264526Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 18:58:58.303132 containerd[1628]: time="2026-01-23T18:58:58.302308422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 18:58:58.304946 containerd[1628]: time="2026-01-23T18:58:58.304889479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 696.167978ms"
Jan 23 18:58:58.307171 containerd[1628]: time="2026-01-23T18:58:58.307077523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 710.019911ms"
Jan 23 18:58:58.345141 containerd[1628]: time="2026-01-23T18:58:58.344430354Z" level=info msg="connecting to shim 77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83" address="unix:///run/containerd/s/86a0da514203ae9b0cb6452802e1243334dfd7b822264f605f5c959f6020fcb7" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:58:58.347615 containerd[1628]: time="2026-01-23T18:58:58.347558557Z" level=info msg="connecting to shim a5b719fcd385c1a5b1b78de02d6353f964621805c15055817b5ed1a76e6a6458" address="unix:///run/containerd/s/0aa027434fedb107468efbe257ea65ecf9b1da08578d803641d1c8ec664afb62" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:58:58.379623 systemd[1]: Started cri-containerd-77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83.scope - libcontainer container 77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83.
Jan 23 18:58:58.393579 systemd[1]: Started cri-containerd-a5b719fcd385c1a5b1b78de02d6353f964621805c15055817b5ed1a76e6a6458.scope - libcontainer container a5b719fcd385c1a5b1b78de02d6353f964621805c15055817b5ed1a76e6a6458.
Jan 23 18:58:58.444327 containerd[1628]: time="2026-01-23T18:58:58.444256415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xc7n4,Uid:8ac315e3-d97e-4113-bc9a-097f2adf7bc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\""
Jan 23 18:58:58.446012 containerd[1628]: time="2026-01-23T18:58:58.445984086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9msv,Uid:8e9ec645-d532-4851-8195-83ef29fdce69,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5b719fcd385c1a5b1b78de02d6353f964621805c15055817b5ed1a76e6a6458\""
Jan 23 18:58:58.446798 containerd[1628]: time="2026-01-23T18:58:58.446584584Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 23 18:58:59.243526 kubelet[2023]: E0123 18:58:59.243436 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:00.245119 kubelet[2023]: E0123 18:59:00.245055 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:01.245714 kubelet[2023]: E0123 18:59:01.245570 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:02.245962 kubelet[2023]: E0123 18:59:02.245931 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:03.201733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742496850.mount: Deactivated successfully.
Jan 23 18:59:03.247334 kubelet[2023]: E0123 18:59:03.247290 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:04.248072 kubelet[2023]: E0123 18:59:04.248037 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:05.249338 kubelet[2023]: E0123 18:59:05.249210 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:05.422511 containerd[1628]: time="2026-01-23T18:59:05.422408308Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:05.425462 containerd[1628]: time="2026-01-23T18:59:05.425395905Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 23 18:59:05.427393 containerd[1628]: time="2026-01-23T18:59:05.427306459Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:05.432396 containerd[1628]: time="2026-01-23T18:59:05.432079260Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.985457208s"
Jan 23 18:59:05.432396 containerd[1628]: time="2026-01-23T18:59:05.432187292Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 23 18:59:05.435140 containerd[1628]: time="2026-01-23T18:59:05.434379189Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 23 18:59:05.440030 containerd[1628]: time="2026-01-23T18:59:05.439962378Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 18:59:05.459039 containerd[1628]: time="2026-01-23T18:59:05.457878818Z" level=info msg="Container 40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:05.463791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4150285525.mount: Deactivated successfully.
Jan 23 18:59:05.474255 containerd[1628]: time="2026-01-23T18:59:05.474204132Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\""
Jan 23 18:59:05.475404 containerd[1628]: time="2026-01-23T18:59:05.475361257Z" level=info msg="StartContainer for \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\""
Jan 23 18:59:05.476368 containerd[1628]: time="2026-01-23T18:59:05.476334400Z" level=info msg="connecting to shim 40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb" address="unix:///run/containerd/s/86a0da514203ae9b0cb6452802e1243334dfd7b822264f605f5c959f6020fcb7" protocol=ttrpc version=3
Jan 23 18:59:05.506294 systemd[1]: Started cri-containerd-40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb.scope - libcontainer container 40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb.
Jan 23 18:59:05.543710 containerd[1628]: time="2026-01-23T18:59:05.543673772Z" level=info msg="StartContainer for \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\" returns successfully"
Jan 23 18:59:05.551829 systemd[1]: cri-containerd-40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb.scope: Deactivated successfully.
Jan 23 18:59:05.554776 containerd[1628]: time="2026-01-23T18:59:05.554743531Z" level=info msg="received container exit event container_id:\"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\" id:\"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\" pid:2205 exited_at:{seconds:1769194745 nanos:554375119}"
Jan 23 18:59:05.572128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb-rootfs.mount: Deactivated successfully.
Jan 23 18:59:06.250312 kubelet[2023]: E0123 18:59:06.250266 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:07.968410 kubelet[2023]: E0123 18:59:07.251299 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:07.969177 update_engine[1613]: I20260123 18:59:07.561219 1613 update_attempter.cc:509] Updating boot flags...
Jan 23 18:59:08.251891 kubelet[2023]: E0123 18:59:08.251806 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:09.252657 kubelet[2023]: E0123 18:59:09.252610 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:09.427724 containerd[1628]: time="2026-01-23T18:59:09.427225172Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 18:59:09.448131 containerd[1628]: time="2026-01-23T18:59:09.448083558Z" level=info msg="Container 62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:09.448386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223746942.mount: Deactivated successfully.
Jan 23 18:59:09.458864 containerd[1628]: time="2026-01-23T18:59:09.458823281Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\""
Jan 23 18:59:09.460324 containerd[1628]: time="2026-01-23T18:59:09.459479571Z" level=info msg="StartContainer for \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\""
Jan 23 18:59:09.460637 containerd[1628]: time="2026-01-23T18:59:09.460620612Z" level=info msg="connecting to shim 62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06" address="unix:///run/containerd/s/86a0da514203ae9b0cb6452802e1243334dfd7b822264f605f5c959f6020fcb7" protocol=ttrpc version=3
Jan 23 18:59:09.484333 systemd[1]: Started cri-containerd-62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06.scope - libcontainer container 62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06.
Jan 23 18:59:09.523598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2920524537.mount: Deactivated successfully.
Jan 23 18:59:09.530386 containerd[1628]: time="2026-01-23T18:59:09.530350752Z" level=info msg="StartContainer for \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\" returns successfully"
Jan 23 18:59:09.540238 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 18:59:09.540434 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:59:09.541206 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:59:09.545465 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:59:09.545659 systemd[1]: cri-containerd-62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06.scope: Deactivated successfully.
Jan 23 18:59:09.549825 containerd[1628]: time="2026-01-23T18:59:09.549726814Z" level=info msg="received container exit event container_id:\"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\" id:\"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\" pid:2274 exited_at:{seconds:1769194749 nanos:549544099}"
Jan 23 18:59:09.563926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:59:10.002713 containerd[1628]: time="2026-01-23T18:59:10.002676230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:10.004561 containerd[1628]: time="2026-01-23T18:59:10.004536772Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930122"
Jan 23 18:59:10.006111 containerd[1628]: time="2026-01-23T18:59:10.006068049Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:10.008856 containerd[1628]: time="2026-01-23T18:59:10.008820417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:10.009857 containerd[1628]: time="2026-01-23T18:59:10.009766917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 4.575335194s"
Jan 23 18:59:10.009857 containerd[1628]: time="2026-01-23T18:59:10.009789628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 23 18:59:10.013509 containerd[1628]: time="2026-01-23T18:59:10.013492903Z" level=info msg="CreateContainer within sandbox \"a5b719fcd385c1a5b1b78de02d6353f964621805c15055817b5ed1a76e6a6458\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 18:59:10.024296 containerd[1628]: time="2026-01-23T18:59:10.024275850Z" level=info msg="Container e14340ffeafc82c96950d74c72fa6eca45394a3dec6cbc244b1db53b80377ad0: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:10.033757 containerd[1628]: time="2026-01-23T18:59:10.033725260Z" level=info msg="CreateContainer within sandbox \"a5b719fcd385c1a5b1b78de02d6353f964621805c15055817b5ed1a76e6a6458\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e14340ffeafc82c96950d74c72fa6eca45394a3dec6cbc244b1db53b80377ad0\""
Jan 23 18:59:10.035556 containerd[1628]: time="2026-01-23T18:59:10.034273696Z" level=info msg="StartContainer for \"e14340ffeafc82c96950d74c72fa6eca45394a3dec6cbc244b1db53b80377ad0\""
Jan 23 18:59:10.035556 containerd[1628]: time="2026-01-23T18:59:10.035319113Z" level=info msg="connecting to shim e14340ffeafc82c96950d74c72fa6eca45394a3dec6cbc244b1db53b80377ad0" address="unix:///run/containerd/s/0aa027434fedb107468efbe257ea65ecf9b1da08578d803641d1c8ec664afb62" protocol=ttrpc version=3
Jan 23 18:59:10.055340 systemd[1]: Started cri-containerd-e14340ffeafc82c96950d74c72fa6eca45394a3dec6cbc244b1db53b80377ad0.scope - libcontainer container e14340ffeafc82c96950d74c72fa6eca45394a3dec6cbc244b1db53b80377ad0.
Jan 23 18:59:10.122714 containerd[1628]: time="2026-01-23T18:59:10.122683259Z" level=info msg="StartContainer for \"e14340ffeafc82c96950d74c72fa6eca45394a3dec6cbc244b1db53b80377ad0\" returns successfully"
Jan 23 18:59:10.199811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06-rootfs.mount: Deactivated successfully.
Jan 23 18:59:10.253064 kubelet[2023]: E0123 18:59:10.252968 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:10.436740 kubelet[2023]: I0123 18:59:10.436587 2023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p9msv" podStartSLOduration=3.873601365 podStartE2EDuration="15.436565375s" podCreationTimestamp="2026-01-23 18:58:55 +0000 UTC" firstStartedPulling="2026-01-23 18:58:58.447357145 +0000 UTC m=+3.909067464" lastFinishedPulling="2026-01-23 18:59:10.010321163 +0000 UTC m=+15.472031474" observedRunningTime="2026-01-23 18:59:10.435831013 +0000 UTC m=+15.897541379" watchObservedRunningTime="2026-01-23 18:59:10.436565375 +0000 UTC m=+15.898275712"
Jan 23 18:59:10.442517 containerd[1628]: time="2026-01-23T18:59:10.442159198Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 18:59:10.464123 containerd[1628]: time="2026-01-23T18:59:10.461663176Z" level=info msg="Container 912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:10.476714 containerd[1628]: time="2026-01-23T18:59:10.476686616Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\""
Jan 23 18:59:10.477639 containerd[1628]: time="2026-01-23T18:59:10.477618858Z" level=info msg="StartContainer for \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\""
Jan 23 18:59:10.479797 containerd[1628]: time="2026-01-23T18:59:10.479773052Z" level=info msg="connecting to shim 912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef" address="unix:///run/containerd/s/86a0da514203ae9b0cb6452802e1243334dfd7b822264f605f5c959f6020fcb7" protocol=ttrpc version=3
Jan 23 18:59:10.516017 systemd[1]: Started cri-containerd-912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef.scope - libcontainer container 912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef.
Jan 23 18:59:10.586305 containerd[1628]: time="2026-01-23T18:59:10.584966963Z" level=info msg="StartContainer for \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\" returns successfully"
Jan 23 18:59:10.587279 systemd[1]: cri-containerd-912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef.scope: Deactivated successfully.
Jan 23 18:59:10.589518 containerd[1628]: time="2026-01-23T18:59:10.589466926Z" level=info msg="received container exit event container_id:\"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\" id:\"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\" pid:2443 exited_at:{seconds:1769194750 nanos:589252834}"
Jan 23 18:59:10.618227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef-rootfs.mount: Deactivated successfully.
Jan 23 18:59:11.254246 kubelet[2023]: E0123 18:59:11.254152 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:11.451058 containerd[1628]: time="2026-01-23T18:59:11.450960542Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 18:59:11.473153 containerd[1628]: time="2026-01-23T18:59:11.471250284Z" level=info msg="Container 7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:11.481466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753931668.mount: Deactivated successfully.
Jan 23 18:59:11.491228 containerd[1628]: time="2026-01-23T18:59:11.491168190Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\""
Jan 23 18:59:11.492563 containerd[1628]: time="2026-01-23T18:59:11.492502733Z" level=info msg="StartContainer for \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\""
Jan 23 18:59:11.494018 containerd[1628]: time="2026-01-23T18:59:11.493956603Z" level=info msg="connecting to shim 7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b" address="unix:///run/containerd/s/86a0da514203ae9b0cb6452802e1243334dfd7b822264f605f5c959f6020fcb7" protocol=ttrpc version=3
Jan 23 18:59:11.526263 systemd[1]: Started cri-containerd-7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b.scope - libcontainer container 7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b.
Jan 23 18:59:11.552525 systemd[1]: cri-containerd-7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b.scope: Deactivated successfully.
Jan 23 18:59:11.555631 containerd[1628]: time="2026-01-23T18:59:11.555598103Z" level=info msg="received container exit event container_id:\"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\" id:\"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\" pid:2532 exited_at:{seconds:1769194751 nanos:553543219}"
Jan 23 18:59:11.557219 containerd[1628]: time="2026-01-23T18:59:11.557147456Z" level=info msg="StartContainer for \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\" returns successfully"
Jan 23 18:59:11.574887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b-rootfs.mount: Deactivated successfully.
Jan 23 18:59:12.254482 kubelet[2023]: E0123 18:59:12.254405 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:12.450855 containerd[1628]: time="2026-01-23T18:59:12.450776638Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 18:59:12.474152 containerd[1628]: time="2026-01-23T18:59:12.473018526Z" level=info msg="Container 0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:12.473839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859640399.mount: Deactivated successfully.
Jan 23 18:59:12.488287 containerd[1628]: time="2026-01-23T18:59:12.488079224Z" level=info msg="CreateContainer within sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\""
Jan 23 18:59:12.490128 containerd[1628]: time="2026-01-23T18:59:12.489075066Z" level=info msg="StartContainer for \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\""
Jan 23 18:59:12.490912 containerd[1628]: time="2026-01-23T18:59:12.490863614Z" level=info msg="connecting to shim 0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be" address="unix:///run/containerd/s/86a0da514203ae9b0cb6452802e1243334dfd7b822264f605f5c959f6020fcb7" protocol=ttrpc version=3
Jan 23 18:59:12.524275 systemd[1]: Started cri-containerd-0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be.scope - libcontainer container 0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be.
Jan 23 18:59:12.583221 containerd[1628]: time="2026-01-23T18:59:12.583177635Z" level=info msg="StartContainer for \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\" returns successfully"
Jan 23 18:59:12.735814 kubelet[2023]: I0123 18:59:12.735794 2023 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 23 18:59:12.913147 kernel: Initializing XFRM netlink socket
Jan 23 18:59:13.255133 kubelet[2023]: E0123 18:59:13.254993 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:13.486136 kubelet[2023]: I0123 18:59:13.485745 2023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xc7n4" podStartSLOduration=11.49809743 podStartE2EDuration="18.485714353s" podCreationTimestamp="2026-01-23 18:58:55 +0000 UTC" firstStartedPulling="2026-01-23 18:58:58.44620202 +0000 UTC m=+3.907912338" lastFinishedPulling="2026-01-23 18:59:05.433818905 +0000 UTC m=+10.895529261" observedRunningTime="2026-01-23 18:59:13.484615013 +0000 UTC m=+18.946325469" watchObservedRunningTime="2026-01-23 18:59:13.485714353 +0000 UTC m=+18.947424806"
Jan 23 18:59:14.255672 kubelet[2023]: E0123 18:59:14.255602 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:14.572827 systemd-networkd[1539]: cilium_host: Link UP
Jan 23 18:59:14.572986 systemd-networkd[1539]: cilium_net: Link UP
Jan 23 18:59:14.573148 systemd-networkd[1539]: cilium_net: Gained carrier
Jan 23 18:59:14.573282 systemd-networkd[1539]: cilium_host: Gained carrier
Jan 23 18:59:14.688407 systemd-networkd[1539]: cilium_vxlan: Link UP
Jan 23 18:59:14.688413 systemd-networkd[1539]: cilium_vxlan: Gained carrier
Jan 23 18:59:14.689419 systemd-networkd[1539]: cilium_host: Gained IPv6LL
Jan 23 18:59:14.906198 kernel: NET: Registered PF_ALG protocol family
Jan 23 18:59:14.937301 systemd-networkd[1539]: cilium_net: Gained IPv6LL
Jan 23 18:59:15.235674 kubelet[2023]: E0123 18:59:15.235638 2023 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:15.256113 kubelet[2023]: E0123 18:59:15.256064 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:15.541979 systemd-networkd[1539]: lxc_health: Link UP
Jan 23 18:59:15.556768 systemd-networkd[1539]: lxc_health: Gained carrier
Jan 23 18:59:16.041368 systemd-networkd[1539]: cilium_vxlan: Gained IPv6LL
Jan 23 18:59:16.257429 kubelet[2023]: E0123 18:59:16.257333 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:16.669544 systemd[1]: Created slice kubepods-besteffort-pod99d128f1_5e63_49dd_bc8a_d19fe521ae04.slice - libcontainer container kubepods-besteffort-pod99d128f1_5e63_49dd_bc8a_d19fe521ae04.slice.
Jan 23 18:59:16.733797 kubelet[2023]: I0123 18:59:16.733708 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxhf\" (UniqueName: \"kubernetes.io/projected/99d128f1-5e63-49dd-bc8a-d19fe521ae04-kube-api-access-pqxhf\") pod \"nginx-deployment-7fcdb87857-mr2jl\" (UID: \"99d128f1-5e63-49dd-bc8a-d19fe521ae04\") " pod="default/nginx-deployment-7fcdb87857-mr2jl"
Jan 23 18:59:16.745306 systemd-networkd[1539]: lxc_health: Gained IPv6LL
Jan 23 18:59:16.981418 containerd[1628]: time="2026-01-23T18:59:16.981332788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mr2jl,Uid:99d128f1-5e63-49dd-bc8a-d19fe521ae04,Namespace:default,Attempt:0,}"
Jan 23 18:59:17.035385 systemd-networkd[1539]: lxc6a3c933202ff: Link UP
Jan 23 18:59:17.044232 kernel: eth0: renamed from tmp19c5e
Jan 23 18:59:17.047605 systemd-networkd[1539]: lxc6a3c933202ff: Gained carrier
Jan 23 18:59:17.258526 kubelet[2023]: E0123 18:59:17.258352 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:18.259254 kubelet[2023]: E0123 18:59:18.259200 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:18.921344 systemd-networkd[1539]: lxc6a3c933202ff: Gained IPv6LL
Jan 23 18:59:19.260143 kubelet[2023]: E0123 18:59:19.259844 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:20.063881 containerd[1628]: time="2026-01-23T18:59:20.063767435Z" level=info msg="connecting to shim 19c5ec2519960012b6a6b4ae60528ce4184213844b237fff1ac8561e5b0dfe5e" address="unix:///run/containerd/s/15642657a65d78a512253bcce91e9e154ec2271fc52fea69721b61c6ae12581e" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:59:20.088235 systemd[1]: Started cri-containerd-19c5ec2519960012b6a6b4ae60528ce4184213844b237fff1ac8561e5b0dfe5e.scope - libcontainer container 19c5ec2519960012b6a6b4ae60528ce4184213844b237fff1ac8561e5b0dfe5e.
Jan 23 18:59:20.128955 containerd[1628]: time="2026-01-23T18:59:20.128920276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mr2jl,Uid:99d128f1-5e63-49dd-bc8a-d19fe521ae04,Namespace:default,Attempt:0,} returns sandbox id \"19c5ec2519960012b6a6b4ae60528ce4184213844b237fff1ac8561e5b0dfe5e\""
Jan 23 18:59:20.130016 containerd[1628]: time="2026-01-23T18:59:20.129988973Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 23 18:59:20.260724 kubelet[2023]: E0123 18:59:20.260670 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:21.261584 kubelet[2023]: E0123 18:59:21.261519 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:22.262395 kubelet[2023]: E0123 18:59:22.262341 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:22.814890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258961287.mount: Deactivated successfully.
Jan 23 18:59:23.263805 kubelet[2023]: E0123 18:59:23.263779 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:23.545444 containerd[1628]: time="2026-01-23T18:59:23.545259600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:23.547022 containerd[1628]: time="2026-01-23T18:59:23.546997813Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480"
Jan 23 18:59:23.548384 containerd[1628]: time="2026-01-23T18:59:23.548160921Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:23.556160 containerd[1628]: time="2026-01-23T18:59:23.555948986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:23.557086 containerd[1628]: time="2026-01-23T18:59:23.556990407Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 3.426965761s"
Jan 23 18:59:23.557086 containerd[1628]: time="2026-01-23T18:59:23.557017152Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\""
Jan 23 18:59:23.572116 containerd[1628]: time="2026-01-23T18:59:23.571689786Z" level=info msg="CreateContainer within sandbox \"19c5ec2519960012b6a6b4ae60528ce4184213844b237fff1ac8561e5b0dfe5e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 23 18:59:23.588112 containerd[1628]: time="2026-01-23T18:59:23.587601973Z" level=info msg="Container 15f567f20a096350ef3fcde647dc7b14b8ded5152d56a18f8aa7d3ec8c49d365: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:23.588300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2775566691.mount: Deactivated successfully.
Jan 23 18:59:23.596325 containerd[1628]: time="2026-01-23T18:59:23.596300339Z" level=info msg="CreateContainer within sandbox \"19c5ec2519960012b6a6b4ae60528ce4184213844b237fff1ac8561e5b0dfe5e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"15f567f20a096350ef3fcde647dc7b14b8ded5152d56a18f8aa7d3ec8c49d365\""
Jan 23 18:59:23.596997 containerd[1628]: time="2026-01-23T18:59:23.596980532Z" level=info msg="StartContainer for \"15f567f20a096350ef3fcde647dc7b14b8ded5152d56a18f8aa7d3ec8c49d365\""
Jan 23 18:59:23.597877 containerd[1628]: time="2026-01-23T18:59:23.597856887Z" level=info msg="connecting to shim 15f567f20a096350ef3fcde647dc7b14b8ded5152d56a18f8aa7d3ec8c49d365" address="unix:///run/containerd/s/15642657a65d78a512253bcce91e9e154ec2271fc52fea69721b61c6ae12581e" protocol=ttrpc version=3
Jan 23 18:59:23.598640 kubelet[2023]: I0123 18:59:23.598511 2023 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 18:59:23.625252 systemd[1]: Started cri-containerd-15f567f20a096350ef3fcde647dc7b14b8ded5152d56a18f8aa7d3ec8c49d365.scope - libcontainer container 15f567f20a096350ef3fcde647dc7b14b8ded5152d56a18f8aa7d3ec8c49d365.
Jan 23 18:59:23.653328 containerd[1628]: time="2026-01-23T18:59:23.653301784Z" level=info msg="StartContainer for \"15f567f20a096350ef3fcde647dc7b14b8ded5152d56a18f8aa7d3ec8c49d365\" returns successfully"
Jan 23 18:59:24.265034 kubelet[2023]: E0123 18:59:24.264948 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:25.265499 kubelet[2023]: E0123 18:59:25.265410 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:26.266048 kubelet[2023]: E0123 18:59:26.265991 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:27.267048 kubelet[2023]: E0123 18:59:27.266966 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:28.267959 kubelet[2023]: E0123 18:59:28.267830 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:29.268430 kubelet[2023]: E0123 18:59:29.268319 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:30.268898 kubelet[2023]: E0123 18:59:30.268796 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:31.269598 kubelet[2023]: E0123 18:59:31.269536 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:32.269839 kubelet[2023]: E0123 18:59:32.269716 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:33.044143 kubelet[2023]: I0123 18:59:33.043388 2023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-mr2jl" podStartSLOduration=13.615244656 podStartE2EDuration="17.043346648s" podCreationTimestamp="2026-01-23 18:59:16 +0000 UTC" firstStartedPulling="2026-01-23 18:59:20.12956925 +0000 UTC m=+25.591279567" lastFinishedPulling="2026-01-23 18:59:23.557671247 +0000 UTC m=+29.019381559" observedRunningTime="2026-01-23 18:59:24.512054255 +0000 UTC m=+29.973764705" watchObservedRunningTime="2026-01-23 18:59:33.043346648 +0000 UTC m=+38.505057027"
Jan 23 18:59:33.056997 systemd[1]: Created slice kubepods-besteffort-podd2ea09a0_26df_4a41_88a5_3d27be87d0bd.slice - libcontainer container kubepods-besteffort-podd2ea09a0_26df_4a41_88a5_3d27be87d0bd.slice.
Jan 23 18:59:33.142791 kubelet[2023]: I0123 18:59:33.142579 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d2ea09a0-26df-4a41-88a5-3d27be87d0bd-data\") pod \"nfs-server-provisioner-0\" (UID: \"d2ea09a0-26df-4a41-88a5-3d27be87d0bd\") " pod="default/nfs-server-provisioner-0"
Jan 23 18:59:33.142791 kubelet[2023]: I0123 18:59:33.142674 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnj5c\" (UniqueName: \"kubernetes.io/projected/d2ea09a0-26df-4a41-88a5-3d27be87d0bd-kube-api-access-xnj5c\") pod \"nfs-server-provisioner-0\" (UID: \"d2ea09a0-26df-4a41-88a5-3d27be87d0bd\") " pod="default/nfs-server-provisioner-0"
Jan 23 18:59:33.270916 kubelet[2023]: E0123 18:59:33.270820 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:33.364953 containerd[1628]: time="2026-01-23T18:59:33.364164265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d2ea09a0-26df-4a41-88a5-3d27be87d0bd,Namespace:default,Attempt:0,}"
Jan 23 18:59:33.419420 systemd-networkd[1539]: lxc371a516bda07: Link UP
Jan 23 18:59:33.426188 kernel: eth0: renamed from tmpb1404
Jan 23 18:59:33.430376 systemd-networkd[1539]: lxc371a516bda07: Gained carrier
Jan 23 18:59:33.702021 containerd[1628]: time="2026-01-23T18:59:33.701816009Z" level=info msg="connecting to shim b14040c70d69db3ae632bca03662a036268e92eb935b712fb4bb404f9f2976e3" address="unix:///run/containerd/s/8d0eb3277c5263d56520db50c7b561a510c4d2dad78b2f2522ef2e0042b9c64f" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:59:33.731451 systemd[1]: Started cri-containerd-b14040c70d69db3ae632bca03662a036268e92eb935b712fb4bb404f9f2976e3.scope - libcontainer container b14040c70d69db3ae632bca03662a036268e92eb935b712fb4bb404f9f2976e3.
Jan 23 18:59:33.779962 containerd[1628]: time="2026-01-23T18:59:33.779918184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d2ea09a0-26df-4a41-88a5-3d27be87d0bd,Namespace:default,Attempt:0,} returns sandbox id \"b14040c70d69db3ae632bca03662a036268e92eb935b712fb4bb404f9f2976e3\""
Jan 23 18:59:33.784860 containerd[1628]: time="2026-01-23T18:59:33.784619472Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 23 18:59:34.271802 kubelet[2023]: E0123 18:59:34.271757 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:35.051277 systemd-networkd[1539]: lxc371a516bda07: Gained IPv6LL
Jan 23 18:59:35.235933 kubelet[2023]: E0123 18:59:35.235894 2023 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:35.272741 kubelet[2023]: E0123 18:59:35.272714 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:35.843583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount500955837.mount: Deactivated successfully.
Jan 23 18:59:36.273752 kubelet[2023]: E0123 18:59:36.273723 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:37.275005 kubelet[2023]: E0123 18:59:37.274958 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:37.469417 containerd[1628]: time="2026-01-23T18:59:37.469339481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:37.471356 containerd[1628]: time="2026-01-23T18:59:37.471286537Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039474"
Jan 23 18:59:37.472874 containerd[1628]: time="2026-01-23T18:59:37.472763911Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:37.478995 containerd[1628]: time="2026-01-23T18:59:37.478949921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:37.480608 containerd[1628]: time="2026-01-23T18:59:37.480570776Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.695913783s"
Jan 23 18:59:37.480757 containerd[1628]: time="2026-01-23T18:59:37.480610980Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 23 18:59:37.489655 containerd[1628]: time="2026-01-23T18:59:37.489572757Z" level=info msg="CreateContainer within sandbox \"b14040c70d69db3ae632bca03662a036268e92eb935b712fb4bb404f9f2976e3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 23 18:59:37.508746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758112249.mount: Deactivated successfully.
Jan 23 18:59:37.510530 containerd[1628]: time="2026-01-23T18:59:37.509659711Z" level=info msg="Container f5a4513e4d261c4ba4f0beccbf8b80bec3743b03d2fe1fe0cda96868b12a56ae: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:37.529115 containerd[1628]: time="2026-01-23T18:59:37.528672192Z" level=info msg="CreateContainer within sandbox \"b14040c70d69db3ae632bca03662a036268e92eb935b712fb4bb404f9f2976e3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f5a4513e4d261c4ba4f0beccbf8b80bec3743b03d2fe1fe0cda96868b12a56ae\""
Jan 23 18:59:37.529387 containerd[1628]: time="2026-01-23T18:59:37.529362634Z" level=info msg="StartContainer for \"f5a4513e4d261c4ba4f0beccbf8b80bec3743b03d2fe1fe0cda96868b12a56ae\""
Jan 23 18:59:37.530418 containerd[1628]: time="2026-01-23T18:59:37.530388325Z" level=info msg="connecting to shim f5a4513e4d261c4ba4f0beccbf8b80bec3743b03d2fe1fe0cda96868b12a56ae" address="unix:///run/containerd/s/8d0eb3277c5263d56520db50c7b561a510c4d2dad78b2f2522ef2e0042b9c64f" protocol=ttrpc version=3
Jan 23 18:59:37.553261 systemd[1]: Started cri-containerd-f5a4513e4d261c4ba4f0beccbf8b80bec3743b03d2fe1fe0cda96868b12a56ae.scope - libcontainer container f5a4513e4d261c4ba4f0beccbf8b80bec3743b03d2fe1fe0cda96868b12a56ae.
Jan 23 18:59:37.579584 containerd[1628]: time="2026-01-23T18:59:37.579483524Z" level=info msg="StartContainer for \"f5a4513e4d261c4ba4f0beccbf8b80bec3743b03d2fe1fe0cda96868b12a56ae\" returns successfully"
Jan 23 18:59:38.275949 kubelet[2023]: E0123 18:59:38.275815 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:38.562907 kubelet[2023]: I0123 18:59:38.562597 2023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.861641681 podStartE2EDuration="5.562567243s" podCreationTimestamp="2026-01-23 18:59:33 +0000 UTC" firstStartedPulling="2026-01-23 18:59:33.784302852 +0000 UTC m=+39.246013182" lastFinishedPulling="2026-01-23 18:59:37.485228425 +0000 UTC m=+42.946938744" observedRunningTime="2026-01-23 18:59:38.561673849 +0000 UTC m=+44.023384304" watchObservedRunningTime="2026-01-23 18:59:38.562567243 +0000 UTC m=+44.024277698"
Jan 23 18:59:39.276132 kubelet[2023]: E0123 18:59:39.276011 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:40.276321 kubelet[2023]: E0123 18:59:40.276227 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:41.277028 kubelet[2023]: E0123 18:59:41.276937 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:42.277254 kubelet[2023]: E0123 18:59:42.277182 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:42.835550 systemd[1]: Created slice kubepods-besteffort-pod301336bc_8506_4bb1_aea1_a3a206febc1b.slice - libcontainer container kubepods-besteffort-pod301336bc_8506_4bb1_aea1_a3a206febc1b.slice.
Jan 23 18:59:42.907381 kubelet[2023]: I0123 18:59:42.907314 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-13c3f566-7c2f-4057-b3b7-9b750d1b70eb\" (UniqueName: \"kubernetes.io/nfs/301336bc-8506-4bb1-aea1-a3a206febc1b-pvc-13c3f566-7c2f-4057-b3b7-9b750d1b70eb\") pod \"test-pod-1\" (UID: \"301336bc-8506-4bb1-aea1-a3a206febc1b\") " pod="default/test-pod-1"
Jan 23 18:59:42.907630 kubelet[2023]: I0123 18:59:42.907397 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgz8x\" (UniqueName: \"kubernetes.io/projected/301336bc-8506-4bb1-aea1-a3a206febc1b-kube-api-access-qgz8x\") pod \"test-pod-1\" (UID: \"301336bc-8506-4bb1-aea1-a3a206febc1b\") " pod="default/test-pod-1"
Jan 23 18:59:43.086179 kernel: netfs: FS-Cache loaded
Jan 23 18:59:43.166368 kernel: RPC: Registered named UNIX socket transport module.
Jan 23 18:59:43.166622 kernel: RPC: Registered udp transport module.
Jan 23 18:59:43.167424 kernel: RPC: Registered tcp transport module.
Jan 23 18:59:43.167535 kernel: RPC: Registered tcp-with-tls transport module.
Jan 23 18:59:43.168144 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 23 18:59:43.277922 kubelet[2023]: E0123 18:59:43.277844 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:43.342407 kernel: NFS: Registering the id_resolver key type
Jan 23 18:59:43.342605 kernel: Key type id_resolver registered
Jan 23 18:59:43.343367 kernel: Key type id_legacy registered
Jan 23 18:59:43.373899 nfsidmap[3359]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Jan 23 18:59:43.375110 nfsidmap[3359]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 23 18:59:43.377184 nfsidmap[3360]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf
Jan 23 18:59:43.377323 nfsidmap[3360]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 23 18:59:43.399198 nfsrahead[3362]: setting /var/lib/kubelet/pods/301336bc-8506-4bb1-aea1-a3a206febc1b/volumes/kubernetes.io~nfs/pvc-13c3f566-7c2f-4057-b3b7-9b750d1b70eb readahead to 128
Jan 23 18:59:43.443446 containerd[1628]: time="2026-01-23T18:59:43.442730705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:301336bc-8506-4bb1-aea1-a3a206febc1b,Namespace:default,Attempt:0,}"
Jan 23 18:59:43.469800 systemd-networkd[1539]: lxcffb07449b040: Link UP
Jan 23 18:59:43.478121 kernel: eth0: renamed from tmp0dea0
Jan 23 18:59:43.478689 systemd-networkd[1539]: lxcffb07449b040: Gained carrier
Jan 23 18:59:43.621921 containerd[1628]: time="2026-01-23T18:59:43.621887278Z" level=info msg="connecting to shim 0dea00aa4013e4279e6c0b272333a90d6f2714e7ab1e41c82a69039a707f6e65" address="unix:///run/containerd/s/7d32ad5486f4c3ec00a223283d343183becea5a0e9e3c1fd9e997ddad7e651dd" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:59:43.647236 systemd[1]: Started cri-containerd-0dea00aa4013e4279e6c0b272333a90d6f2714e7ab1e41c82a69039a707f6e65.scope - libcontainer container 0dea00aa4013e4279e6c0b272333a90d6f2714e7ab1e41c82a69039a707f6e65.
Jan 23 18:59:43.692633 containerd[1628]: time="2026-01-23T18:59:43.692606473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:301336bc-8506-4bb1-aea1-a3a206febc1b,Namespace:default,Attempt:0,} returns sandbox id \"0dea00aa4013e4279e6c0b272333a90d6f2714e7ab1e41c82a69039a707f6e65\""
Jan 23 18:59:43.694187 containerd[1628]: time="2026-01-23T18:59:43.693861801Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 23 18:59:44.086147 containerd[1628]: time="2026-01-23T18:59:44.084468314Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:44.086147 containerd[1628]: time="2026-01-23T18:59:44.085825290Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 23 18:59:44.092642 containerd[1628]: time="2026-01-23T18:59:44.092583980Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 398.251155ms"
Jan 23 18:59:44.092857 containerd[1628]: time="2026-01-23T18:59:44.092824239Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\""
Jan 23 18:59:44.101067 containerd[1628]: time="2026-01-23T18:59:44.101011708Z" level=info msg="CreateContainer within sandbox \"0dea00aa4013e4279e6c0b272333a90d6f2714e7ab1e41c82a69039a707f6e65\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 23 18:59:44.121287 containerd[1628]: time="2026-01-23T18:59:44.121220720Z" level=info msg="Container 21a89918838abeca9e6a15ce87038dde6033a9032bb4c82f697b18f25bc91750: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:44.137567 containerd[1628]: time="2026-01-23T18:59:44.137502831Z" level=info msg="CreateContainer within sandbox \"0dea00aa4013e4279e6c0b272333a90d6f2714e7ab1e41c82a69039a707f6e65\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"21a89918838abeca9e6a15ce87038dde6033a9032bb4c82f697b18f25bc91750\""
Jan 23 18:59:44.138521 containerd[1628]: time="2026-01-23T18:59:44.138499615Z" level=info msg="StartContainer for \"21a89918838abeca9e6a15ce87038dde6033a9032bb4c82f697b18f25bc91750\""
Jan 23 18:59:44.139650 containerd[1628]: time="2026-01-23T18:59:44.139621584Z" level=info msg="connecting to shim 21a89918838abeca9e6a15ce87038dde6033a9032bb4c82f697b18f25bc91750" address="unix:///run/containerd/s/7d32ad5486f4c3ec00a223283d343183becea5a0e9e3c1fd9e997ddad7e651dd" protocol=ttrpc version=3
Jan 23 18:59:44.175280 systemd[1]: Started cri-containerd-21a89918838abeca9e6a15ce87038dde6033a9032bb4c82f697b18f25bc91750.scope - libcontainer container 21a89918838abeca9e6a15ce87038dde6033a9032bb4c82f697b18f25bc91750.
Jan 23 18:59:44.223159 containerd[1628]: time="2026-01-23T18:59:44.222294987Z" level=info msg="StartContainer for \"21a89918838abeca9e6a15ce87038dde6033a9032bb4c82f697b18f25bc91750\" returns successfully" Jan 23 18:59:44.278314 kubelet[2023]: E0123 18:59:44.278232 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:45.279458 kubelet[2023]: E0123 18:59:45.279357 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:45.417937 systemd-networkd[1539]: lxcffb07449b040: Gained IPv6LL Jan 23 18:59:46.280005 kubelet[2023]: E0123 18:59:46.279957 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:47.280947 kubelet[2023]: E0123 18:59:47.280856 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:47.681994 systemd[1]: Started sshd@9-10.0.5.167:22-185.124.195.61:10409.service - OpenSSH per-connection server daemon (185.124.195.61:10409). Jan 23 18:59:47.737475 sshd[3482]: Connection closed by 185.124.195.61 port 10409 Jan 23 18:59:47.738983 systemd[1]: sshd@9-10.0.5.167:22-185.124.195.61:10409.service: Deactivated successfully. 
Jan 23 18:59:48.282046 kubelet[2023]: E0123 18:59:48.281939 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:49.282927 kubelet[2023]: E0123 18:59:49.282856 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:50.283447 kubelet[2023]: E0123 18:59:50.283343 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:50.425147 kubelet[2023]: I0123 18:59:50.424922 2023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.024136259 podStartE2EDuration="16.424890566s" podCreationTimestamp="2026-01-23 18:59:34 +0000 UTC" firstStartedPulling="2026-01-23 18:59:43.693679461 +0000 UTC m=+49.155389773" lastFinishedPulling="2026-01-23 18:59:44.094433689 +0000 UTC m=+49.556144080" observedRunningTime="2026-01-23 18:59:44.578633162 +0000 UTC m=+50.040343565" watchObservedRunningTime="2026-01-23 18:59:50.424890566 +0000 UTC m=+55.886600976" Jan 23 18:59:50.476250 containerd[1628]: time="2026-01-23T18:59:50.476173483Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:59:50.490486 containerd[1628]: time="2026-01-23T18:59:50.490425198Z" level=info msg="StopContainer for \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\" with timeout 2 (s)" Jan 23 18:59:50.491083 containerd[1628]: time="2026-01-23T18:59:50.491038851Z" level=info msg="Stop container \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\" with signal terminated" Jan 23 18:59:50.510257 systemd-networkd[1539]: lxc_health: Link DOWN Jan 23 18:59:50.510273 systemd-networkd[1539]: 
lxc_health: Lost carrier Jan 23 18:59:50.533677 systemd[1]: cri-containerd-0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be.scope: Deactivated successfully. Jan 23 18:59:50.534026 systemd[1]: cri-containerd-0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be.scope: Consumed 6.386s CPU time, 123.7M memory peak, 112K read from disk, 13.3M written to disk. Jan 23 18:59:50.536379 containerd[1628]: time="2026-01-23T18:59:50.535754476Z" level=info msg="received container exit event container_id:\"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\" id:\"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\" pid:2571 exited_at:{seconds:1769194790 nanos:535280716}" Jan 23 18:59:50.569598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be-rootfs.mount: Deactivated successfully. Jan 23 18:59:51.033964 containerd[1628]: time="2026-01-23T18:59:51.033686945Z" level=info msg="StopContainer for \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\" returns successfully" Jan 23 18:59:51.036175 containerd[1628]: time="2026-01-23T18:59:51.035462663Z" level=info msg="StopPodSandbox for \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\"" Jan 23 18:59:51.036175 containerd[1628]: time="2026-01-23T18:59:51.035592181Z" level=info msg="Container to stop \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:59:51.036175 containerd[1628]: time="2026-01-23T18:59:51.035640339Z" level=info msg="Container to stop \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:59:51.036175 containerd[1628]: time="2026-01-23T18:59:51.035665286Z" level=info msg="Container to stop \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\" must 
be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:59:51.036175 containerd[1628]: time="2026-01-23T18:59:51.035698925Z" level=info msg="Container to stop \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:59:51.036175 containerd[1628]: time="2026-01-23T18:59:51.035721733Z" level=info msg="Container to stop \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:59:51.049736 systemd[1]: cri-containerd-77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83.scope: Deactivated successfully. Jan 23 18:59:51.053851 containerd[1628]: time="2026-01-23T18:59:51.053756431Z" level=info msg="received sandbox exit event container_id:\"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" id:\"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" exit_status:137 exited_at:{seconds:1769194791 nanos:52419419}" monitor_name=podsandbox Jan 23 18:59:51.090307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83-rootfs.mount: Deactivated successfully. 
Jan 23 18:59:51.094996 containerd[1628]: time="2026-01-23T18:59:51.094739273Z" level=info msg="shim disconnected" id=77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83 namespace=k8s.io Jan 23 18:59:51.094996 containerd[1628]: time="2026-01-23T18:59:51.094790828Z" level=warning msg="cleaning up after shim disconnected" id=77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83 namespace=k8s.io Jan 23 18:59:51.094996 containerd[1628]: time="2026-01-23T18:59:51.094804822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 18:59:51.119065 containerd[1628]: time="2026-01-23T18:59:51.116430387Z" level=info msg="received sandbox container exit event sandbox_id:\"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" exit_status:137 exited_at:{seconds:1769194791 nanos:52419419}" monitor_name=criService Jan 23 18:59:51.118845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83-shm.mount: Deactivated successfully. 
Jan 23 18:59:51.119644 containerd[1628]: time="2026-01-23T18:59:51.119599368Z" level=info msg="TearDown network for sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" successfully" Jan 23 18:59:51.119696 containerd[1628]: time="2026-01-23T18:59:51.119653151Z" level=info msg="StopPodSandbox for \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" returns successfully" Jan 23 18:59:51.172758 kubelet[2023]: I0123 18:59:51.172721 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-host-proc-sys-net\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.172937 kubelet[2023]: I0123 18:59:51.172890 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.173042 kubelet[2023]: I0123 18:59:51.172926 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-host-proc-sys-kernel\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173130 kubelet[2023]: I0123 18:59:51.173121 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-bpf-maps\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173199 kubelet[2023]: I0123 18:59:51.173012 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.173241 kubelet[2023]: I0123 18:59:51.173160 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.173287 kubelet[2023]: I0123 18:59:51.173191 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-xtables-lock\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173333 kubelet[2023]: I0123 18:59:51.173326 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-lib-modules\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173397 kubelet[2023]: I0123 18:59:51.173390 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cni-path\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173456 kubelet[2023]: I0123 18:59:51.173438 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-run\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173512 kubelet[2023]: I0123 18:59:51.173504 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-cgroup\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173579 kubelet[2023]: I0123 18:59:51.173571 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-config-path\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173703 kubelet[2023]: I0123 18:59:51.173637 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tftx5\" (UniqueName: \"kubernetes.io/projected/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-kube-api-access-tftx5\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173703 kubelet[2023]: I0123 18:59:51.173653 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-etc-cni-netd\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173703 kubelet[2023]: I0123 18:59:51.173668 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-hostproc\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173703 kubelet[2023]: I0123 18:59:51.173684 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-hubble-tls\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173880 kubelet[2023]: I0123 18:59:51.173819 2023 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-clustermesh-secrets\") pod \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\" (UID: \"8ac315e3-d97e-4113-bc9a-097f2adf7bc7\") " Jan 23 18:59:51.173880 kubelet[2023]: I0123 18:59:51.173856 2023 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-host-proc-sys-net\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.173880 kubelet[2023]: I0123 18:59:51.173865 2023 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-host-proc-sys-kernel\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.174033 kubelet[2023]: I0123 18:59:51.173962 2023 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-bpf-maps\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.174697 kubelet[2023]: I0123 18:59:51.173328 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.174841 kubelet[2023]: I0123 18:59:51.173358 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.174841 kubelet[2023]: I0123 18:59:51.173490 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cni-path" (OuterVolumeSpecName: "cni-path") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.174841 kubelet[2023]: I0123 18:59:51.173533 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.174841 kubelet[2023]: I0123 18:59:51.173574 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.176274 kubelet[2023]: I0123 18:59:51.176229 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-hostproc" (OuterVolumeSpecName: "hostproc") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.179021 kubelet[2023]: I0123 18:59:51.176891 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:59:51.178817 systemd[1]: var-lib-kubelet-pods-8ac315e3\x2dd97e\x2d4113\x2dbc9a\x2d097f2adf7bc7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 23 18:59:51.181918 kubelet[2023]: I0123 18:59:51.181899 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:59:51.184680 systemd[1]: var-lib-kubelet-pods-8ac315e3\x2dd97e\x2d4113\x2dbc9a\x2d097f2adf7bc7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 18:59:51.185308 kubelet[2023]: I0123 18:59:51.185270 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:59:51.187928 kubelet[2023]: I0123 18:59:51.187892 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:59:51.189363 kubelet[2023]: I0123 18:59:51.189333 2023 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-kube-api-access-tftx5" (OuterVolumeSpecName: "kube-api-access-tftx5") pod "8ac315e3-d97e-4113-bc9a-097f2adf7bc7" (UID: "8ac315e3-d97e-4113-bc9a-097f2adf7bc7"). InnerVolumeSpecName "kube-api-access-tftx5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:59:51.275361 kubelet[2023]: I0123 18:59:51.274976 2023 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cni-path\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.275361 kubelet[2023]: I0123 18:59:51.275070 2023 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-run\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.275361 kubelet[2023]: I0123 18:59:51.275126 2023 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-cgroup\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.275361 kubelet[2023]: I0123 18:59:51.275152 2023 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-cilium-config-path\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.275361 kubelet[2023]: I0123 18:59:51.275179 2023 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tftx5\" (UniqueName: \"kubernetes.io/projected/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-kube-api-access-tftx5\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.275361 kubelet[2023]: I0123 18:59:51.275203 2023 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-etc-cni-netd\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.275361 kubelet[2023]: I0123 18:59:51.275226 2023 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-hostproc\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.275361 kubelet[2023]: I0123 
18:59:51.275247 2023 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-hubble-tls\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.276276 kubelet[2023]: I0123 18:59:51.275271 2023 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-clustermesh-secrets\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.276276 kubelet[2023]: I0123 18:59:51.275292 2023 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-xtables-lock\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.276276 kubelet[2023]: I0123 18:59:51.275313 2023 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ac315e3-d97e-4113-bc9a-097f2adf7bc7-lib-modules\") on node \"10.0.5.167\" DevicePath \"\"" Jan 23 18:59:51.284354 kubelet[2023]: E0123 18:59:51.284162 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:51.391777 systemd[1]: Removed slice kubepods-burstable-pod8ac315e3_d97e_4113_bc9a_097f2adf7bc7.slice - libcontainer container kubepods-burstable-pod8ac315e3_d97e_4113_bc9a_097f2adf7bc7.slice. Jan 23 18:59:51.393255 systemd[1]: kubepods-burstable-pod8ac315e3_d97e_4113_bc9a_097f2adf7bc7.slice: Consumed 6.482s CPU time, 124.1M memory peak, 112K read from disk, 13.3M written to disk. Jan 23 18:59:51.570669 systemd[1]: var-lib-kubelet-pods-8ac315e3\x2dd97e\x2d4113\x2dbc9a\x2d097f2adf7bc7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtftx5.mount: Deactivated successfully. 
Jan 23 18:59:51.597047 kubelet[2023]: I0123 18:59:51.596969 2023 scope.go:117] "RemoveContainer" containerID="0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be" Jan 23 18:59:51.601338 containerd[1628]: time="2026-01-23T18:59:51.601089374Z" level=info msg="RemoveContainer for \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\"" Jan 23 18:59:51.620042 containerd[1628]: time="2026-01-23T18:59:51.619291333Z" level=info msg="RemoveContainer for \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\" returns successfully" Jan 23 18:59:51.620291 kubelet[2023]: I0123 18:59:51.620268 2023 scope.go:117] "RemoveContainer" containerID="7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b" Jan 23 18:59:51.627983 containerd[1628]: time="2026-01-23T18:59:51.627948082Z" level=info msg="RemoveContainer for \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\"" Jan 23 18:59:51.634265 containerd[1628]: time="2026-01-23T18:59:51.634217377Z" level=info msg="RemoveContainer for \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\" returns successfully" Jan 23 18:59:51.634542 kubelet[2023]: I0123 18:59:51.634458 2023 scope.go:117] "RemoveContainer" containerID="912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef" Jan 23 18:59:51.637221 containerd[1628]: time="2026-01-23T18:59:51.637193857Z" level=info msg="RemoveContainer for \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\"" Jan 23 18:59:51.643254 containerd[1628]: time="2026-01-23T18:59:51.643214521Z" level=info msg="RemoveContainer for \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\" returns successfully" Jan 23 18:59:51.643545 kubelet[2023]: I0123 18:59:51.643528 2023 scope.go:117] "RemoveContainer" containerID="62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06" Jan 23 18:59:51.645254 containerd[1628]: time="2026-01-23T18:59:51.645182627Z" level=info msg="RemoveContainer for 
\"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\"" Jan 23 18:59:51.650131 containerd[1628]: time="2026-01-23T18:59:51.650086070Z" level=info msg="RemoveContainer for \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\" returns successfully" Jan 23 18:59:51.650382 kubelet[2023]: I0123 18:59:51.650362 2023 scope.go:117] "RemoveContainer" containerID="40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb" Jan 23 18:59:51.651940 containerd[1628]: time="2026-01-23T18:59:51.651915292Z" level=info msg="RemoveContainer for \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\"" Jan 23 18:59:51.656377 containerd[1628]: time="2026-01-23T18:59:51.656352141Z" level=info msg="RemoveContainer for \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\" returns successfully" Jan 23 18:59:51.656587 kubelet[2023]: I0123 18:59:51.656569 2023 scope.go:117] "RemoveContainer" containerID="0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be" Jan 23 18:59:51.656993 containerd[1628]: time="2026-01-23T18:59:51.656897048Z" level=error msg="ContainerStatus for \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\": not found" Jan 23 18:59:51.657193 kubelet[2023]: E0123 18:59:51.657137 2023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\": not found" containerID="0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be" Jan 23 18:59:51.657270 kubelet[2023]: I0123 18:59:51.657165 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be"} err="failed to get 
container status \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ea400aa508d99979ed33b0858281201d436689eefc5bc8754ae98dcd21aa5be\": not found" Jan 23 18:59:51.657464 kubelet[2023]: I0123 18:59:51.657365 2023 scope.go:117] "RemoveContainer" containerID="7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b" Jan 23 18:59:51.657698 containerd[1628]: time="2026-01-23T18:59:51.657652102Z" level=error msg="ContainerStatus for \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\": not found" Jan 23 18:59:51.657827 kubelet[2023]: E0123 18:59:51.657803 2023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\": not found" containerID="7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b" Jan 23 18:59:51.657964 kubelet[2023]: I0123 18:59:51.657910 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b"} err="failed to get container status \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d0c1df369343be1c193dcc2eac3400b5835784df5dc2f4fedf0e2ab35b2c23b\": not found" Jan 23 18:59:51.657964 kubelet[2023]: I0123 18:59:51.657929 2023 scope.go:117] "RemoveContainer" containerID="912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef" Jan 23 18:59:51.658240 containerd[1628]: time="2026-01-23T18:59:51.658210345Z" level=error msg="ContainerStatus for 
\"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\": not found" Jan 23 18:59:51.658364 kubelet[2023]: E0123 18:59:51.658351 2023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\": not found" containerID="912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef" Jan 23 18:59:51.658427 kubelet[2023]: I0123 18:59:51.658414 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef"} err="failed to get container status \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\": rpc error: code = NotFound desc = an error occurred when try to find container \"912c39743cdf62e229e8c6ffa1a7daf225104e4285c1ecd1fff6f55bcac8faef\": not found" Jan 23 18:59:51.658515 kubelet[2023]: I0123 18:59:51.658469 2023 scope.go:117] "RemoveContainer" containerID="62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06" Jan 23 18:59:51.658640 containerd[1628]: time="2026-01-23T18:59:51.658593346Z" level=error msg="ContainerStatus for \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\": not found" Jan 23 18:59:51.658779 kubelet[2023]: E0123 18:59:51.658736 2023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\": not found" 
containerID="62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06" Jan 23 18:59:51.658867 kubelet[2023]: I0123 18:59:51.658754 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06"} err="failed to get container status \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\": rpc error: code = NotFound desc = an error occurred when try to find container \"62ffb5b8f965740cc30c0bae32f4e5ca4fd7c24dafe2e36c296918e4d02c0e06\": not found" Jan 23 18:59:51.658867 kubelet[2023]: I0123 18:59:51.658843 2023 scope.go:117] "RemoveContainer" containerID="40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb" Jan 23 18:59:51.659097 containerd[1628]: time="2026-01-23T18:59:51.659072185Z" level=error msg="ContainerStatus for \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\": not found" Jan 23 18:59:51.659225 kubelet[2023]: E0123 18:59:51.659210 2023 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\": not found" containerID="40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb" Jan 23 18:59:51.659313 kubelet[2023]: I0123 18:59:51.659298 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb"} err="failed to get container status \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\": rpc error: code = NotFound desc = an error occurred when try to find container \"40b4a0a975d9504a17e46b68307fe48a2112a57d24d9c6d031ebe200ed79bbbb\": not found" Jan 23 
18:59:52.284860 kubelet[2023]: E0123 18:59:52.284748 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:53.285823 kubelet[2023]: E0123 18:59:53.285754 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:53.383887 kubelet[2023]: I0123 18:59:53.382870 2023 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac315e3-d97e-4113-bc9a-097f2adf7bc7" path="/var/lib/kubelet/pods/8ac315e3-d97e-4113-bc9a-097f2adf7bc7/volumes" Jan 23 18:59:54.286744 kubelet[2023]: E0123 18:59:54.286622 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:54.419194 systemd[1]: Created slice kubepods-burstable-podb9501ddc_1d47_4007_85ac_abf183fa8f29.slice - libcontainer container kubepods-burstable-podb9501ddc_1d47_4007_85ac_abf183fa8f29.slice. Jan 23 18:59:54.460959 systemd[1]: Created slice kubepods-besteffort-podd82b5eb7_6ef0_4209_8b87_80d6de0db0a2.slice - libcontainer container kubepods-besteffort-podd82b5eb7_6ef0_4209_8b87_80d6de0db0a2.slice. 
Jan 23 18:59:54.498478 kubelet[2023]: I0123 18:59:54.498365 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdp2v\" (UniqueName: \"kubernetes.io/projected/b9501ddc-1d47-4007-85ac-abf183fa8f29-kube-api-access-bdp2v\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.498478 kubelet[2023]: I0123 18:59:54.498451 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwkjq\" (UniqueName: \"kubernetes.io/projected/d82b5eb7-6ef0-4209-8b87-80d6de0db0a2-kube-api-access-vwkjq\") pod \"cilium-operator-6c4d7847fc-cf9ld\" (UID: \"d82b5eb7-6ef0-4209-8b87-80d6de0db0a2\") " pod="kube-system/cilium-operator-6c4d7847fc-cf9ld" Jan 23 18:59:54.498780 kubelet[2023]: I0123 18:59:54.498500 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-cilium-cgroup\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.498780 kubelet[2023]: I0123 18:59:54.498541 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d82b5eb7-6ef0-4209-8b87-80d6de0db0a2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cf9ld\" (UID: \"d82b5eb7-6ef0-4209-8b87-80d6de0db0a2\") " pod="kube-system/cilium-operator-6c4d7847fc-cf9ld" Jan 23 18:59:54.498780 kubelet[2023]: I0123 18:59:54.498579 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-bpf-maps\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.498780 
kubelet[2023]: I0123 18:59:54.498615 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-etc-cni-netd\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.498780 kubelet[2023]: I0123 18:59:54.498658 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9501ddc-1d47-4007-85ac-abf183fa8f29-clustermesh-secrets\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499158 kubelet[2023]: I0123 18:59:54.498704 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9501ddc-1d47-4007-85ac-abf183fa8f29-cilium-ipsec-secrets\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499158 kubelet[2023]: I0123 18:59:54.498748 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-cilium-run\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499158 kubelet[2023]: I0123 18:59:54.498788 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-xtables-lock\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499158 kubelet[2023]: I0123 18:59:54.498826 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-host-proc-sys-kernel\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499158 kubelet[2023]: I0123 18:59:54.498865 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9501ddc-1d47-4007-85ac-abf183fa8f29-hubble-tls\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499158 kubelet[2023]: I0123 18:59:54.498909 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-hostproc\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499586 kubelet[2023]: I0123 18:59:54.498945 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-cni-path\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499586 kubelet[2023]: I0123 18:59:54.498981 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-lib-modules\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499586 kubelet[2023]: I0123 18:59:54.499020 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9501ddc-1d47-4007-85ac-abf183fa8f29-cilium-config-path\") pod 
\"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.499586 kubelet[2023]: I0123 18:59:54.499069 2023 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9501ddc-1d47-4007-85ac-abf183fa8f29-host-proc-sys-net\") pod \"cilium-n26cj\" (UID: \"b9501ddc-1d47-4007-85ac-abf183fa8f29\") " pod="kube-system/cilium-n26cj" Jan 23 18:59:54.755485 containerd[1628]: time="2026-01-23T18:59:54.755411310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n26cj,Uid:b9501ddc-1d47-4007-85ac-abf183fa8f29,Namespace:kube-system,Attempt:0,}" Jan 23 18:59:54.765350 containerd[1628]: time="2026-01-23T18:59:54.765253391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cf9ld,Uid:d82b5eb7-6ef0-4209-8b87-80d6de0db0a2,Namespace:kube-system,Attempt:0,}" Jan 23 18:59:54.797594 containerd[1628]: time="2026-01-23T18:59:54.796774716Z" level=info msg="connecting to shim 0eddf21fdf069f60d759da17c97b05b73a6e669f977a1bc9e914fa332442eaff" address="unix:///run/containerd/s/27f3ac4feb4e4c8952a2e0c4fb89bad58ebf85f09e44f727bc72945e8f9a7b86" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:54.800255 containerd[1628]: time="2026-01-23T18:59:54.800209775Z" level=info msg="connecting to shim 4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2" address="unix:///run/containerd/s/71a767996b69c2cdfdd226c7eed4fa348b91f686f8db6df4b262ec4434e401b6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:54.835310 systemd[1]: Started cri-containerd-0eddf21fdf069f60d759da17c97b05b73a6e669f977a1bc9e914fa332442eaff.scope - libcontainer container 0eddf21fdf069f60d759da17c97b05b73a6e669f977a1bc9e914fa332442eaff. 
Jan 23 18:59:54.841281 systemd[1]: Started cri-containerd-4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2.scope - libcontainer container 4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2. Jan 23 18:59:54.880838 containerd[1628]: time="2026-01-23T18:59:54.880454726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n26cj,Uid:b9501ddc-1d47-4007-85ac-abf183fa8f29,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\"" Jan 23 18:59:54.887579 containerd[1628]: time="2026-01-23T18:59:54.887521874Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 18:59:54.897932 containerd[1628]: time="2026-01-23T18:59:54.897461616Z" level=info msg="Container dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:54.907974 containerd[1628]: time="2026-01-23T18:59:54.907950545Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b\"" Jan 23 18:59:54.908597 containerd[1628]: time="2026-01-23T18:59:54.908575404Z" level=info msg="StartContainer for \"dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b\"" Jan 23 18:59:54.910033 containerd[1628]: time="2026-01-23T18:59:54.910010387Z" level=info msg="connecting to shim dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b" address="unix:///run/containerd/s/71a767996b69c2cdfdd226c7eed4fa348b91f686f8db6df4b262ec4434e401b6" protocol=ttrpc version=3 Jan 23 18:59:54.923504 containerd[1628]: time="2026-01-23T18:59:54.923365387Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cf9ld,Uid:d82b5eb7-6ef0-4209-8b87-80d6de0db0a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eddf21fdf069f60d759da17c97b05b73a6e669f977a1bc9e914fa332442eaff\"" Jan 23 18:59:54.925880 containerd[1628]: time="2026-01-23T18:59:54.925856304Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 18:59:54.929252 systemd[1]: Started cri-containerd-dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b.scope - libcontainer container dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b. Jan 23 18:59:54.955761 containerd[1628]: time="2026-01-23T18:59:54.955681370Z" level=info msg="StartContainer for \"dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b\" returns successfully" Jan 23 18:59:54.960622 systemd[1]: cri-containerd-dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b.scope: Deactivated successfully. 
Jan 23 18:59:54.963734 containerd[1628]: time="2026-01-23T18:59:54.963659437Z" level=info msg="received container exit event container_id:\"dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b\" id:\"dc21f1a0bf75a4421d06839eafc980f817074253495eba18fb4cdcf2ad59fc1b\" pid:3683 exited_at:{seconds:1769194794 nanos:962849985}" Jan 23 18:59:55.236219 kubelet[2023]: E0123 18:59:55.236077 2023 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:55.287431 kubelet[2023]: E0123 18:59:55.287356 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:55.309716 containerd[1628]: time="2026-01-23T18:59:55.309357174Z" level=info msg="StopPodSandbox for \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\"" Jan 23 18:59:55.309716 containerd[1628]: time="2026-01-23T18:59:55.309594947Z" level=info msg="TearDown network for sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" successfully" Jan 23 18:59:55.309716 containerd[1628]: time="2026-01-23T18:59:55.309641927Z" level=info msg="StopPodSandbox for \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" returns successfully" Jan 23 18:59:55.314126 containerd[1628]: time="2026-01-23T18:59:55.313346154Z" level=info msg="RemovePodSandbox for \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\"" Jan 23 18:59:55.314126 containerd[1628]: time="2026-01-23T18:59:55.313431758Z" level=info msg="Forcibly stopping sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\"" Jan 23 18:59:55.314126 containerd[1628]: time="2026-01-23T18:59:55.313611484Z" level=info msg="TearDown network for sandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" successfully" Jan 23 18:59:55.316855 containerd[1628]: time="2026-01-23T18:59:55.316059611Z" level=info msg="Ensure that sandbox 
77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83 in task-service has been cleanup successfully" Jan 23 18:59:55.321800 containerd[1628]: time="2026-01-23T18:59:55.321746543Z" level=info msg="RemovePodSandbox \"77e906fb5d08bc18b48026e1322a1a332d8e07bcaa6baa02ec6898a385d20e83\" returns successfully" Jan 23 18:59:55.390707 kubelet[2023]: E0123 18:59:55.390651 2023 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 18:59:55.652329 containerd[1628]: time="2026-01-23T18:59:55.651301106Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 18:59:55.668835 containerd[1628]: time="2026-01-23T18:59:55.668788957Z" level=info msg="Container 29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:55.682175 containerd[1628]: time="2026-01-23T18:59:55.682134181Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51\"" Jan 23 18:59:55.682740 containerd[1628]: time="2026-01-23T18:59:55.682707471Z" level=info msg="StartContainer for \"29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51\"" Jan 23 18:59:55.684549 containerd[1628]: time="2026-01-23T18:59:55.684521938Z" level=info msg="connecting to shim 29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51" address="unix:///run/containerd/s/71a767996b69c2cdfdd226c7eed4fa348b91f686f8db6df4b262ec4434e401b6" protocol=ttrpc version=3 Jan 23 18:59:55.711286 systemd[1]: Started 
cri-containerd-29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51.scope - libcontainer container 29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51. Jan 23 18:59:55.737515 containerd[1628]: time="2026-01-23T18:59:55.737479417Z" level=info msg="StartContainer for \"29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51\" returns successfully" Jan 23 18:59:55.742005 systemd[1]: cri-containerd-29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51.scope: Deactivated successfully. Jan 23 18:59:55.744001 containerd[1628]: time="2026-01-23T18:59:55.743972687Z" level=info msg="received container exit event container_id:\"29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51\" id:\"29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51\" pid:3731 exited_at:{seconds:1769194795 nanos:743751770}" Jan 23 18:59:55.761414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29db17007ebb2caf57eb4b2b562266341fee2410745dad93107d80d61707da51-rootfs.mount: Deactivated successfully. 
Jan 23 18:59:56.288197 kubelet[2023]: E0123 18:59:56.288082 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:56.449745 kubelet[2023]: I0123 18:59:56.449687 2023 setters.go:618] "Node became not ready" node="10.0.5.167" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:59:56Z","lastTransitionTime":"2026-01-23T18:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 18:59:56.657050 containerd[1628]: time="2026-01-23T18:59:56.656330936Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 18:59:56.677805 containerd[1628]: time="2026-01-23T18:59:56.677751366Z" level=info msg="Container fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:56.694026 containerd[1628]: time="2026-01-23T18:59:56.693962817Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9\"" Jan 23 18:59:56.694953 containerd[1628]: time="2026-01-23T18:59:56.694908277Z" level=info msg="StartContainer for \"fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9\"" Jan 23 18:59:56.698770 containerd[1628]: time="2026-01-23T18:59:56.698718673Z" level=info msg="connecting to shim fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9" address="unix:///run/containerd/s/71a767996b69c2cdfdd226c7eed4fa348b91f686f8db6df4b262ec4434e401b6" protocol=ttrpc version=3 Jan 23 18:59:56.737481 systemd[1]: Started 
cri-containerd-fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9.scope - libcontainer container fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9. Jan 23 18:59:56.820230 systemd[1]: cri-containerd-fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9.scope: Deactivated successfully. Jan 23 18:59:56.821052 containerd[1628]: time="2026-01-23T18:59:56.821027488Z" level=info msg="StartContainer for \"fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9\" returns successfully" Jan 23 18:59:56.823300 containerd[1628]: time="2026-01-23T18:59:56.823269044Z" level=info msg="received container exit event container_id:\"fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9\" id:\"fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9\" pid:3784 exited_at:{seconds:1769194796 nanos:822938605}" Jan 23 18:59:56.843391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb2d2c8d39901ed84c742b46a77352615db3e5cd5ca1ea820f2356b10a5064b9-rootfs.mount: Deactivated successfully. 
Jan 23 18:59:57.127122 containerd[1628]: time="2026-01-23T18:59:57.126852910Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:57.128785 containerd[1628]: time="2026-01-23T18:59:57.128766133Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 18:59:57.130112 containerd[1628]: time="2026-01-23T18:59:57.130080300Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:57.131549 containerd[1628]: time="2026-01-23T18:59:57.131464077Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.205570609s" Jan 23 18:59:57.131549 containerd[1628]: time="2026-01-23T18:59:57.131488364Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 18:59:57.135998 containerd[1628]: time="2026-01-23T18:59:57.135727071Z" level=info msg="CreateContainer within sandbox \"0eddf21fdf069f60d759da17c97b05b73a6e669f977a1bc9e914fa332442eaff\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 18:59:57.143469 containerd[1628]: time="2026-01-23T18:59:57.143441223Z" level=info msg="Container 
cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:57.155813 containerd[1628]: time="2026-01-23T18:59:57.155729191Z" level=info msg="CreateContainer within sandbox \"0eddf21fdf069f60d759da17c97b05b73a6e669f977a1bc9e914fa332442eaff\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a\"" Jan 23 18:59:57.156237 containerd[1628]: time="2026-01-23T18:59:57.156218346Z" level=info msg="StartContainer for \"cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a\"" Jan 23 18:59:57.156878 containerd[1628]: time="2026-01-23T18:59:57.156842022Z" level=info msg="connecting to shim cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a" address="unix:///run/containerd/s/27f3ac4feb4e4c8952a2e0c4fb89bad58ebf85f09e44f727bc72945e8f9a7b86" protocol=ttrpc version=3 Jan 23 18:59:57.176234 systemd[1]: Started cri-containerd-cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a.scope - libcontainer container cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a. 
Jan 23 18:59:57.203506 containerd[1628]: time="2026-01-23T18:59:57.203479343Z" level=info msg="StartContainer for \"cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a\" returns successfully" Jan 23 18:59:57.288424 kubelet[2023]: E0123 18:59:57.288343 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:57.668374 kubelet[2023]: I0123 18:59:57.667183 2023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cf9ld" podStartSLOduration=1.4604978499999999 podStartE2EDuration="3.667145701s" podCreationTimestamp="2026-01-23 18:59:54 +0000 UTC" firstStartedPulling="2026-01-23 18:59:54.925399382 +0000 UTC m=+60.387109695" lastFinishedPulling="2026-01-23 18:59:57.132047235 +0000 UTC m=+62.593757546" observedRunningTime="2026-01-23 18:59:57.666764082 +0000 UTC m=+63.128474471" watchObservedRunningTime="2026-01-23 18:59:57.667145701 +0000 UTC m=+63.128856126" Jan 23 18:59:57.682511 containerd[1628]: time="2026-01-23T18:59:57.682428300Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 18:59:57.707250 containerd[1628]: time="2026-01-23T18:59:57.704374653Z" level=info msg="Container de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:57.716874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406713377.mount: Deactivated successfully. 
Jan 23 18:59:57.730324 containerd[1628]: time="2026-01-23T18:59:57.730254547Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50\"" Jan 23 18:59:57.731361 containerd[1628]: time="2026-01-23T18:59:57.731313248Z" level=info msg="StartContainer for \"de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50\"" Jan 23 18:59:57.733211 containerd[1628]: time="2026-01-23T18:59:57.733155934Z" level=info msg="connecting to shim de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50" address="unix:///run/containerd/s/71a767996b69c2cdfdd226c7eed4fa348b91f686f8db6df4b262ec4434e401b6" protocol=ttrpc version=3 Jan 23 18:59:57.769325 systemd[1]: Started cri-containerd-de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50.scope - libcontainer container de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50. Jan 23 18:59:57.804678 systemd[1]: cri-containerd-de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50.scope: Deactivated successfully. Jan 23 18:59:57.806312 containerd[1628]: time="2026-01-23T18:59:57.806277104Z" level=info msg="received container exit event container_id:\"de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50\" id:\"de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50\" pid:3865 exited_at:{seconds:1769194797 nanos:804584880}" Jan 23 18:59:57.817606 containerd[1628]: time="2026-01-23T18:59:57.817515552Z" level=info msg="StartContainer for \"de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50\" returns successfully" Jan 23 18:59:57.832342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de5d7cc8b74d9a5a1092d1e15e32251d64da921b68b98b35da42656a21419b50-rootfs.mount: Deactivated successfully. 
Jan 23 18:59:58.289295 kubelet[2023]: E0123 18:59:58.289203 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:58.688484 containerd[1628]: time="2026-01-23T18:59:58.688322119Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 18:59:58.714222 containerd[1628]: time="2026-01-23T18:59:58.714072419Z" level=info msg="Container 307221ac4cea6dc1820d552cdd3dbd0f130901af8b14ed0d12330dc4ee6a3198: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:58.729792 containerd[1628]: time="2026-01-23T18:59:58.729710559Z" level=info msg="CreateContainer within sandbox \"4a76b76815635bee981c9467601a221a48358ab81c6919f1a088ab569448bde2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"307221ac4cea6dc1820d552cdd3dbd0f130901af8b14ed0d12330dc4ee6a3198\"" Jan 23 18:59:58.732402 containerd[1628]: time="2026-01-23T18:59:58.732355318Z" level=info msg="StartContainer for \"307221ac4cea6dc1820d552cdd3dbd0f130901af8b14ed0d12330dc4ee6a3198\"" Jan 23 18:59:58.734571 containerd[1628]: time="2026-01-23T18:59:58.734525940Z" level=info msg="connecting to shim 307221ac4cea6dc1820d552cdd3dbd0f130901af8b14ed0d12330dc4ee6a3198" address="unix:///run/containerd/s/71a767996b69c2cdfdd226c7eed4fa348b91f686f8db6df4b262ec4434e401b6" protocol=ttrpc version=3 Jan 23 18:59:58.761264 systemd[1]: Started cri-containerd-307221ac4cea6dc1820d552cdd3dbd0f130901af8b14ed0d12330dc4ee6a3198.scope - libcontainer container 307221ac4cea6dc1820d552cdd3dbd0f130901af8b14ed0d12330dc4ee6a3198. 
Jan 23 18:59:58.819417 containerd[1628]: time="2026-01-23T18:59:58.819211718Z" level=info msg="StartContainer for \"307221ac4cea6dc1820d552cdd3dbd0f130901af8b14ed0d12330dc4ee6a3198\" returns successfully" Jan 23 18:59:59.126779 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256)) Jan 23 18:59:59.290333 kubelet[2023]: E0123 18:59:59.290267 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:59.709433 kubelet[2023]: I0123 18:59:59.709308 2023 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n26cj" podStartSLOduration=5.709277901 podStartE2EDuration="5.709277901s" podCreationTimestamp="2026-01-23 18:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:59:59.708887634 +0000 UTC m=+65.170598091" watchObservedRunningTime="2026-01-23 18:59:59.709277901 +0000 UTC m=+65.170988356" Jan 23 19:00:00.291337 kubelet[2023]: E0123 19:00:00.291227 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:01.291615 kubelet[2023]: E0123 19:00:01.291566 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:02.160568 systemd-networkd[1539]: lxc_health: Link UP Jan 23 19:00:02.162343 systemd-networkd[1539]: lxc_health: Gained carrier Jan 23 19:00:02.291702 kubelet[2023]: E0123 19:00:02.291663 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:03.209334 systemd-networkd[1539]: lxc_health: Gained IPv6LL Jan 23 19:00:03.292008 kubelet[2023]: E0123 19:00:03.291969 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 
19:00:04.293245 kubelet[2023]: E0123 19:00:04.293160 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:05.293976 kubelet[2023]: E0123 19:00:05.293889 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:06.294682 kubelet[2023]: E0123 19:00:06.294603 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:07.295456 kubelet[2023]: E0123 19:00:07.295383 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:08.296149 kubelet[2023]: E0123 19:00:08.296034 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:09.296864 kubelet[2023]: E0123 19:00:09.296771 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:10.297141 kubelet[2023]: E0123 19:00:10.297000 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:11.297855 kubelet[2023]: E0123 19:00:11.297789 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:12.298009 kubelet[2023]: E0123 19:00:12.297957 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:13.298508 kubelet[2023]: E0123 19:00:13.298436 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:14.299467 kubelet[2023]: E0123 19:00:14.299367 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 
19:00:15.236146 kubelet[2023]: E0123 19:00:15.236026 2023 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:15.300204 kubelet[2023]: E0123 19:00:15.300065 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:16.301340 kubelet[2023]: E0123 19:00:16.301245 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:17.301770 kubelet[2023]: E0123 19:00:17.301679 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:18.302320 kubelet[2023]: E0123 19:00:18.302219 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:19.303596 kubelet[2023]: E0123 19:00:19.303518 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:20.304304 kubelet[2023]: E0123 19:00:20.304190 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:21.305126 kubelet[2023]: E0123 19:00:21.305049 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:22.306339 kubelet[2023]: E0123 19:00:22.306175 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:23.307171 kubelet[2023]: E0123 19:00:23.307062 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:24.308204 kubelet[2023]: E0123 19:00:24.308122 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 
19:00:25.308429 kubelet[2023]: E0123 19:00:25.308353 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:26.309547 kubelet[2023]: E0123 19:00:26.309486 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:27.310368 kubelet[2023]: E0123 19:00:27.310289 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:28.311243 kubelet[2023]: E0123 19:00:28.311154 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:29.312412 kubelet[2023]: E0123 19:00:29.312331 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:30.313319 kubelet[2023]: E0123 19:00:30.313209 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:31.314485 kubelet[2023]: E0123 19:00:31.314375 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:32.315249 kubelet[2023]: E0123 19:00:32.315147 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:33.316228 kubelet[2023]: E0123 19:00:33.316083 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:33.989277 kubelet[2023]: E0123 19:00:33.989188 2023 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.5.227:60532->10.0.5.235:2379: read: connection timed out" Jan 23 19:00:34.317334 kubelet[2023]: E0123 19:00:34.317198 2023 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:35.236370 kubelet[2023]: E0123 19:00:35.236260 2023 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:35.262767 systemd[1]: cri-containerd-cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a.scope: Deactivated successfully. Jan 23 19:00:35.268725 containerd[1628]: time="2026-01-23T19:00:35.268633113Z" level=info msg="received container exit event container_id:\"cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a\" id:\"cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a\" pid:3833 exit_status:1 exited_at:{seconds:1769194835 nanos:268085194}" Jan 23 19:00:35.318264 kubelet[2023]: E0123 19:00:35.318063 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:35.326051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a-rootfs.mount: Deactivated successfully. 
Jan 23 19:00:35.799556 kubelet[2023]: I0123 19:00:35.799184 2023 scope.go:117] "RemoveContainer" containerID="cd71c70b83e900fe9311c1dffd24efe19a47677ce226024803c373bba8e89a4a" Jan 23 19:00:35.803747 containerd[1628]: time="2026-01-23T19:00:35.803649245Z" level=info msg="CreateContainer within sandbox \"0eddf21fdf069f60d759da17c97b05b73a6e669f977a1bc9e914fa332442eaff\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Jan 23 19:00:35.829768 containerd[1628]: time="2026-01-23T19:00:35.828722983Z" level=info msg="Container 3b8a70343de0dc067d1a0e067d36da0b2e8cc4ea7c8a71a13c6aad7f25e0b96f: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:00:35.845409 containerd[1628]: time="2026-01-23T19:00:35.845327816Z" level=info msg="CreateContainer within sandbox \"0eddf21fdf069f60d759da17c97b05b73a6e669f977a1bc9e914fa332442eaff\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"3b8a70343de0dc067d1a0e067d36da0b2e8cc4ea7c8a71a13c6aad7f25e0b96f\"" Jan 23 19:00:35.846401 containerd[1628]: time="2026-01-23T19:00:35.846348440Z" level=info msg="StartContainer for \"3b8a70343de0dc067d1a0e067d36da0b2e8cc4ea7c8a71a13c6aad7f25e0b96f\"" Jan 23 19:00:35.847643 containerd[1628]: time="2026-01-23T19:00:35.847587576Z" level=info msg="connecting to shim 3b8a70343de0dc067d1a0e067d36da0b2e8cc4ea7c8a71a13c6aad7f25e0b96f" address="unix:///run/containerd/s/27f3ac4feb4e4c8952a2e0c4fb89bad58ebf85f09e44f727bc72945e8f9a7b86" protocol=ttrpc version=3 Jan 23 19:00:35.878294 systemd[1]: Started cri-containerd-3b8a70343de0dc067d1a0e067d36da0b2e8cc4ea7c8a71a13c6aad7f25e0b96f.scope - libcontainer container 3b8a70343de0dc067d1a0e067d36da0b2e8cc4ea7c8a71a13c6aad7f25e0b96f. 
Jan 23 19:00:35.925563 containerd[1628]: time="2026-01-23T19:00:35.925513976Z" level=info msg="StartContainer for \"3b8a70343de0dc067d1a0e067d36da0b2e8cc4ea7c8a71a13c6aad7f25e0b96f\" returns successfully" Jan 23 19:00:36.319061 kubelet[2023]: E0123 19:00:36.318970 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:37.199743 kubelet[2023]: E0123 19:00:37.199494 2023 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T19:00:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T19:00:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T19:00:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T19:00:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":63836358},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\\\",\\\"registry.k8s.io/kube-proxy:v1.33.7\\\"],\\\"sizeBytes\\\":31929115},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb
7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"10.0.5.167\": Patch \"https://10.0.5.227:6443/api/v1/nodes/10.0.5.167/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 19:00:37.320158 kubelet[2023]: E0123 19:00:37.320065 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:37.636803 kubelet[2023]: E0123 19:00:37.636705 2023 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.5.167\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.5.227:60438->10.0.5.235:2379: read: connection timed out" Jan 23 19:00:38.321278 kubelet[2023]: E0123 19:00:38.321174 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:39.322073 kubelet[2023]: E0123 19:00:39.322004 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:39.813405 kubelet[2023]: E0123 19:00:39.813033 2023 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.5.227:60324->10.0.5.235:2379: read: connection timed out" event="&Event{ObjectMeta:{cilium-operator-6c4d7847fc-cf9ld.188d71545e64790b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:cilium-operator-6c4d7847fc-cf9ld,UID:d82b5eb7-6ef0-4209-8b87-80d6de0db0a2,APIVersion:v1,ResourceVersion:1022,FieldPath:spec.containers{cilium-operator},},Reason:Pulled,Message:Container image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine,Source:EventSource{Component:kubelet,Host:10.0.5.167,},FirstTimestamp:2026-01-23 19:00:35.801307403 +0000 UTC m=+101.263017779,LastTimestamp:2026-01-23 19:00:35.801307403 +0000 UTC m=+101.263017779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.5.167,}" Jan 23 19:00:40.322510 kubelet[2023]: E0123 19:00:40.322434 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:41.323478 kubelet[2023]: E0123 19:00:41.323401 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:42.324191 kubelet[2023]: E0123 19:00:42.324066 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:43.325341 kubelet[2023]: E0123 19:00:43.325233 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:43.990229 kubelet[2023]: E0123 19:00:43.990080 2023 controller.go:195] "Failed to update lease" err="Put \"https://10.0.5.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.5.167?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 19:00:44.326272 kubelet[2023]: E0123 19:00:44.326032 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:44.612792 kubelet[2023]: I0123 19:00:44.612503 2023 status_manager.go:895] "Failed to get status for pod" podUID="d82b5eb7-6ef0-4209-8b87-80d6de0db0a2" pod="kube-system/cilium-operator-6c4d7847fc-cf9ld" err="rpc error: code = Unavailable desc = error reading from server: read tcp 
10.0.5.227:60452->10.0.5.235:2379: read: connection timed out" Jan 23 19:00:45.326900 kubelet[2023]: E0123 19:00:45.326798 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:46.327816 kubelet[2023]: E0123 19:00:46.327768 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:47.328411 kubelet[2023]: E0123 19:00:47.328354 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:47.638286 kubelet[2023]: E0123 19:00:47.637744 2023 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.5.167\": Get \"https://10.0.5.227:6443/api/v1/nodes/10.0.5.167?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 19:00:48.329240 kubelet[2023]: E0123 19:00:48.329168 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:49.330281 kubelet[2023]: E0123 19:00:49.330182 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:50.331495 kubelet[2023]: E0123 19:00:50.331346 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:51.332387 kubelet[2023]: E0123 19:00:51.332344 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:52.333589 kubelet[2023]: E0123 19:00:52.333466 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:53.333872 kubelet[2023]: E0123 19:00:53.333765 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 23 19:00:53.991288 kubelet[2023]: E0123 19:00:53.991176 2023 controller.go:195] "Failed to update lease" err="Put \"https://10.0.5.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.5.167?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 19:00:54.334687 kubelet[2023]: E0123 19:00:54.334460 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:55.236602 kubelet[2023]: E0123 19:00:55.236507 2023 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:55.334683 kubelet[2023]: E0123 19:00:55.334620 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:56.335585 kubelet[2023]: E0123 19:00:56.335479 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:57.336453 kubelet[2023]: E0123 19:00:57.336352 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:57.638969 kubelet[2023]: E0123 19:00:57.638740 2023 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.5.167\": Get \"https://10.0.5.227:6443/api/v1/nodes/10.0.5.167?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 19:00:58.337394 kubelet[2023]: E0123 19:00:58.337300 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:59.338367 kubelet[2023]: E0123 19:00:59.338258 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:00.339062 kubelet[2023]: E0123 19:01:00.338960 2023 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:01.340319 kubelet[2023]: E0123 19:01:01.340217 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:02.340519 kubelet[2023]: E0123 19:01:02.340448 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:03.341604 kubelet[2023]: E0123 19:01:03.341514 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:03.992752 kubelet[2023]: E0123 19:01:03.992318 2023 controller.go:195] "Failed to update lease" err="Put \"https://10.0.5.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.5.167?timeout=10s\": context deadline exceeded" Jan 23 19:01:04.342979 kubelet[2023]: E0123 19:01:04.342752 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:05.343938 kubelet[2023]: E0123 19:01:05.343786 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:06.191594 systemd[1]: Started sshd@10-10.0.5.167:22-185.124.195.59:10410.service - OpenSSH per-connection server daemon (185.124.195.59:10410). Jan 23 19:01:06.221242 sshd[4606]: Connection closed by 185.124.195.59 port 10410 Jan 23 19:01:06.221871 systemd[1]: sshd@10-10.0.5.167:22-185.124.195.59:10410.service: Deactivated successfully. 
Jan 23 19:01:06.344706 kubelet[2023]: E0123 19:01:06.344557 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:07.344966 kubelet[2023]: E0123 19:01:07.344879 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:07.639365 kubelet[2023]: E0123 19:01:07.639169 2023 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.5.167\": Get \"https://10.0.5.227:6443/api/v1/nodes/10.0.5.167?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 19:01:07.639365 kubelet[2023]: E0123 19:01:07.639237 2023 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 23 19:01:08.345359 kubelet[2023]: E0123 19:01:08.345264 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:09.346077 kubelet[2023]: E0123 19:01:09.346013 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:10.346790 kubelet[2023]: E0123 19:01:10.346662 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:11.347336 kubelet[2023]: E0123 19:01:11.347257 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:12.348026 kubelet[2023]: E0123 19:01:12.347915 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:01:13.349157 kubelet[2023]: E0123 19:01:13.348986 2023 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"