Jan 23 18:51:32.785842 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026 Jan 23 18:51:32.785863 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 18:51:32.785871 kernel: BIOS-provided physical RAM map: Jan 23 18:51:32.785877 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 18:51:32.785882 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 18:51:32.785887 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 18:51:32.785894 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 18:51:32.785899 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 18:51:32.785904 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 18:51:32.785909 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 18:51:32.785915 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable Jan 23 18:51:32.785920 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 23 18:51:32.785925 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 23 18:51:32.785930 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 23 18:51:32.785938 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 23 18:51:32.785943 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 23 18:51:32.785949 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 23 18:51:32.785954 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 23 18:51:32.785959 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 23 18:51:32.785965 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 23 18:51:32.785970 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 23 18:51:32.785977 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 23 18:51:32.785982 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 23 18:51:32.785987 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 23 18:51:32.785993 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 18:51:32.785998 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 18:51:32.786003 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 18:51:32.786008 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 23 18:51:32.786014 kernel: NX (Execute Disable) protection: active Jan 23 18:51:32.786019 kernel: APIC: Static calls initialized Jan 23 18:51:32.786025 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable Jan 23 18:51:32.786030 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable Jan 23 18:51:32.786035 kernel: extended physical RAM map: Jan 23 18:51:32.786042 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 18:51:32.786048 
kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 23 18:51:32.786053 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 23 18:51:32.786058 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 23 18:51:32.786063 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 23 18:51:32.786069 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 23 18:51:32.786074 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 23 18:51:32.786082 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable Jan 23 18:51:32.786090 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable Jan 23 18:51:32.786096 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable Jan 23 18:51:32.786102 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable Jan 23 18:51:32.786108 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable Jan 23 18:51:32.786114 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved Jan 23 18:51:32.786120 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable Jan 23 18:51:32.786126 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved Jan 23 18:51:32.786133 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable Jan 23 18:51:32.786139 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved Jan 23 18:51:32.786145 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data Jan 23 18:51:32.786151 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS Jan 23 18:51:32.786157 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable Jan 23 18:51:32.786163 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved Jan 23 18:51:32.786169 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS Jan 23 18:51:32.786175 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable Jan 23 18:51:32.786181 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved Jan 23 18:51:32.786187 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS Jan 23 18:51:32.786193 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 23 18:51:32.786200 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 18:51:32.786206 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 23 18:51:32.786212 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 23 18:51:32.786218 kernel: efi: EFI v2.7 by EDK II Jan 23 18:51:32.786224 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018 Jan 23 18:51:32.786230 kernel: random: crng init done Jan 23 18:51:32.786237 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 23 18:51:32.786243 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 23 18:51:32.786292 kernel: secureboot: Secure boot disabled Jan 23 18:51:32.786298 kernel: SMBIOS 2.8 present. 
Jan 23 18:51:32.786305 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 23 18:51:32.786311 kernel: DMI: Memory slots populated: 1/1 Jan 23 18:51:32.786319 kernel: Hypervisor detected: KVM Jan 23 18:51:32.786325 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 23 18:51:32.786331 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 18:51:32.786337 kernel: kvm-clock: using sched offset of 5236021439 cycles Jan 23 18:51:32.786343 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 18:51:32.786349 kernel: tsc: Detected 2294.586 MHz processor Jan 23 18:51:32.786356 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 18:51:32.786362 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 18:51:32.786368 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000 Jan 23 18:51:32.786375 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 23 18:51:32.786383 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 18:51:32.786389 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000 Jan 23 18:51:32.786394 kernel: Using GB pages for direct mapping Jan 23 18:51:32.786400 kernel: ACPI: Early table checksum verification disabled Jan 23 18:51:32.786406 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS ) Jan 23 18:51:32.786412 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jan 23 18:51:32.786418 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:51:32.786424 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:51:32.786430 kernel: ACPI: FACS 0x000000007FBDD000 000040 Jan 23 18:51:32.786437 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:51:32.786443 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:51:32.786449 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:51:32.786454 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 23 18:51:32.786460 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3] Jan 23 18:51:32.786466 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b] Jan 23 18:51:32.786472 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f] Jan 23 18:51:32.786477 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f] Jan 23 18:51:32.786483 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b] Jan 23 18:51:32.786490 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027] Jan 23 18:51:32.786496 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037] Jan 23 18:51:32.786502 kernel: No NUMA configuration found Jan 23 18:51:32.786508 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 23 18:51:32.786514 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] Jan 23 18:51:32.786519 kernel: Zone ranges: Jan 23 18:51:32.786525 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 18:51:32.786531 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 23 18:51:32.786537 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 23 18:51:32.786544 kernel: Device empty Jan 23 18:51:32.786550 kernel: Movable zone start for each node Jan 
23 18:51:32.786556 kernel: Early memory node ranges Jan 23 18:51:32.786562 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 23 18:51:32.786567 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 23 18:51:32.786573 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 23 18:51:32.786579 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 23 18:51:32.786585 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff] Jan 23 18:51:32.786590 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff] Jan 23 18:51:32.786596 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff] Jan 23 18:51:32.786609 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff] Jan 23 18:51:32.786615 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff] Jan 23 18:51:32.786622 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 23 18:51:32.786629 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 23 18:51:32.786636 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:51:32.786642 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 23 18:51:32.786648 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 23 18:51:32.786655 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:51:32.786662 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 23 18:51:32.786669 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 23 18:51:32.786675 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges Jan 23 18:51:32.786681 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 23 18:51:32.786687 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 23 18:51:32.786694 kernel: On node 0, zone Normal: 276 pages in unavailable ranges Jan 23 18:51:32.786700 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 18:51:32.786707 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 18:51:32.786713 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 18:51:32.786721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 18:51:32.786727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 18:51:32.786734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 18:51:32.786740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 18:51:32.786746 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 18:51:32.786753 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 18:51:32.786759 kernel: TSC deadline timer available Jan 23 18:51:32.786765 kernel: CPU topo: Max. logical packages: 2 Jan 23 18:51:32.786772 kernel: CPU topo: Max. logical dies: 2 Jan 23 18:51:32.786780 kernel: CPU topo: Max. dies per package: 1 Jan 23 18:51:32.786786 kernel: CPU topo: Max. threads per core: 1 Jan 23 18:51:32.786792 kernel: CPU topo: Num. cores per package: 1 Jan 23 18:51:32.786798 kernel: CPU topo: Num. 
threads per package: 1 Jan 23 18:51:32.786805 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 23 18:51:32.786811 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 18:51:32.786818 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 18:51:32.786824 kernel: kvm-guest: setup PV sched yield Jan 23 18:51:32.786831 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 23 18:51:32.786838 kernel: Booting paravirtualized kernel on KVM Jan 23 18:51:32.786845 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 18:51:32.786851 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 23 18:51:32.786858 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 23 18:51:32.786864 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 23 18:51:32.786870 kernel: pcpu-alloc: [0] 0 1 Jan 23 18:51:32.786876 kernel: kvm-guest: PV spinlocks enabled Jan 23 18:51:32.786883 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 18:51:32.786890 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 18:51:32.786898 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 18:51:32.786905 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 18:51:32.786911 kernel: Fallback order for Node 0: 0 Jan 23 18:51:32.786917 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694 Jan 23 18:51:32.786923 kernel: Policy zone: Normal Jan 23 18:51:32.786930 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 18:51:32.786936 kernel: software IO TLB: area num 2. Jan 23 18:51:32.786942 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 18:51:32.786950 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 18:51:32.786957 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 18:51:32.786963 kernel: Dynamic Preempt: voluntary Jan 23 18:51:32.786969 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 18:51:32.786976 kernel: rcu: RCU event tracing is enabled. Jan 23 18:51:32.786983 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 18:51:32.786989 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 18:51:32.786996 kernel: Rude variant of Tasks RCU enabled. Jan 23 18:51:32.787002 kernel: Tracing variant of Tasks RCU enabled. Jan 23 18:51:32.787010 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 18:51:32.787016 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 18:51:32.787022 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:51:32.787029 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:51:32.787035 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 23 18:51:32.787042 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 23 18:51:32.787048 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 18:51:32.787054 kernel: Console: colour dummy device 80x25 Jan 23 18:51:32.787061 kernel: printk: legacy console [tty0] enabled Jan 23 18:51:32.787068 kernel: printk: legacy console [ttyS0] enabled Jan 23 18:51:32.787075 kernel: ACPI: Core revision 20240827 Jan 23 18:51:32.787081 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 18:51:32.787087 kernel: x2apic enabled Jan 23 18:51:32.787094 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 18:51:32.787100 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 18:51:32.787106 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 18:51:32.787113 kernel: kvm-guest: setup PV IPIs Jan 23 18:51:32.787119 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21133ac8314, max_idle_ns: 440795303427 ns Jan 23 18:51:32.787127 kernel: Calibrating delay loop (skipped) preset value.. 4589.17 BogoMIPS (lpj=2294586) Jan 23 18:51:32.787133 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 18:51:32.787140 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 23 18:51:32.787146 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 23 18:51:32.787152 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 18:51:32.787158 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jan 23 18:51:32.787164 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 23 18:51:32.787170 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 23 18:51:32.787176 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 23 18:51:32.787183 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 23 18:51:32.787189 kernel: TAA: Mitigation: Clear CPU buffers Jan 23 18:51:32.787196 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 23 18:51:32.787202 kernel: active return thunk: its_return_thunk Jan 23 18:51:32.787209 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 23 18:51:32.787215 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 18:51:32.787221 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 18:51:32.787227 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 18:51:32.787233 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 23 18:51:32.787239 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 23 18:51:32.787262 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 23 18:51:32.787269 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 23 18:51:32.787277 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 18:51:32.787284 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 23 18:51:32.787290 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 23 18:51:32.787296 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 23 18:51:32.787303 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Jan 23 18:51:32.787310 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
Jan 23 18:51:32.787316 kernel: Freeing SMP alternatives memory: 32K Jan 23 18:51:32.787323 kernel: pid_max: default: 32768 minimum: 301 Jan 23 18:51:32.787329 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 18:51:32.787336 kernel: landlock: Up and running. Jan 23 18:51:32.787342 kernel: SELinux: Initializing. Jan 23 18:51:32.787349 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:51:32.787357 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 18:51:32.787363 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6) Jan 23 18:51:32.787370 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver. Jan 23 18:51:32.787377 kernel: ... version: 2 Jan 23 18:51:32.787384 kernel: ... bit width: 48 Jan 23 18:51:32.787391 kernel: ... generic registers: 8 Jan 23 18:51:32.787397 kernel: ... value mask: 0000ffffffffffff Jan 23 18:51:32.787404 kernel: ... max period: 00007fffffffffff Jan 23 18:51:32.787411 kernel: ... fixed-purpose events: 3 Jan 23 18:51:32.787418 kernel: ... event mask: 00000007000000ff Jan 23 18:51:32.787426 kernel: signal: max sigframe size: 3632 Jan 23 18:51:32.787433 kernel: rcu: Hierarchical SRCU implementation. Jan 23 18:51:32.787439 kernel: rcu: Max phase no-delay instances is 400. Jan 23 18:51:32.787446 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 18:51:32.787453 kernel: smp: Bringing up secondary CPUs ... Jan 23 18:51:32.787460 kernel: smpboot: x86: Booting SMP configuration: Jan 23 18:51:32.787466 kernel: .... node #0, CPUs: #1 Jan 23 18:51:32.787473 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 18:51:32.787480 kernel: smpboot: Total of 2 processors activated (9178.34 BogoMIPS) Jan 23 18:51:32.787488 kernel: Memory: 3945188K/4186776K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 236708K reserved, 0K cma-reserved) Jan 23 18:51:32.787495 kernel: devtmpfs: initialized Jan 23 18:51:32.787502 kernel: x86/mm: Memory block size: 128MB Jan 23 18:51:32.787509 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 23 18:51:32.787516 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 23 18:51:32.787522 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 23 18:51:32.787529 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes) Jan 23 18:51:32.787536 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes) Jan 23 18:51:32.787543 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes) Jan 23 18:51:32.787551 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 18:51:32.787558 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 18:51:32.787565 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 18:51:32.787572 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 18:51:32.787578 kernel: audit: initializing netlink subsys (disabled) Jan 23 18:51:32.787585 kernel: audit: type=2000 audit(1769194290.886:1): state=initialized audit_enabled=0 res=1 Jan 23 18:51:32.787592 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 18:51:32.787598 kernel: thermal_sys: Registered thermal governor 'user_space' 
Jan 23 18:51:32.787607 kernel: cpuidle: using governor menu Jan 23 18:51:32.787613 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 18:51:32.787620 kernel: dca service started, version 1.12.1 Jan 23 18:51:32.787627 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 23 18:51:32.787634 kernel: PCI: Using configuration type 1 for base access Jan 23 18:51:32.787641 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 23 18:51:32.787647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 18:51:32.787654 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 18:51:32.787661 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 18:51:32.787669 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 18:51:32.787676 kernel: ACPI: Added _OSI(Module Device) Jan 23 18:51:32.787683 kernel: ACPI: Added _OSI(Processor Device) Jan 23 18:51:32.787689 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 18:51:32.787696 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 18:51:32.787703 kernel: ACPI: Interpreter enabled Jan 23 18:51:32.787710 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 18:51:32.787716 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 18:51:32.787723 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 18:51:32.787730 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 18:51:32.787740 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 18:51:32.787747 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 18:51:32.787875 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 18:51:32.787936 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 18:51:32.787992 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 18:51:32.788000 kernel: PCI host bridge to bus 0000:00 Jan 23 18:51:32.788061 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 18:51:32.788115 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 18:51:32.788165 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 18:51:32.788215 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window] Jan 23 18:51:32.788279 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 23 18:51:32.788330 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window] Jan 23 18:51:32.788382 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 18:51:32.788454 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 18:51:32.788523 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jan 23 18:51:32.788585 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref] Jan 23 18:51:32.788646 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref] Jan 23 18:51:32.788706 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff] Jan 23 18:51:32.788765 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 23 18:51:32.788828 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 18:51:32.788903 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 
0x060400 PCIe Root Port Jan 23 18:51:32.788965 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff] Jan 23 18:51:32.789026 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 18:51:32.789087 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 23 18:51:32.789146 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 23 18:51:32.789205 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:51:32.789293 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.789361 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff] Jan 23 18:51:32.789424 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 18:51:32.789487 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 23 18:51:32.789550 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 18:51:32.789618 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.789677 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff] Jan 23 18:51:32.789738 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 18:51:32.789796 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 23 18:51:32.789853 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 18:51:32.789915 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.789975 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff] Jan 23 18:51:32.790033 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 18:51:32.790092 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 23 18:51:32.790153 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 18:51:32.790219 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.790298 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff] Jan 23 18:51:32.790363 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 18:51:32.790426 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 23 18:51:32.790489 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 18:51:32.790557 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.790622 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff] Jan 23 18:51:32.790685 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 18:51:32.790748 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 23 18:51:32.790810 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 18:51:32.790878 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.790941 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff] Jan 23 18:51:32.791006 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 18:51:32.791068 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 23 18:51:32.791133 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 18:51:32.791201 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.791656 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff] Jan 23 18:51:32.791733 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 18:51:32.791810 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 23 
18:51:32.791879 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 18:51:32.791955 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.792021 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff] Jan 23 18:51:32.792087 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 18:51:32.792152 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 23 18:51:32.792218 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 18:51:32.792300 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.792382 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff] Jan 23 18:51:32.792452 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 18:51:32.792517 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 23 18:51:32.792583 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 18:51:32.794347 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.794437 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Jan 23 18:51:32.794510 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 18:51:32.794578 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 23 18:51:32.794651 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 18:51:32.794725 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.794795 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Jan 23 18:51:32.794866 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 18:51:32.794937 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 23 18:51:32.795006 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 18:51:32.795079 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.795147 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Jan 23 18:51:32.795215 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 18:51:32.795309 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 23 18:51:32.795386 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 18:51:32.795464 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.795536 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Jan 23 18:51:32.795604 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 18:51:32.795672 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 23 18:51:32.795740 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 18:51:32.795825 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.795895 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff] Jan 23 18:51:32.795961 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 18:51:32.796029 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 23 18:51:32.796094 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 18:51:32.796164 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.796231 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Jan 23 18:51:32.797343 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 
18:51:32.797423 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 23 18:51:32.797491 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 18:51:32.797569 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.797636 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Jan 23 18:51:32.797703 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 18:51:32.797768 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 23 18:51:32.797833 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 18:51:32.797925 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.797994 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Jan 23 18:51:32.798062 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 18:51:32.798128 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 23 18:51:32.798194 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 18:51:32.799671 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.799757 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Jan 23 18:51:32.799843 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 18:51:32.799910 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 23 18:51:32.799976 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 18:51:32.800042 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.800106 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 23 18:51:32.800170 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 18:51:32.800232 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 23 18:51:32.802348 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 18:51:32.802432 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.802499 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 23 18:51:32.802560 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 18:51:32.802622 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 23 18:51:32.802686 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 18:51:32.802752 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.802813 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 23 18:51:32.802873 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 18:51:32.802937 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 23 18:51:32.802998 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 18:51:32.803070 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.803130 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 23 18:51:32.803192 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 18:51:32.803269 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 23 18:51:32.803334 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 18:51:32.803403 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.803467 kernel: pci 0000:00:04.7: BAR 0 [mem 
0x84386000-0x84386fff] Jan 23 18:51:32.803531 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 18:51:32.803594 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 23 18:51:32.803657 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 18:51:32.803728 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.803806 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 23 18:51:32.803875 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 18:51:32.803952 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 23 18:51:32.804013 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 18:51:32.804077 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.804136 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 23 18:51:32.804198 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 18:51:32.804266 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 23 18:51:32.804326 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 18:51:32.804394 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.804456 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 23 18:51:32.804516 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 18:51:32.804575 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 18:51:32.804638 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 18:51:32.804706 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.804770 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 23 18:51:32.804833 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 18:51:32.804895 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 18:51:32.804954 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 18:51:32.805020 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 18:51:32.805087 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 23 18:51:32.805151 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 18:51:32.805214 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 23 18:51:32.805841 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 18:51:32.805922 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 18:51:32.805989 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 18:51:32.806061 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 18:51:32.806128 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 23 18:51:32.806191 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 23 18:51:32.806301 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 18:51:32.806368 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 23 18:51:32.806444 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 23 18:51:32.806510 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 23 18:51:32.806580 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 18:51:32.806647 kernel: pci 
0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 23 18:51:32.806712 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 23 18:51:32.806777 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:51:32.806843 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 18:51:32.806918 kernel: pci_bus 0000:02: extended config space not accessible Jan 23 18:51:32.806929 kernel: acpiphp: Slot [1] registered Jan 23 18:51:32.806936 kernel: acpiphp: Slot [0] registered Jan 23 18:51:32.806945 kernel: acpiphp: Slot [2] registered Jan 23 18:51:32.806952 kernel: acpiphp: Slot [3] registered Jan 23 18:51:32.806959 kernel: acpiphp: Slot [4] registered Jan 23 18:51:32.806966 kernel: acpiphp: Slot [5] registered Jan 23 18:51:32.806972 kernel: acpiphp: Slot [6] registered Jan 23 18:51:32.806979 kernel: acpiphp: Slot [7] registered Jan 23 18:51:32.806985 kernel: acpiphp: Slot [8] registered Jan 23 18:51:32.806992 kernel: acpiphp: Slot [9] registered Jan 23 18:51:32.806998 kernel: acpiphp: Slot [10] registered Jan 23 18:51:32.807005 kernel: acpiphp: Slot [11] registered Jan 23 18:51:32.807013 kernel: acpiphp: Slot [12] registered Jan 23 18:51:32.807020 kernel: acpiphp: Slot [13] registered Jan 23 18:51:32.807026 kernel: acpiphp: Slot [14] registered Jan 23 18:51:32.807033 kernel: acpiphp: Slot [15] registered Jan 23 18:51:32.807039 kernel: acpiphp: Slot [16] registered Jan 23 18:51:32.807046 kernel: acpiphp: Slot [17] registered Jan 23 18:51:32.807052 kernel: acpiphp: Slot [18] registered Jan 23 18:51:32.807058 kernel: acpiphp: Slot [19] registered Jan 23 18:51:32.807065 kernel: acpiphp: Slot [20] registered Jan 23 18:51:32.807073 kernel: acpiphp: Slot [21] registered Jan 23 18:51:32.807080 kernel: acpiphp: Slot [22] registered Jan 23 18:51:32.807086 kernel: acpiphp: Slot [23] registered Jan 23 18:51:32.807092 kernel: acpiphp: Slot [24] registered Jan 23 18:51:32.807099 kernel: acpiphp: Slot [25] registered Jan 23 18:51:32.807105 kernel: acpiphp: Slot [26] registered Jan 23 18:51:32.807112 kernel: acpiphp: Slot [27] registered Jan 23 18:51:32.807118 kernel: acpiphp: Slot [28] registered Jan 23 18:51:32.807125 kernel: acpiphp: Slot [29] registered Jan 23 18:51:32.807133 kernel: acpiphp: Slot [30] registered Jan 23 18:51:32.807141 kernel: acpiphp: Slot [31] registered Jan 23 18:51:32.807214 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 23 18:51:32.807768 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 23 18:51:32.807858 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 18:51:32.807868 kernel: acpiphp: Slot [0-2] registered Jan 23 18:51:32.807943 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 23 18:51:32.808011 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 23 18:51:32.808081 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 23 18:51:32.808147 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 23 18:51:32.808522 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 18:51:32.808538 kernel: acpiphp: Slot [0-3] registered Jan 23 18:51:32.808612 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 23 18:51:32.808692 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 23 18:51:32.808760 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 23 18:51:32.808831 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 18:51:32.808841 
kernel: acpiphp: Slot [0-4] registered Jan 23 18:51:32.808913 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 23 18:51:32.808981 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 23 18:51:32.809046 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 18:51:32.809056 kernel: acpiphp: Slot [0-5] registered Jan 23 18:51:32.809129 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 23 18:51:32.809198 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 23 18:51:32.809281 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 23 18:51:32.809347 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 18:51:32.809357 kernel: acpiphp: Slot [0-6] registered Jan 23 18:51:32.809421 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 18:51:32.809431 kernel: acpiphp: Slot [0-7] registered Jan 23 18:51:32.809495 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 18:51:32.809504 kernel: acpiphp: Slot [0-8] registered Jan 23 18:51:32.809570 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 18:51:32.809580 kernel: acpiphp: Slot [0-9] registered Jan 23 18:51:32.809644 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 18:51:32.809654 kernel: acpiphp: Slot [0-10] registered Jan 23 18:51:32.809720 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 18:51:32.809730 kernel: acpiphp: Slot [0-11] registered Jan 23 18:51:32.809795 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 18:51:32.809805 kernel: acpiphp: Slot [0-12] registered Jan 23 18:51:32.809872 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 18:51:32.809881 kernel: acpiphp: Slot [0-13] registered Jan 23 18:51:32.809943 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 18:51:32.809953 kernel: acpiphp: Slot [0-14] registered Jan 23 18:51:32.810017 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 18:51:32.810027 kernel: acpiphp: Slot [0-15] registered Jan 23 18:51:32.810089 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 18:51:32.810099 kernel: acpiphp: Slot [0-16] registered Jan 23 18:51:32.810163 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 18:51:32.810173 kernel: acpiphp: Slot [0-17] registered Jan 23 18:51:32.810236 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 18:51:32.810251 kernel: acpiphp: Slot [0-18] registered Jan 23 18:51:32.810316 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 18:51:32.810325 kernel: acpiphp: Slot [0-19] registered Jan 23 18:51:32.810388 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 18:51:32.810400 kernel: acpiphp: Slot [0-20] registered Jan 23 18:51:32.810469 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 18:51:32.810479 kernel: acpiphp: Slot [0-21] registered Jan 23 18:51:32.810542 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 18:51:32.810551 kernel: acpiphp: Slot [0-22] registered Jan 23 18:51:32.810615 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 18:51:32.810624 kernel: acpiphp: Slot [0-23] registered Jan 23 18:51:32.810687 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 18:51:32.810698 kernel: acpiphp: Slot [0-24] registered Jan 23 18:51:32.810763 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 18:51:32.810773 kernel: acpiphp: Slot [0-25] registered Jan 23 18:51:32.810836 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 18:51:32.810845 kernel: acpiphp: Slot [0-26] registered Jan 23 18:51:32.810909 kernel: pci 0000:00:05.1: PCI bridge to [bus 
1b] Jan 23 18:51:32.810919 kernel: acpiphp: Slot [0-27] registered Jan 23 18:51:32.810982 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 18:51:32.810994 kernel: acpiphp: Slot [0-28] registered Jan 23 18:51:32.811058 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 18:51:32.811067 kernel: acpiphp: Slot [0-29] registered Jan 23 18:51:32.811138 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 18:51:32.811149 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 18:51:32.811156 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 18:51:32.811163 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 18:51:32.811171 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 18:51:32.811178 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 18:51:32.811188 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 18:51:32.811195 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 18:51:32.811202 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 18:51:32.811209 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 18:51:32.811216 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 18:51:32.811224 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 18:51:32.811230 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 18:51:32.811238 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 18:51:32.811254 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 18:51:32.811287 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 18:51:32.811295 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 18:51:32.811302 kernel: iommu: Default domain type: Translated Jan 23 18:51:32.811310 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:51:32.811318 kernel: efivars: Registered efivars operations Jan 23 18:51:32.811325 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:51:32.811332 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 18:51:32.811340 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 23 18:51:32.811347 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 23 18:51:32.811356 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 23 18:51:32.811363 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 23 18:51:32.811370 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 23 18:51:32.811378 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 23 18:51:32.811385 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 23 18:51:32.811392 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 23 18:51:32.811400 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 23 18:51:32.811470 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 18:51:32.811539 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 18:51:32.811605 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 18:51:32.811614 kernel: vgaarb: loaded Jan 23 18:51:32.811622 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 18:51:32.811630 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 18:51:32.811637 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:51:32.811644 kernel: pnp: PnP ACPI init Jan 23 18:51:32.811714 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] 
has been reserved Jan 23 18:51:32.811726 kernel: pnp: PnP ACPI: found 5 devices Jan 23 18:51:32.811734 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:51:32.811742 kernel: NET: Registered PF_INET protocol family Jan 23 18:51:32.811749 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:51:32.811757 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 18:51:32.811765 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:51:32.811773 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 18:51:32.811780 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 18:51:32.811824 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 18:51:32.811835 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:51:32.811842 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:51:32.811850 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:51:32.811858 kernel: NET: Registered PF_XDP protocol family Jan 23 18:51:32.811929 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 23 18:51:32.811991 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 23 18:51:32.812054 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 23 18:51:32.812119 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 23 18:51:32.812189 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 23 18:51:32.812344 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 23 18:51:32.812412 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 23 18:51:32.812474 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 23 18:51:32.812535 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 23 18:51:32.812597 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 23 18:51:32.812657 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 23 18:51:32.812718 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 23 18:51:32.812782 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 23 18:51:32.812844 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 23 18:51:32.812904 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 23 18:51:32.812994 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 23 18:51:32.813064 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 23 18:51:32.813191 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 23 18:51:32.813263 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 23 18:51:32.813326 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 23 18:51:32.813389 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 23 18:51:32.813450 kernel: pci 
0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 23 18:51:32.813510 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 23 18:51:32.813576 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 23 18:51:32.813637 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 23 18:51:32.813699 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 23 18:51:32.813760 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 23 18:51:32.813822 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 23 18:51:32.813885 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 23 18:51:32.813945 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 23 18:51:32.814010 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 23 18:51:32.814071 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 23 18:51:32.814130 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 23 18:51:32.814190 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 23 18:51:32.814265 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 23 18:51:32.814332 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 23 18:51:32.814393 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 23 18:51:32.814451 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 23 18:51:32.814508 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 23 18:51:32.814566 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 23 18:51:32.814625 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 23 18:51:32.814683 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 23 18:51:32.814740 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.814798 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.814858 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.814917 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.814978 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.815037 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.815096 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.815155 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.815214 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.815290 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.815352 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.815416 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.815479 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.815542 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.815605 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 
18:51:32.815668 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.815731 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.815810 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.815879 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.815943 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.816006 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.816069 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.816132 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.816194 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.818283 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.818375 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.818439 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.818498 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.818557 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.818616 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.818674 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 23 18:51:32.818732 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 23 18:51:32.818792 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 23 18:51:32.818855 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 23 18:51:32.818916 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 23 18:51:32.818976 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 23 18:51:32.819035 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 23 18:51:32.819094 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 23 18:51:32.819153 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 23 18:51:32.819213 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 23 18:51:32.819290 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 23 18:51:32.819355 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Jan 23 18:51:32.819415 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Jan 23 18:51:32.819483 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.819543 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.819609 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.819672 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.819736 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.819814 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.819884 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.819952 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.820018 kernel: pci 
0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.820083 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.820149 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.820214 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.821098 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.821168 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.821235 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.821583 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.821648 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.821708 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.821769 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.821830 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.821890 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.823295 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.823392 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.823463 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.823530 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.823597 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.823663 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.823730 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.823809 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 18:51:32.823878 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 23 18:51:32.823954 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 18:51:32.824024 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 23 18:51:32.824093 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 23 18:51:32.824160 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:51:32.824225 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 18:51:32.824297 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 23 18:51:32.824357 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 23 18:51:32.824417 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:51:32.824483 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned Jan 23 18:51:32.824550 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 18:51:32.824614 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 23 18:51:32.824677 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 18:51:32.824741 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 18:51:32.824804 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 23 18:51:32.824867 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 
18:51:32.824931 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 18:51:32.824994 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 23 18:51:32.825058 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 18:51:32.825121 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 18:51:32.825187 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 23 18:51:32.825259 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 18:51:32.825323 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 18:51:32.825387 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 23 18:51:32.825450 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 18:51:32.825517 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 18:51:32.825583 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 23 18:51:32.825645 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 18:51:32.825708 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 18:51:32.825771 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 23 18:51:32.825834 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 18:51:32.825897 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 18:51:32.825960 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 23 18:51:32.826024 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 18:51:32.826090 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 18:51:32.826155 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 23 18:51:32.826224 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 18:51:32.826342 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 18:51:32.826412 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 23 18:51:32.826479 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 18:51:32.826543 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 18:51:32.826953 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 23 18:51:32.827023 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 18:51:32.827084 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 18:51:32.827144 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 23 18:51:32.827209 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 18:51:32.827308 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 18:51:32.827611 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 23 18:51:32.827688 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 18:51:32.827755 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 18:51:32.827832 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 23 18:51:32.827901 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 18:51:32.827968 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 18:51:32.828034 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 23 18:51:32.828173 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 
18:51:32.830521 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 18:51:32.830608 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Jan 23 18:51:32.830678 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 23 18:51:32.830745 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 18:51:32.830811 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 18:51:32.830875 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Jan 23 18:51:32.830939 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 23 18:51:32.831009 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 18:51:32.831077 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 18:51:32.831143 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Jan 23 18:51:32.831208 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 23 18:51:32.831283 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 18:51:32.831351 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 18:51:32.831417 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Jan 23 18:51:32.831488 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 23 18:51:32.833340 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 18:51:32.833421 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 18:51:32.833488 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Jan 23 18:51:32.833554 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 23 18:51:32.833620 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 18:51:32.833684 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 18:51:32.833745 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff] Jan 23 18:51:32.833809 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 23 18:51:32.833872 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 18:51:32.833938 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 18:51:32.834002 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Jan 23 18:51:32.834066 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 23 18:51:32.834130 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 18:51:32.834197 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 18:51:32.836309 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Jan 23 18:51:32.836393 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 23 18:51:32.836464 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 18:51:32.836535 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 18:51:32.836603 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Jan 23 18:51:32.836669 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 23 18:51:32.836737 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 18:51:32.836810 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 18:51:32.836877 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Jan 23 18:51:32.836945 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 23 18:51:32.837011 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 
18:51:32.837081 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 18:51:32.837148 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Jan 23 18:51:32.837215 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 18:51:32.837336 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 18:51:32.837411 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 18:51:32.837477 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Jan 23 18:51:32.837543 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 18:51:32.837608 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 18:51:32.837676 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 18:51:32.837763 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Jan 23 18:51:32.837830 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 23 18:51:32.837895 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 18:51:32.838426 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 18:51:32.838488 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 18:51:32.838546 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 18:51:32.838603 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 23 18:51:32.838659 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 23 18:51:32.838716 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Jan 23 18:51:32.838783 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Jan 23 18:51:32.838848 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Jan 23 18:51:32.838907 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:51:32.838974 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Jan 23 18:51:32.839037 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Jan 23 18:51:32.839099 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 18:51:32.839166 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Jan 23 18:51:32.839228 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 18:51:32.839309 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Jan 23 18:51:32.839371 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 18:51:32.839440 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Jan 23 18:51:32.839503 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 18:51:32.839571 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Jan 23 18:51:32.839636 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 18:51:32.839704 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Jan 23 18:51:32.839766 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 18:51:32.839845 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Jan 23 18:51:32.839909 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 18:51:32.839980 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Jan 23 18:51:32.840042 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 18:51:32.840111 kernel: pci_bus 0000:0a: 
resource 1 [mem 0x83000000-0x831fffff] Jan 23 18:51:32.840174 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 18:51:32.840239 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Jan 23 18:51:32.840312 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 18:51:32.840380 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Jan 23 18:51:32.840442 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 18:51:32.840513 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Jan 23 18:51:32.840573 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 18:51:32.840638 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Jan 23 18:51:32.840698 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 18:51:32.840762 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Jan 23 18:51:32.840823 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 18:51:32.840888 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Jan 23 18:51:32.840948 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 18:51:32.841013 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Jan 23 18:51:32.841073 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 18:51:32.841138 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Jan 23 18:51:32.841201 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Jan 23 18:51:32.841268 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 18:51:32.841334 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Jan 23 18:51:32.841395 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Jan 23 18:51:32.841455 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 18:51:32.841518 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Jan 23 18:51:32.841578 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Jan 23 18:51:32.841639 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 18:51:32.841705 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Jan 23 18:51:32.841764 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Jan 23 18:51:32.841823 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 18:51:32.841888 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Jan 23 18:51:32.841948 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] Jan 23 18:51:32.842007 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 18:51:32.842073 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Jan 23 18:51:32.842133 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Jan 23 18:51:32.842203 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 18:51:32.842291 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Jan 23 18:51:32.842353 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Jan 23 18:51:32.842409 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 18:51:32.842471 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Jan 23 18:51:32.842528 kernel: pci_bus 0000:19: resource 1 [mem 
0x81200000-0x813fffff] Jan 23 18:51:32.842583 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 18:51:32.842646 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Jan 23 18:51:32.842703 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Jan 23 18:51:32.842759 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 18:51:32.842819 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Jan 23 18:51:32.842877 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Jan 23 18:51:32.842933 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 18:51:32.842992 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Jan 23 18:51:32.843048 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Jan 23 18:51:32.843103 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 18:51:32.843162 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Jan 23 18:51:32.843218 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Jan 23 18:51:32.843289 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 18:51:32.844090 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Jan 23 18:51:32.844165 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Jan 23 18:51:32.844227 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 18:51:32.844238 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 18:51:32.844270 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:51:32.844278 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 18:51:32.844289 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Jan 23 18:51:32.844296 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 18:51:32.844304 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21133ac8314, max_idle_ns: 440795303427 ns Jan 23 18:51:32.844311 kernel: Initialise system trusted keyrings Jan 23 18:51:32.844319 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 18:51:32.844326 kernel: Key type asymmetric registered Jan 23 18:51:32.844334 kernel: Asymmetric key parser 'x509' registered Jan 23 18:51:32.844341 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 18:51:32.844348 kernel: io scheduler mq-deadline registered Jan 23 18:51:32.844357 kernel: io scheduler kyber registered Jan 23 18:51:32.844364 kernel: io scheduler bfq registered Jan 23 18:51:32.844436 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 18:51:32.844501 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 18:51:32.844564 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 18:51:32.844625 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 18:51:32.844687 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 18:51:32.844751 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 18:51:32.844814 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 18:51:32.844875 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 18:51:32.844936 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 18:51:32.844997 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 18:51:32.845059 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 18:51:32.845122 kernel: pcieport 
0000:00:02.5: AER: enabled with IRQ 29 Jan 23 18:51:32.845184 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 18:51:32.845260 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 18:51:32.845339 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 18:51:32.845405 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 18:51:32.845415 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 18:51:32.845479 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 23 18:51:32.845540 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 23 18:51:32.845602 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Jan 23 18:51:32.845663 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Jan 23 18:51:32.845724 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Jan 23 18:51:32.845795 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Jan 23 18:51:32.845858 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Jan 23 18:51:32.845918 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Jan 23 18:51:32.845980 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Jan 23 18:51:32.846041 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Jan 23 18:51:32.846101 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Jan 23 18:51:32.846165 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Jan 23 18:51:32.846229 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Jan 23 18:51:32.846301 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Jan 23 18:51:32.846367 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Jan 23 18:51:32.846433 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39 Jan 23 18:51:32.846442 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 18:51:32.846505 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Jan 23 18:51:32.846568 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Jan 23 18:51:32.846634 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Jan 23 18:51:32.846698 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Jan 23 18:51:32.846763 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Jan 23 18:51:32.846827 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Jan 23 18:51:32.846891 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Jan 23 18:51:32.846955 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Jan 23 18:51:32.847019 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Jan 23 18:51:32.847084 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Jan 23 18:51:32.847151 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Jan 23 18:51:32.847214 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Jan 23 18:51:32.847311 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Jan 23 18:51:32.847378 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Jan 23 18:51:32.847446 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Jan 23 18:51:32.847512 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Jan 23 18:51:32.847522 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 23 18:51:32.847587 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Jan 23 18:51:32.847656 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Jan 23 18:51:32.847723 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Jan 23 18:51:32.847800 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Jan 23 18:51:32.847871 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Jan 23 18:51:32.847939 kernel: 
pcieport 0000:00:05.2: AER: enabled with IRQ 50 Jan 23 18:51:32.848006 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51 Jan 23 18:51:32.848069 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Jan 23 18:51:32.848137 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Jan 23 18:51:32.848205 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Jan 23 18:51:32.848215 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:51:32.848222 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:51:32.848230 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:51:32.848237 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 18:51:32.848253 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 18:51:32.848261 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 18:51:32.848333 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 18:51:32.848344 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 23 18:51:32.848406 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 18:51:32.848467 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T18:51:32 UTC (1769194292) Jan 23 18:51:32.848526 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 23 18:51:32.848535 kernel: intel_pstate: CPU model not supported Jan 23 18:51:32.848542 kernel: efifb: probing for efifb Jan 23 18:51:32.848549 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 23 18:51:32.848556 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 23 18:51:32.848563 kernel: efifb: scrolling: redraw Jan 23 18:51:32.848573 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 18:51:32.848580 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 18:51:32.848588 kernel: fb0: EFI VGA frame buffer device Jan 23 18:51:32.848595 kernel: pstore: Using crash dump compression: deflate Jan 23 18:51:32.848603 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 18:51:32.848610 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:51:32.848618 kernel: Segment Routing with IPv6 Jan 23 18:51:32.848626 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 18:51:32.848633 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:51:32.848643 kernel: Key type dns_resolver registered Jan 23 18:51:32.848650 kernel: IPI shorthand broadcast: enabled Jan 23 18:51:32.848658 kernel: sched_clock: Marking stable (3377001385, 145377500)->(3773093478, -250714593) Jan 23 18:51:32.848665 kernel: registered taskstats version 1 Jan 23 18:51:32.848673 kernel: Loading compiled-in X.509 certificates Jan 23 18:51:32.848680 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6' Jan 23 18:51:32.848688 kernel: Demotion targets for Node 0: null Jan 23 18:51:32.848695 kernel: Key type .fscrypt registered Jan 23 18:51:32.848702 kernel: Key type fscrypt-provisioning registered Jan 23 18:51:32.848710 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 18:51:32.848719 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:51:32.848726 kernel: ima: No architecture policies found Jan 23 18:51:32.848734 kernel: clk: Disabling unused clocks Jan 23 18:51:32.848741 kernel: Warning: unable to open an initial console. 
Jan 23 18:51:32.848749 kernel: Freeing unused kernel image (initmem) memory: 46200K Jan 23 18:51:32.848756 kernel: Write protecting the kernel read-only data: 40960k Jan 23 18:51:32.848764 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 18:51:32.848771 kernel: Run /init as init process Jan 23 18:51:32.848779 kernel: with arguments: Jan 23 18:51:32.848788 kernel: /init Jan 23 18:51:32.848795 kernel: with environment: Jan 23 18:51:32.848803 kernel: HOME=/ Jan 23 18:51:32.848810 kernel: TERM=linux Jan 23 18:51:32.848818 systemd[1]: Successfully made /usr/ read-only. Jan 23 18:51:32.848829 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:51:32.848838 systemd[1]: Detected virtualization kvm. Jan 23 18:51:32.848848 systemd[1]: Detected architecture x86-64. Jan 23 18:51:32.848856 systemd[1]: Running in initrd. Jan 23 18:51:32.848863 systemd[1]: No hostname configured, using default hostname. Jan 23 18:51:32.848871 systemd[1]: Hostname set to <localhost>. Jan 23 18:51:32.848879 systemd[1]: Initializing machine ID from VM UUID. Jan 23 18:51:32.848896 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:51:32.848906 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:51:32.848914 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:51:32.848922 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:51:32.848930 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:51:32.848938 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:51:32.848949 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:51:32.848958 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 18:51:32.848966 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 18:51:32.848974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:51:32.848982 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:51:32.848990 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:51:32.849000 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:51:32.849008 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:51:32.849016 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:51:32.849024 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:51:32.849032 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:51:32.849040 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 18:51:32.849048 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:51:32.849056 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 18:51:32.849064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:51:32.849073 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:51:32.849081 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:51:32.849089 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 18:51:32.849097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:51:32.849105 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:51:32.849114 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:51:32.849122 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:51:32.849130 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:51:32.849139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:51:32.849147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:32.849155 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:51:32.849164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:51:32.849172 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:51:32.849200 systemd-journald[223]: Collecting audit messages is disabled. Jan 23 18:51:32.849221 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:51:32.849229 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:32.849241 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 18:51:32.849259 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:51:32.849267 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:51:32.849275 kernel: Bridge firewalling registered Jan 23 18:51:32.849283 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:51:32.849291 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:51:32.849299 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:51:32.849307 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:51:32.849317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:51:32.849326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:51:32.849334 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:51:32.849343 systemd-journald[223]: Journal started Jan 23 18:51:32.849365 systemd-journald[223]: Runtime Journal (/run/log/journal/4dcc76ef4659461e9e690c8f14878c2b) is 8M, max 78M, 70M free. Jan 23 18:51:32.782727 systemd-modules-load[224]: Inserted module 'overlay' Jan 23 18:51:32.815682 systemd-modules-load[224]: Inserted module 'br_netfilter' Jan 23 18:51:32.852962 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:51:32.861348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 23 18:51:32.870367 systemd-tmpfiles[262]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 18:51:32.874352 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 18:51:32.874889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:51:32.879356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:51:32.914913 systemd-resolved[283]: Positive Trust Anchors: Jan 23 18:51:32.915637 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:51:32.915668 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:51:32.919911 systemd-resolved[283]: Defaulting to hostname 'linux'. Jan 23 18:51:32.920931 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:51:32.921873 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:51:32.948265 kernel: SCSI subsystem initialized Jan 23 18:51:32.957264 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:51:32.968291 kernel: iscsi: registered transport (tcp) Jan 23 18:51:32.987399 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:51:32.987458 kernel: QLogic iSCSI HBA Driver Jan 23 18:51:33.002967 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:51:33.015230 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:51:33.017499 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:51:33.050836 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:51:33.054345 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 18:51:33.108281 kernel: raid6: avx512x4 gen() 46113 MB/s Jan 23 18:51:33.125276 kernel: raid6: avx512x2 gen() 46681 MB/s Jan 23 18:51:33.142273 kernel: raid6: avx512x1 gen() 46876 MB/s Jan 23 18:51:33.159374 kernel: raid6: avx2x4 gen() 36402 MB/s Jan 23 18:51:33.176278 kernel: raid6: avx2x2 gen() 35438 MB/s Jan 23 18:51:33.193514 kernel: raid6: avx2x1 gen() 27834 MB/s Jan 23 18:51:33.193589 kernel: raid6: using algorithm avx512x1 gen() 46876 MB/s Jan 23 18:51:33.211558 kernel: raid6: .... 
xor() 26534 MB/s, rmw enabled Jan 23 18:51:33.211622 kernel: raid6: using avx512x2 recovery algorithm Jan 23 18:51:33.230272 kernel: xor: automatically using best checksumming function avx Jan 23 18:51:33.350284 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 18:51:33.355676 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:51:33.357307 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:51:33.382222 systemd-udevd[474]: Using default interface naming scheme 'v255'. Jan 23 18:51:33.386425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:51:33.388973 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:51:33.408403 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Jan 23 18:51:33.427085 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:51:33.429087 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:51:33.499609 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:51:33.502356 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 18:51:33.561269 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 23 18:51:33.564410 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 23 18:51:33.570523 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 18:51:33.570567 kernel: GPT:17805311 != 104857599 Jan 23 18:51:33.570579 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 18:51:33.572253 kernel: GPT:17805311 != 104857599 Jan 23 18:51:33.572272 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 18:51:33.573430 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:51:33.593270 kernel: ACPI: bus type USB registered Jan 23 18:51:33.595469 kernel: usbcore: registered new interface driver usbfs Jan 23 18:51:33.595500 kernel: usbcore: registered new interface driver hub Jan 23 18:51:33.598262 kernel: usbcore: registered new device driver usb Jan 23 18:51:33.612295 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:51:33.622271 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 23 18:51:33.626225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:51:33.628719 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 23 18:51:33.628863 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 23 18:51:33.628954 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 23 18:51:33.626336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:33.629115 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:33.632480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:33.641295 kernel: hub 1-0:1.0: USB hub found Jan 23 18:51:33.644718 kernel: hub 1-0:1.0: 2 ports detected Jan 23 18:51:33.645288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:51:33.645377 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:33.648523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:33.651911 kernel: AES CTR mode by8 optimization enabled Jan 23 18:51:33.696289 kernel: libata version 3.00 loaded. 
Jan 23 18:51:33.703990 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 23 18:51:33.716808 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 18:51:33.722688 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 18:51:33.722837 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 18:51:33.722849 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 18:51:33.722937 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 18:51:33.723021 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 18:51:33.718512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:33.726268 kernel: scsi host0: ahci Jan 23 18:51:33.726619 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 18:51:33.727421 kernel: scsi host1: ahci Jan 23 18:51:33.730268 kernel: scsi host2: ahci Jan 23 18:51:33.731365 kernel: scsi host3: ahci Jan 23 18:51:33.733178 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 18:51:33.742011 kernel: scsi host4: ahci Jan 23 18:51:33.742422 kernel: scsi host5: ahci Jan 23 18:51:33.742507 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 61 lpm-pol 1 Jan 23 18:51:33.742518 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 61 lpm-pol 1 Jan 23 18:51:33.742527 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 61 lpm-pol 1 Jan 23 18:51:33.742541 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 61 lpm-pol 1 Jan 23 18:51:33.742551 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 61 lpm-pol 1 Jan 23 18:51:33.742559 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 61 lpm-pol 1 Jan 23 18:51:33.734198 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 23 18:51:33.748915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:51:33.750000 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:51:33.764501 disk-uuid[675]: Primary Header is updated. Jan 23 18:51:33.764501 disk-uuid[675]: Secondary Entries is updated. Jan 23 18:51:33.764501 disk-uuid[675]: Secondary Header is updated. 
Jan 23 18:51:33.770260 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:51:33.871291 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 23 18:51:34.047273 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 18:51:34.047328 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 18:51:34.047338 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 18:51:34.047347 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 18:51:34.047365 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 18:51:34.047374 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 18:51:34.060271 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 18:51:34.068274 kernel: usbcore: registered new interface driver usbhid Jan 23 18:51:34.068310 kernel: usbhid: USB HID core driver Jan 23 18:51:34.072485 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4 Jan 23 18:51:34.072515 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Jan 23 18:51:34.085977 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:51:34.086981 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:51:34.087415 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:51:34.088047 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:51:34.089372 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:51:34.119380 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:51:34.782371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:51:34.783112 disk-uuid[676]: The operation has completed successfully. Jan 23 18:51:34.824906 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:51:34.825480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:51:34.850153 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 18:51:34.864522 sh[701]: Success Jan 23 18:51:34.881545 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 18:51:34.881593 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:51:34.883305 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:51:34.891273 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 18:51:34.937806 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:51:34.939871 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 18:51:34.949475 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 18:51:34.959269 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (713) Jan 23 18:51:34.961382 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841 Jan 23 18:51:34.961428 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:51:34.975746 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:51:34.975799 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:51:34.977769 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 23 18:51:34.978489 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:51:34.979654 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:51:34.981072 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:51:34.983332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 18:51:35.020287 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (744) Jan 23 18:51:35.022281 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:51:35.024258 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:51:35.029319 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:51:35.029348 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:51:35.033275 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:51:35.034165 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:51:35.035839 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 18:51:35.068326 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:51:35.070798 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:51:35.098466 systemd-networkd[882]: lo: Link UP Jan 23 18:51:35.098474 systemd-networkd[882]: lo: Gained carrier Jan 23 18:51:35.099673 systemd-networkd[882]: Enumeration completed Jan 23 18:51:35.100047 systemd-networkd[882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:51:35.100050 systemd-networkd[882]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:51:35.100955 systemd-networkd[882]: eth0: Link UP Jan 23 18:51:35.101287 systemd-networkd[882]: eth0: Gained carrier Jan 23 18:51:35.101296 systemd-networkd[882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:51:35.101568 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:51:35.102378 systemd[1]: Reached target network.target - Network. Jan 23 18:51:35.112297 systemd-networkd[882]: eth0: DHCPv4 address 10.0.4.9/25, gateway 10.0.4.1 acquired from 10.0.4.1 Jan 23 18:51:35.173803 ignition[825]: Ignition 2.22.0 Jan 23 18:51:35.173813 ignition[825]: Stage: fetch-offline Jan 23 18:51:35.177375 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:51:35.173842 ignition[825]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:35.173849 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:51:35.173922 ignition[825]: parsed url from cmdline: "" Jan 23 18:51:35.173925 ignition[825]: no config URL provided Jan 23 18:51:35.173929 ignition[825]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:51:35.173935 ignition[825]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:51:35.180395 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 18:51:35.173939 ignition[825]: failed to fetch config: resource requires networking Jan 23 18:51:35.176326 ignition[825]: Ignition finished successfully Jan 23 18:51:35.217058 ignition[891]: Ignition 2.22.0 Jan 23 18:51:35.217069 ignition[891]: Stage: fetch Jan 23 18:51:35.217175 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:35.217182 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:51:35.217242 ignition[891]: parsed url from cmdline: "" Jan 23 18:51:35.217254 ignition[891]: no config URL provided Jan 23 18:51:35.217258 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:51:35.217264 ignition[891]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:51:35.217327 ignition[891]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 23 18:51:35.217333 ignition[891]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 23 18:51:35.217350 ignition[891]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 23 18:51:35.959735 ignition[891]: GET result: OK Jan 23 18:51:35.960429 ignition[891]: parsing config with SHA512: f4aab328d09d2157165a72bf7b9fad9a382dc2fdc581e05ea0007cd944e9abab75cc7ea546c629d4985a7ac11beffe51feb2fc19522fd22e1419994eeaeba17f Jan 23 18:51:35.962547 unknown[891]: fetched base config from "system" Jan 23 18:51:35.962555 unknown[891]: fetched base config from "system" Jan 23 18:51:35.962726 ignition[891]: fetch: fetch complete Jan 23 18:51:35.962559 unknown[891]: fetched user config from "openstack" Jan 23 18:51:35.962730 ignition[891]: fetch: fetch passed Jan 23 18:51:35.964697 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 18:51:35.962760 ignition[891]: Ignition finished successfully Jan 23 18:51:35.965993 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 18:51:36.003603 ignition[897]: Ignition 2.22.0 Jan 23 18:51:36.003615 ignition[897]: Stage: kargs Jan 23 18:51:36.003726 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:36.003734 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:51:36.004707 ignition[897]: kargs: kargs passed Jan 23 18:51:36.004743 ignition[897]: Ignition finished successfully Jan 23 18:51:36.006558 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:51:36.008132 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:51:36.033485 ignition[903]: Ignition 2.22.0 Jan 23 18:51:36.033495 ignition[903]: Stage: disks Jan 23 18:51:36.033613 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:36.033620 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:51:36.035456 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:51:36.034057 ignition[903]: disks: disks passed Jan 23 18:51:36.036367 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:51:36.034090 ignition[903]: Ignition finished successfully Jan 23 18:51:36.036904 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:51:36.037449 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:51:36.037969 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:51:36.038508 systemd[1]: Reached target basic.target - Basic System. 
Jan 23 18:51:36.040367 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:51:36.078935 systemd-fsck[911]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 18:51:36.081524 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:51:36.082795 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:51:36.199307 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 18:51:36.198917 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:51:36.199947 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:51:36.201644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:51:36.204332 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:51:36.205284 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 18:51:36.206779 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 23 18:51:36.207157 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:51:36.207182 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:51:36.218290 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:51:36.219846 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:51:36.234042 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919) Jan 23 18:51:36.234092 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:51:36.235441 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:51:36.242390 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:51:36.242440 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:51:36.243907 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:51:36.278273 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:36.327541 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 18:51:36.332324 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Jan 23 18:51:36.336109 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 18:51:36.340176 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 18:51:36.422869 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:51:36.424907 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:51:36.426257 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:51:36.442073 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:51:36.443721 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:51:36.461077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 18:51:36.472322 ignition[1035]: INFO : Ignition 2.22.0 Jan 23 18:51:36.472322 ignition[1035]: INFO : Stage: mount Jan 23 18:51:36.474317 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:36.474317 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:51:36.474317 ignition[1035]: INFO : mount: mount passed Jan 23 18:51:36.474317 ignition[1035]: INFO : Ignition finished successfully Jan 23 18:51:36.474488 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:51:36.907417 systemd-networkd[882]: eth0: Gained IPv6LL Jan 23 18:51:37.330287 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:39.338272 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:43.343275 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:43.345773 coreos-metadata[921]: Jan 23 18:51:43.345 WARN failed to locate config-drive, using the metadata service API instead Jan 23 18:51:43.356832 coreos-metadata[921]: Jan 23 18:51:43.356 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 18:51:43.922325 coreos-metadata[921]: Jan 23 18:51:43.922 INFO Fetch successful Jan 23 18:51:43.922890 coreos-metadata[921]: Jan 23 18:51:43.922 INFO wrote hostname ci-4459-2-3-2-5d24c62718 to /sysroot/etc/hostname Jan 23 18:51:43.925287 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 23 18:51:43.925394 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 23 18:51:43.926280 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:51:43.952506 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:51:43.976270 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1052) Jan 23 18:51:43.980841 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:51:43.980886 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:51:43.985559 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:51:43.985594 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:51:43.987401 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 18:51:44.011872 ignition[1070]: INFO : Ignition 2.22.0 Jan 23 18:51:44.011872 ignition[1070]: INFO : Stage: files Jan 23 18:51:44.012888 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:44.012888 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:51:44.012888 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:51:44.013894 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:51:44.013894 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:51:44.017044 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:51:44.017467 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:51:44.018004 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:51:44.017715 unknown[1070]: wrote ssh authorized keys file for user: core Jan 23 18:51:44.021827 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:51:44.021827 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:51:44.023386 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:51:44.023386 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:51:44.023386 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:51:44.024974 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:51:44.024974 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:51:44.024974 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 18:51:44.306624 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 23 18:51:45.236490 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:51:45.239944 ignition[1070]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:51:45.239944 ignition[1070]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:51:45.239944 ignition[1070]: INFO : files: files passed Jan 23 18:51:45.239944 ignition[1070]: INFO : Ignition finished successfully Jan 23 18:51:45.239500 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:51:45.242939 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 23 18:51:45.244355 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:51:45.261485 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:51:45.261568 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:51:45.268309 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:51:45.268309 initrd-setup-root-after-ignition[1100]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:51:45.270203 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:51:45.271514 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:51:45.272218 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:51:45.273342 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:51:45.303763 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:51:45.304457 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:51:45.305780 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:51:45.306646 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:51:45.307461 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:51:45.308574 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:51:45.326196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:51:45.328373 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:51:45.344979 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:51:45.346117 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:51:45.347166 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:51:45.348107 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:51:45.348209 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:51:45.348784 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:51:45.349211 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:51:45.349679 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:51:45.350409 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:51:45.351142 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:51:45.351893 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:51:45.352625 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:51:45.353365 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:51:45.354109 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:51:45.354836 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:51:45.355600 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:51:45.356374 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:51:45.356462 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 23 18:51:45.357489 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:51:45.358212 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:51:45.358864 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:51:45.358944 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:51:45.359575 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:51:45.359657 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:51:45.360648 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:51:45.360761 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:51:45.361435 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:51:45.361528 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:51:45.364408 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:51:45.364858 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:51:45.364986 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:51:45.366446 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:51:45.367367 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:51:45.368293 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:51:45.368796 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:51:45.368873 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:51:45.372526 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:51:45.375327 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:51:45.389196 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:51:45.393848 ignition[1124]: INFO : Ignition 2.22.0 Jan 23 18:51:45.393848 ignition[1124]: INFO : Stage: umount Jan 23 18:51:45.395392 ignition[1124]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:51:45.395392 ignition[1124]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 18:51:45.395392 ignition[1124]: INFO : umount: umount passed Jan 23 18:51:45.395392 ignition[1124]: INFO : Ignition finished successfully Jan 23 18:51:45.394475 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:51:45.394564 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:51:45.396893 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:51:45.396992 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:51:45.397759 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:51:45.397832 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:51:45.398586 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:51:45.398625 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:51:45.399207 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 18:51:45.399240 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 18:51:45.399927 systemd[1]: Stopped target network.target - Network. Jan 23 18:51:45.400594 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 23 18:51:45.400634 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:51:45.401267 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:51:45.401913 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:51:45.405294 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:51:45.405659 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:51:45.406290 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:51:45.406920 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:51:45.406951 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:51:45.407556 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:51:45.407578 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:51:45.408186 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:51:45.408225 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:51:45.408839 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:51:45.408869 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:51:45.409491 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:51:45.409524 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:51:45.410166 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:51:45.410694 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:51:45.412816 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:51:45.413051 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:51:45.415772 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 18:51:45.415983 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:51:45.416013 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:51:45.417179 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:51:45.419025 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:51:45.419111 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:51:45.420501 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 18:51:45.420745 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:51:45.421335 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:51:45.421368 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:51:45.423312 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:51:45.423662 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:51:45.423719 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:51:45.424361 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:51:45.424391 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:51:45.426614 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:51:45.426644 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 23 18:51:45.427426 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:51:45.428954 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 18:51:45.435848 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:51:45.435969 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:51:45.437076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:51:45.437121 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:51:45.439091 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:51:45.439121 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:51:45.440260 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:51:45.440641 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:51:45.441479 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:51:45.441834 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:51:45.442588 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:51:45.442947 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:51:45.447474 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:51:45.447867 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:51:45.447913 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:51:45.449123 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:51:45.449157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:51:45.450548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:51:45.450583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:45.451914 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:51:45.452813 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:51:45.454454 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:51:45.454519 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:51:45.455787 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:51:45.457073 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:51:45.471132 systemd[1]: Switching root. Jan 23 18:51:45.506947 systemd-journald[223]: Journal stopped Jan 23 18:51:46.462433 systemd-journald[223]: Received SIGTERM from PID 1 (systemd). 
Jan 23 18:51:46.462518 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:51:46.462539 kernel: SELinux: policy capability open_perms=1 Jan 23 18:51:46.462549 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:51:46.462559 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:51:46.462568 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:51:46.462578 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:51:46.462588 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:51:46.462597 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:51:46.462609 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:51:46.462619 systemd[1]: Successfully loaded SELinux policy in 63.098ms. Jan 23 18:51:46.462638 kernel: audit: type=1403 audit(1769194305.648:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 18:51:46.462653 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.851ms. Jan 23 18:51:46.462665 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:51:46.462680 systemd[1]: Detected virtualization kvm. Jan 23 18:51:46.462690 systemd[1]: Detected architecture x86-64. Jan 23 18:51:46.462700 systemd[1]: Detected first boot. Jan 23 18:51:46.462712 systemd[1]: Hostname set to <ci-4459-2-3-2-5d24c62718>. Jan 23 18:51:46.462722 systemd[1]: Initializing machine ID from VM UUID. Jan 23 18:51:46.462732 zram_generator::config[1167]: No configuration found. Jan 23 18:51:46.462744 kernel: Guest personality initialized and is inactive Jan 23 18:51:46.462753 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:51:46.462763 kernel: Initialized host personality Jan 23 18:51:46.462772 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:51:46.462785 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:51:46.462796 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 18:51:46.462808 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:51:46.462818 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:51:46.462828 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:51:46.462840 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:51:46.462850 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:51:46.462860 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:51:46.462875 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:51:46.462885 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:51:46.462898 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:51:46.462908 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:51:46.462922 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:51:46.462932 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 23 18:51:46.462943 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:51:46.462956 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:51:46.462969 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:51:46.462982 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:51:46.462995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:51:46.463005 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:51:46.463015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:51:46.463026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:51:46.463037 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:51:46.463046 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:51:46.463056 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:51:46.463068 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:51:46.463078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:51:46.463090 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:51:46.463100 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:51:46.467293 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:51:46.467320 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:51:46.467333 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:51:46.467345 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:51:46.467357 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:51:46.467367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:51:46.467382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:51:46.467393 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:51:46.467403 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:51:46.467414 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:51:46.467425 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:51:46.467436 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:46.467446 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:51:46.467456 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:51:46.467466 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:51:46.467480 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:51:46.467491 systemd[1]: Reached target machines.target - Containers. Jan 23 18:51:46.467501 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 23 18:51:46.467512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:51:46.467522 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:51:46.467536 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:51:46.467546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:51:46.467556 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:51:46.467569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:51:46.467578 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:51:46.467589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:51:46.467600 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:51:46.467610 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:51:46.467623 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:51:46.467633 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:51:46.467644 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:51:46.467654 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:51:46.467665 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:51:46.467675 kernel: fuse: init (API version 7.41) Jan 23 18:51:46.467697 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:51:46.467707 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:51:46.467720 kernel: loop: module loaded Jan 23 18:51:46.467731 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:51:46.467743 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:51:46.467754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:51:46.467766 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 18:51:46.467777 systemd[1]: Stopped verity-setup.service. Jan 23 18:51:46.467789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:46.467800 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:51:46.467811 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:51:46.467821 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:51:46.467832 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:51:46.467842 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:51:46.467852 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:51:46.467863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:51:46.467875 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 23 18:51:46.467885 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:51:46.467896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:51:46.467906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:51:46.467916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:51:46.467926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:51:46.467968 systemd-journald[1241]: Collecting audit messages is disabled. Jan 23 18:51:46.467995 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:51:46.468008 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:51:46.468018 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:51:46.468029 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:51:46.468039 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:51:46.468050 systemd-journald[1241]: Journal started Jan 23 18:51:46.468073 systemd-journald[1241]: Runtime Journal (/run/log/journal/4dcc76ef4659461e9e690c8f14878c2b) is 8M, max 78M, 70M free. Jan 23 18:51:46.176920 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:51:46.470405 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:51:46.196373 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 18:51:46.196777 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:51:46.471939 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:51:46.487932 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:51:46.493237 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:51:46.493865 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:51:46.493907 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:51:46.497228 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:51:46.505406 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:51:46.506078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:51:46.508403 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:51:46.510081 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:51:46.510617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:51:46.512390 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:51:46.512872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:51:46.515579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:51:46.524854 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:51:46.528430 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jan 23 18:51:46.530149 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:51:46.531678 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:51:46.532384 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:51:46.533740 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:51:46.536615 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:51:46.541163 systemd-journald[1241]: Time spent on flushing to /var/log/journal/4dcc76ef4659461e9e690c8f14878c2b is 73.864ms for 1687 entries. Jan 23 18:51:46.541163 systemd-journald[1241]: System Journal (/var/log/journal/4dcc76ef4659461e9e690c8f14878c2b) is 8M, max 584.8M, 576.8M free. Jan 23 18:51:46.630316 systemd-journald[1241]: Received client request to flush runtime journal. Jan 23 18:51:46.630441 kernel: loop0: detected capacity change from 0 to 128560 Jan 23 18:51:46.630468 kernel: ACPI: bus type drm_connector registered Jan 23 18:51:46.543323 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:51:46.565084 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:51:46.565937 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:51:46.569677 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:51:46.613299 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:51:46.624429 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:51:46.624611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:51:46.632818 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:51:46.645179 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:51:46.648265 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:51:46.657821 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:51:46.664269 kernel: loop1: detected capacity change from 0 to 219144 Jan 23 18:51:46.670700 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:51:46.674380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:51:46.700266 kernel: loop2: detected capacity change from 0 to 1640 Jan 23 18:51:46.706365 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Jan 23 18:51:46.706382 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Jan 23 18:51:46.711298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:51:46.737274 kernel: loop3: detected capacity change from 0 to 110984 Jan 23 18:51:46.779839 kernel: loop4: detected capacity change from 0 to 128560 Jan 23 18:51:46.799097 kernel: loop5: detected capacity change from 0 to 219144 Jan 23 18:51:46.825276 kernel: loop6: detected capacity change from 0 to 1640 Jan 23 18:51:46.831265 kernel: loop7: detected capacity change from 0 to 110984 Jan 23 18:51:46.844936 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-stackit'. Jan 23 18:51:46.845409 (sd-merge)[1320]: Merged extensions into '/usr'. Jan 23 18:51:46.855655 systemd[1]: Reload requested from client PID 1288 ('systemd-sysext') (unit systemd-sysext.service)... 
Jan 23 18:51:46.855672 systemd[1]: Reloading... Jan 23 18:51:46.937469 zram_generator::config[1342]: No configuration found. Jan 23 18:51:47.103770 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:51:47.104020 systemd[1]: Reloading finished in 247 ms. Jan 23 18:51:47.121741 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:51:47.130363 systemd[1]: Starting ensure-sysext.service... Jan 23 18:51:47.132831 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:51:47.137177 ldconfig[1283]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:51:47.139855 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:51:47.155313 systemd[1]: Reload requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:51:47.155327 systemd[1]: Reloading... Jan 23 18:51:47.176546 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:51:47.177390 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:51:47.177654 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:51:47.177856 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 18:51:47.180552 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 18:51:47.180765 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Jan 23 18:51:47.180807 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Jan 23 18:51:47.185888 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:51:47.189274 systemd-tmpfiles[1389]: Skipping /boot Jan 23 18:51:47.202156 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:51:47.202684 systemd-tmpfiles[1389]: Skipping /boot Jan 23 18:51:47.226268 zram_generator::config[1415]: No configuration found. Jan 23 18:51:47.383601 systemd[1]: Reloading finished in 228 ms. Jan 23 18:51:47.409808 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:51:47.414133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:51:47.421382 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:51:47.429350 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:51:47.432338 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:51:47.435389 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:51:47.437740 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:51:47.443101 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:51:47.448696 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:47.449041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 23 18:51:47.450801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:51:47.456700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:51:47.465293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:51:47.465815 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:51:47.465921 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:51:47.466003 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:47.469407 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:51:47.475717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:47.475882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:51:47.476038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:51:47.476141 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:51:47.477027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:47.485432 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:51:47.491206 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:47.492339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:51:47.496224 systemd-udevd[1466]: Using default interface naming scheme 'v255'. Jan 23 18:51:47.498941 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:51:47.502020 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 23 18:51:47.503621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:51:47.503663 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:51:47.503743 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:51:47.504180 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:51:47.504558 systemd[1]: Finished ensure-sysext.service. Jan 23 18:51:47.505152 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:51:47.505529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 23 18:51:47.520394 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:51:47.522570 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:51:47.529293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:51:47.529446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:51:47.529977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:51:47.532893 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:51:47.533030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:51:47.533621 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:51:47.542545 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:51:47.543306 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:51:47.550469 augenrules[1516]: No rules Jan 23 18:51:47.550800 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:51:47.552467 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:51:47.553449 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:51:47.554150 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:51:47.555831 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:51:47.556001 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:51:47.560831 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:51:47.571199 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 18:51:47.571274 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 18:51:47.579268 kernel: PTP clock support registered Jan 23 18:51:47.601034 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 23 18:51:47.602283 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 23 18:51:47.603607 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:51:47.639069 systemd-resolved[1465]: Positive Trust Anchors: Jan 23 18:51:47.639337 systemd-resolved[1465]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:51:47.639400 systemd-resolved[1465]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:51:47.644221 systemd-resolved[1465]: Using system hostname 'ci-4459-2-3-2-5d24c62718'. Jan 23 18:51:47.645303 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:51:47.646278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 18:51:47.647047 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:51:47.647815 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:51:47.648469 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:51:47.649096 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:51:47.649843 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:51:47.650739 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:51:47.651310 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:51:47.652305 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:51:47.652331 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:51:47.652669 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:51:47.654742 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:51:47.657200 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:51:47.661759 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:51:47.663433 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:51:47.665290 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:51:47.671323 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:51:47.671969 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:51:47.673239 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:51:47.674461 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:51:47.675139 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:51:47.675744 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:51:47.675828 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:51:47.677877 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 18:51:47.680576 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 18:51:47.686364 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:51:47.692390 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:51:47.694102 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:51:47.697295 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:47.697477 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:51:47.697850 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:51:47.700554 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:51:47.702555 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:51:47.704535 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 23 18:51:47.711399 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:51:47.717794 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:51:47.718941 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:51:47.720368 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:51:47.722410 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:51:47.723078 jq[1553]: false Jan 23 18:51:47.728418 oslogin_cache_refresh[1557]: Refreshing passwd entry cache Jan 23 18:51:47.729474 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing passwd entry cache Jan 23 18:51:47.727437 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:51:47.743860 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting users, quitting Jan 23 18:51:47.743860 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:51:47.743860 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Refreshing group entry cache Jan 23 18:51:47.743860 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Failure getting groups, quitting Jan 23 18:51:47.743860 google_oslogin_nss_cache[1557]: oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:51:47.735282 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:51:47.731077 oslogin_cache_refresh[1557]: Failure getting users, quitting Jan 23 18:51:47.735950 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:51:47.731090 oslogin_cache_refresh[1557]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:51:47.736092 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:51:47.731119 oslogin_cache_refresh[1557]: Refreshing group entry cache Jan 23 18:51:47.736302 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:51:47.731971 oslogin_cache_refresh[1557]: Failure getting groups, quitting Jan 23 18:51:47.736431 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:51:47.731978 oslogin_cache_refresh[1557]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:51:47.737020 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:51:47.737141 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:51:47.743828 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:51:47.780581 extend-filesystems[1554]: Found /dev/vda6 Jan 23 18:51:47.785267 extend-filesystems[1554]: Found /dev/vda9 Jan 23 18:51:47.793326 extend-filesystems[1554]: Checking size of /dev/vda9 Jan 23 18:51:47.795529 jq[1563]: true Jan 23 18:51:47.807213 dbus-daemon[1551]: [system] SELinux support is enabled Jan 23 18:51:47.807362 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 23 18:51:47.810569 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:51:47.810591 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:51:47.811717 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:51:47.811735 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:51:47.812527 update_engine[1562]: I20260123 18:51:47.812428 1562 main.cc:92] Flatcar Update Engine starting Jan 23 18:51:47.818122 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:51:47.818921 update_engine[1562]: I20260123 18:51:47.818403 1562 update_check_scheduler.cc:74] Next update check in 6m41s Jan 23 18:51:47.826430 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:51:47.829396 jq[1587]: true Jan 23 18:51:47.829318 systemd-networkd[1508]: lo: Link UP Jan 23 18:51:47.829322 systemd-networkd[1508]: lo: Gained carrier Jan 23 18:51:47.829890 systemd-networkd[1508]: Enumeration completed Jan 23 18:51:47.829965 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:51:47.830880 systemd[1]: Reached target network.target - Network. Jan 23 18:51:47.832262 extend-filesystems[1554]: Resized partition /dev/vda9 Jan 23 18:51:47.834503 extend-filesystems[1596]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:51:47.835034 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:51:47.837345 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:51:47.841302 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:51:47.858277 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 12499963 blocks Jan 23 18:51:47.862768 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:51:47.864298 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:51:47.878674 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:51:47.892470 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:51:47.892477 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:51:47.893748 systemd-networkd[1508]: eth0: Link UP Jan 23 18:51:47.895067 systemd-networkd[1508]: eth0: Gained carrier Jan 23 18:51:47.895091 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 18:51:47.904876 (ntainerd)[1610]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 18:51:47.907344 systemd-networkd[1508]: eth0: DHCPv4 address 10.0.4.9/25, gateway 10.0.4.1 acquired from 10.0.4.1 Jan 23 18:51:47.935282 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:51:48.015976 bash[1621]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:51:48.019059 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:51:48.025187 chronyd[1549]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 18:51:48.025546 systemd[1]: Starting sshkeys.service... Jan 23 18:51:48.030382 chronyd[1549]: Could not open PHC of iface /dev/ptp_kvm : No such device Jan 23 18:51:48.031138 chronyd[1549]: Fatal error : Could not open PHC Jan 23 18:51:48.030389 chronyd[1549]: Fatal error : Could not open PHC Jan 23 18:51:48.034375 systemd[1]: chronyd.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:51:48.034487 systemd[1]: chronyd.service: Failed with result 'exit-code'. Jan 23 18:51:48.034853 systemd[1]: Failed to start chronyd.service - NTP client/server. Jan 23 18:51:48.038650 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:51:48.045331 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5 Jan 23 18:51:48.060856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:51:48.067358 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:51:48.074811 kernel: ACPI: button: Power Button [PWRF] Jan 23 18:51:48.077714 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 18:51:48.081922 systemd-logind[1561]: New seat seat0. Jan 23 18:51:48.086439 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 18:51:48.090088 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:51:48.122843 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:48.129648 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:51:48.149356 containerd[1610]: time="2026-01-23T18:51:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:51:48.151292 containerd[1610]: time="2026-01-23T18:51:48.150954788Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 18:51:48.153982 systemd[1]: chronyd.service: Scheduled restart job, restart counter is at 1. Jan 23 18:51:48.156381 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 23 18:51:48.170804 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. 
Jan 23 18:51:48.171198 containerd[1610]: time="2026-01-23T18:51:48.171168655Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.965µs" Jan 23 18:51:48.171265 containerd[1610]: time="2026-01-23T18:51:48.171239812Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:51:48.171314 containerd[1610]: time="2026-01-23T18:51:48.171300380Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:51:48.171460 containerd[1610]: time="2026-01-23T18:51:48.171449738Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:51:48.171519 containerd[1610]: time="2026-01-23T18:51:48.171507675Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:51:48.171567 containerd[1610]: time="2026-01-23T18:51:48.171557674Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:51:48.171644 containerd[1610]: time="2026-01-23T18:51:48.171629490Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:51:48.171689 containerd[1610]: time="2026-01-23T18:51:48.171680531Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:51:48.171908 containerd[1610]: time="2026-01-23T18:51:48.171893576Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:51:48.171948 containerd[1610]: time="2026-01-23T18:51:48.171938180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:51:48.171994 containerd[1610]: time="2026-01-23T18:51:48.171982619Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:51:48.172030 containerd[1610]: time="2026-01-23T18:51:48.172019895Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:51:48.172115 containerd[1610]: time="2026-01-23T18:51:48.172106229Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:51:48.172291 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. 
Jan 23 18:51:48.172374 containerd[1610]: time="2026-01-23T18:51:48.172362298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:51:48.172430 containerd[1610]: time="2026-01-23T18:51:48.172419780Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:51:48.172463 containerd[1610]: time="2026-01-23T18:51:48.172454498Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:51:48.172510 containerd[1610]: time="2026-01-23T18:51:48.172500689Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:51:48.176153 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 18:51:48.178654 containerd[1610]: time="2026-01-23T18:51:48.178576492Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:51:48.179193 containerd[1610]: time="2026-01-23T18:51:48.179086607Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:51:48.203266 kernel: EXT4-fs (vda9): resized filesystem to 12499963 Jan 23 18:51:48.217038 containerd[1610]: time="2026-01-23T18:51:48.217012303Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:51:48.217230 containerd[1610]: time="2026-01-23T18:51:48.217216634Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217308102Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217324445Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217335982Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217346142Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217360218Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217369560Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217379175Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217387898Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217399422Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217413132Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:51:48.218073 containerd[1610]: 
time="2026-01-23T18:51:48.217499212Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217518548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217531096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:51:48.218073 containerd[1610]: time="2026-01-23T18:51:48.217540565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217548895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217557707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217567655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217575306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217584810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217593406Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217601598Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217641681Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217652951Z" level=info msg="Start snapshots syncer" Jan 23 18:51:48.218308 containerd[1610]: time="2026-01-23T18:51:48.217672576Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:51:48.218454 containerd[1610]: time="2026-01-23T18:51:48.217890887Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:51:48.218454 containerd[1610]: time="2026-01-23T18:51:48.217927420Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:51:48.219397 sshd_keygen[1584]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:51:48.220533 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 18:51:48.220533 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 23 18:51:48.220533 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 12499963 (4k) blocks long. 
Jan 23 18:51:48.223173 extend-filesystems[1554]: Resized filesystem in /dev/vda9 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222210878Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222335309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222365397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222379922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222399702Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222413210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222421593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222430059Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222452971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222844978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222863720Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222900438Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222926591Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:51:48.223934 containerd[1610]: time="2026-01-23T18:51:48.222933766Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:51:48.220623 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 23 18:51:48.224238 containerd[1610]: time="2026-01-23T18:51:48.222940899Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:51:48.224238 containerd[1610]: time="2026-01-23T18:51:48.222946743Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:51:48.224238 containerd[1610]: time="2026-01-23T18:51:48.222954202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:51:48.224238 containerd[1610]: time="2026-01-23T18:51:48.222967407Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:51:48.224238 containerd[1610]: time="2026-01-23T18:51:48.222980516Z" level=info msg="runtime interface created" Jan 23 18:51:48.224238 containerd[1610]: time="2026-01-23T18:51:48.222994760Z" level=info msg="created NRI interface" Jan 23 18:51:48.224238 containerd[1610]: time="2026-01-23T18:51:48.223001585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:51:48.224238 containerd[1610]: time="2026-01-23T18:51:48.223013690Z" level=info msg="Connect containerd service" Jan 23 18:51:48.224238 containerd[1610]: time="2026-01-23T18:51:48.223032376Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:51:48.221107 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:51:48.225012 containerd[1610]: time="2026-01-23T18:51:48.224990742Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:51:48.252620 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:51:48.258984 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:51:48.275880 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:51:48.276648 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:51:48.281567 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:51:48.294262 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 18:51:48.294474 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 18:51:48.294578 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 18:51:48.302568 chronyd[1648]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 18:51:48.307285 chronyd[1648]: Could not open PHC of iface /dev/ptp_kvm : No such device Jan 23 18:51:48.307560 chronyd[1648]: Fatal error : Could not open PHC Jan 23 18:51:48.307291 chronyd[1648]: Fatal error : Could not open PHC Jan 23 18:51:48.309372 systemd[1]: chronyd.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:51:48.309489 systemd[1]: chronyd.service: Failed with result 'exit-code'. Jan 23 18:51:48.309755 systemd[1]: Failed to start chronyd.service - NTP client/server. Jan 23 18:51:48.314143 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:51:48.318462 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:51:48.323382 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 23 18:51:48.324011 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:51:48.343529 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 23 18:51:48.395718 kernel: Console: switching to colour dummy device 80x25 Jan 23 18:51:48.397278 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 23 18:51:48.404617 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 18:51:48.404657 kernel: [drm] features: -context_init Jan 23 18:51:48.407740 containerd[1610]: time="2026-01-23T18:51:48.407703241Z" level=info msg="Start subscribing containerd event" Jan 23 18:51:48.407881 containerd[1610]: time="2026-01-23T18:51:48.407853778Z" level=info msg="Start recovering state" Jan 23 18:51:48.408490 containerd[1610]: time="2026-01-23T18:51:48.407854038Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:51:48.408537 containerd[1610]: time="2026-01-23T18:51:48.408524353Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:51:48.408574 containerd[1610]: time="2026-01-23T18:51:48.408561754Z" level=info msg="Start event monitor" Jan 23 18:51:48.408608 containerd[1610]: time="2026-01-23T18:51:48.408602121Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:51:48.408647 containerd[1610]: time="2026-01-23T18:51:48.408641696Z" level=info msg="Start streaming server" Jan 23 18:51:48.408680 containerd[1610]: time="2026-01-23T18:51:48.408675018Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:51:48.408725 containerd[1610]: time="2026-01-23T18:51:48.408718211Z" level=info msg="runtime interface starting up..." Jan 23 18:51:48.408752 containerd[1610]: time="2026-01-23T18:51:48.408746813Z" level=info msg="starting plugins..." Jan 23 18:51:48.408793 containerd[1610]: time="2026-01-23T18:51:48.408778918Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:51:48.408927 containerd[1610]: time="2026-01-23T18:51:48.408918671Z" level=info msg="containerd successfully booted in 0.259897s" Jan 23 18:51:48.408969 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:51:48.409821 systemd[1]: chronyd.service: Scheduled restart job, restart counter is at 2. Jan 23 18:51:48.413388 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 23 18:51:48.432273 kernel: [drm] number of scanouts: 1 Jan 23 18:51:48.432311 kernel: [drm] number of cap sets: 0 Jan 23 18:51:48.434323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:48.437699 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 23 18:51:48.438277 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 18:51:48.438367 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 23 18:51:48.443399 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:51:48.447539 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 23 18:51:48.447578 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 18:51:48.453228 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 18:51:48.465035 systemd-logind[1561]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 18:51:48.479133 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 18:51:48.534430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 23 18:51:48.534606 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:48.536962 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:51:48.541950 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:48.562367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:51:48.562544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:48.567521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:51:48.600618 chronyd[1701]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 18:51:48.601186 chronyd[1701]: Loaded seccomp filter (level 2) Jan 23 18:51:48.601283 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 18:51:48.638138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:51:49.620275 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:49.622267 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:49.899760 systemd-networkd[1508]: eth0: Gained IPv6LL Jan 23 18:51:49.905092 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:51:49.906589 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:51:49.908648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:51:49.912440 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:51:49.933952 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:51:50.147077 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:51:50.149613 systemd[1]: Started sshd@0-10.0.4.9:22-20.161.92.111:33406.service - OpenSSH per-connection server daemon (20.161.92.111:33406). Jan 23 18:51:50.762635 sshd[1730]: Accepted publickey for core from 20.161.92.111 port 33406 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:51:50.764125 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:50.777384 systemd-logind[1561]: New session 1 of user core. Jan 23 18:51:50.778717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:51:50.781525 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:51:50.785589 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:51:50.790904 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:51:50.801149 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:51:50.805497 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:51:50.818479 (systemd)[1741]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 18:51:50.820463 systemd-logind[1561]: New session c1 of user core. Jan 23 18:51:50.928106 systemd[1741]: Queued start job for default target default.target. Jan 23 18:51:50.933528 systemd[1741]: Created slice app.slice - User Application Slice. Jan 23 18:51:50.933557 systemd[1741]: Reached target paths.target - Paths. Jan 23 18:51:50.933678 systemd[1741]: Reached target timers.target - Timers. 
Jan 23 18:51:50.936336 systemd[1741]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:51:50.945395 systemd[1741]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:51:50.946079 systemd[1741]: Reached target sockets.target - Sockets. Jan 23 18:51:50.946283 systemd[1741]: Reached target basic.target - Basic System. Jan 23 18:51:50.946342 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:51:50.948438 systemd[1741]: Reached target default.target - Main User Target. Jan 23 18:51:50.948475 systemd[1741]: Startup finished in 122ms. Jan 23 18:51:50.950439 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:51:51.287774 kubelet[1737]: E0123 18:51:51.287719 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:51:51.290085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:51:51.290221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:51:51.290611 systemd[1]: kubelet.service: Consumed 847ms CPU time, 256.1M memory peak. Jan 23 18:51:51.387465 systemd[1]: Started sshd@1-10.0.4.9:22-20.161.92.111:54460.service - OpenSSH per-connection server daemon (20.161.92.111:54460). Jan 23 18:51:51.632278 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:51.632388 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:51.989873 sshd[1758]: Accepted publickey for core from 20.161.92.111 port 54460 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:51:51.991296 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:52.009685 systemd-logind[1561]: New session 2 of user core. Jan 23 18:51:52.023486 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 18:51:52.425850 sshd[1763]: Connection closed by 20.161.92.111 port 54460 Jan 23 18:51:52.425207 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Jan 23 18:51:52.430105 systemd[1]: sshd@1-10.0.4.9:22-20.161.92.111:54460.service: Deactivated successfully. Jan 23 18:51:52.431966 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 18:51:52.434127 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit. Jan 23 18:51:52.434971 systemd-logind[1561]: Removed session 2. Jan 23 18:51:52.530964 systemd[1]: Started sshd@2-10.0.4.9:22-20.161.92.111:54464.service - OpenSSH per-connection server daemon (20.161.92.111:54464). Jan 23 18:51:53.137405 sshd[1769]: Accepted publickey for core from 20.161.92.111 port 54464 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:51:53.138413 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:53.142659 systemd-logind[1561]: New session 3 of user core. Jan 23 18:51:53.151517 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:51:53.562678 sshd[1772]: Connection closed by 20.161.92.111 port 54464 Jan 23 18:51:53.563439 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Jan 23 18:51:53.566348 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit. 
Jan 23 18:51:53.567083 systemd[1]: sshd@2-10.0.4.9:22-20.161.92.111:54464.service: Deactivated successfully. Jan 23 18:51:53.569375 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:51:53.571827 systemd-logind[1561]: Removed session 3. Jan 23 18:51:55.647275 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:55.652273 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 18:51:55.652973 coreos-metadata[1550]: Jan 23 18:51:55.652 WARN failed to locate config-drive, using the metadata service API instead Jan 23 18:51:55.663303 coreos-metadata[1640]: Jan 23 18:51:55.663 WARN failed to locate config-drive, using the metadata service API instead Jan 23 18:51:55.667211 coreos-metadata[1550]: Jan 23 18:51:55.667 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 23 18:51:55.672400 coreos-metadata[1640]: Jan 23 18:51:55.672 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 23 18:51:57.941355 coreos-metadata[1550]: Jan 23 18:51:57.941 INFO Fetch successful Jan 23 18:51:57.941355 coreos-metadata[1550]: Jan 23 18:51:57.941 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 18:51:58.500645 coreos-metadata[1550]: Jan 23 18:51:58.500 INFO Fetch successful Jan 23 18:51:58.500645 coreos-metadata[1550]: Jan 23 18:51:58.500 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 23 18:51:59.132148 coreos-metadata[1550]: Jan 23 18:51:59.132 INFO Fetch successful Jan 23 18:51:59.132148 coreos-metadata[1550]: Jan 23 18:51:59.132 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 23 18:51:59.756813 coreos-metadata[1550]: Jan 23 18:51:59.756 INFO Fetch successful Jan 23 18:51:59.756813 coreos-metadata[1550]: Jan 23 18:51:59.756 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 23 18:52:00.379595 coreos-metadata[1550]: Jan 23 18:52:00.379 INFO Fetch successful Jan 23 18:52:00.379595 coreos-metadata[1550]: Jan 23 18:52:00.379 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 23 18:52:00.939142 coreos-metadata[1550]: Jan 23 18:52:00.939 INFO Fetch successful Jan 23 18:52:00.967939 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 18:52:00.968759 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:52:01.467611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:52:01.469699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:52:01.584188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:52:01.587468 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:52:01.621898 kubelet[1798]: E0123 18:52:01.621863 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:52:01.624922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:52:01.625116 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 18:52:01.625625 systemd[1]: kubelet.service: Consumed 128ms CPU time, 109.7M memory peak. Jan 23 18:52:03.673345 systemd[1]: Started sshd@3-10.0.4.9:22-20.161.92.111:53938.service - OpenSSH per-connection server daemon (20.161.92.111:53938). Jan 23 18:52:04.187236 coreos-metadata[1640]: Jan 23 18:52:04.187 INFO Fetch successful Jan 23 18:52:04.187236 coreos-metadata[1640]: Jan 23 18:52:04.187 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 18:52:04.275516 sshd[1806]: Accepted publickey for core from 20.161.92.111 port 53938 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:52:04.276629 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:04.282310 systemd-logind[1561]: New session 4 of user core. Jan 23 18:52:04.287608 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:52:04.699136 sshd[1809]: Connection closed by 20.161.92.111 port 53938 Jan 23 18:52:04.699647 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:04.703169 systemd[1]: sshd@3-10.0.4.9:22-20.161.92.111:53938.service: Deactivated successfully. Jan 23 18:52:04.705040 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:52:04.706214 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:52:04.707242 systemd-logind[1561]: Removed session 4. Jan 23 18:52:04.805441 systemd[1]: Started sshd@4-10.0.4.9:22-20.161.92.111:53944.service - OpenSSH per-connection server daemon (20.161.92.111:53944). Jan 23 18:52:05.322816 coreos-metadata[1640]: Jan 23 18:52:05.322 INFO Fetch successful Jan 23 18:52:05.325397 unknown[1640]: wrote ssh authorized keys file for user: core Jan 23 18:52:05.345440 update-ssh-keys[1819]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:52:05.346567 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 18:52:05.348107 systemd[1]: Finished sshkeys.service. Jan 23 18:52:05.350204 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:52:05.350611 systemd[1]: Startup finished in 3.424s (kernel) + 13.022s (initrd) + 19.764s (userspace) = 36.212s. Jan 23 18:52:05.406184 sshd[1815]: Accepted publickey for core from 20.161.92.111 port 53944 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:52:05.407349 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:05.411099 systemd-logind[1561]: New session 5 of user core. Jan 23 18:52:05.424386 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:52:05.830645 sshd[1822]: Connection closed by 20.161.92.111 port 53944 Jan 23 18:52:05.831127 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:05.835086 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:52:05.835343 systemd[1]: sshd@4-10.0.4.9:22-20.161.92.111:53944.service: Deactivated successfully. Jan 23 18:52:05.836895 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:52:05.838182 systemd-logind[1561]: Removed session 5. Jan 23 18:52:11.719779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:52:11.722440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:52:11.836203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 18:52:11.849626 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:52:11.881879 kubelet[1835]: E0123 18:52:11.881838 1835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:52:11.884410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:52:11.884519 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:52:11.884954 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.1M memory peak. Jan 23 18:52:12.388552 chronyd[1701]: Selected source PHC0 Jan 23 18:52:12.388574 chronyd[1701]: System clock wrong by 2.065378 seconds Jan 23 18:52:14.453975 chronyd[1701]: System clock was stepped by 2.065378 seconds Jan 23 18:52:14.454460 systemd-resolved[1465]: Clock change detected. Flushing caches. Jan 23 18:52:18.005852 systemd[1]: Started sshd@5-10.0.4.9:22-20.161.92.111:55996.service - OpenSSH per-connection server daemon (20.161.92.111:55996). Jan 23 18:52:18.610906 sshd[1843]: Accepted publickey for core from 20.161.92.111 port 55996 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:52:18.611971 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:18.615663 systemd-logind[1561]: New session 6 of user core. Jan 23 18:52:18.626821 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:52:19.034236 sshd[1846]: Connection closed by 20.161.92.111 port 55996 Jan 23 18:52:19.034953 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:19.038415 systemd[1]: sshd@5-10.0.4.9:22-20.161.92.111:55996.service: Deactivated successfully. Jan 23 18:52:19.039771 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:52:19.040331 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:52:19.041233 systemd-logind[1561]: Removed session 6. Jan 23 18:52:19.151141 systemd[1]: Started sshd@6-10.0.4.9:22-20.161.92.111:55998.service - OpenSSH per-connection server daemon (20.161.92.111:55998). Jan 23 18:52:19.758752 sshd[1852]: Accepted publickey for core from 20.161.92.111 port 55998 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:52:19.759965 sshd-session[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:19.764151 systemd-logind[1561]: New session 7 of user core. Jan 23 18:52:19.775022 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:52:20.180852 sshd[1855]: Connection closed by 20.161.92.111 port 55998 Jan 23 18:52:20.181533 sshd-session[1852]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:20.186065 systemd[1]: sshd@6-10.0.4.9:22-20.161.92.111:55998.service: Deactivated successfully. Jan 23 18:52:20.188156 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:52:20.188811 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:52:20.189776 systemd-logind[1561]: Removed session 7. Jan 23 18:52:20.285184 systemd[1]: Started sshd@7-10.0.4.9:22-20.161.92.111:56008.service - OpenSSH per-connection server daemon (20.161.92.111:56008). 
Jan 23 18:52:20.891211 sshd[1861]: Accepted publickey for core from 20.161.92.111 port 56008 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:52:20.892338 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:20.896705 systemd-logind[1561]: New session 8 of user core. Jan 23 18:52:20.899788 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 18:52:21.315370 sshd[1864]: Connection closed by 20.161.92.111 port 56008 Jan 23 18:52:21.315860 sshd-session[1861]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:21.319612 systemd[1]: sshd@7-10.0.4.9:22-20.161.92.111:56008.service: Deactivated successfully. Jan 23 18:52:21.320883 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:52:21.321409 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:52:21.322193 systemd-logind[1561]: Removed session 8. Jan 23 18:52:21.423242 systemd[1]: Started sshd@8-10.0.4.9:22-20.161.92.111:56024.service - OpenSSH per-connection server daemon (20.161.92.111:56024). Jan 23 18:52:22.047335 sshd[1870]: Accepted publickey for core from 20.161.92.111 port 56024 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:52:22.048356 sshd-session[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:22.052274 systemd-logind[1561]: New session 9 of user core. Jan 23 18:52:22.059815 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 18:52:22.395520 sudo[1874]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:52:22.395740 sudo[1874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:52:22.405552 sudo[1874]: pam_unix(sudo:session): session closed for user root Jan 23 18:52:22.503757 sshd[1873]: Connection closed by 20.161.92.111 port 56024 Jan 23 18:52:22.504291 sshd-session[1870]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:22.507269 systemd[1]: sshd@8-10.0.4.9:22-20.161.92.111:56024.service: Deactivated successfully. Jan 23 18:52:22.508427 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:52:22.508945 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:52:22.509726 systemd-logind[1561]: Removed session 9. Jan 23 18:52:22.608465 systemd[1]: Started sshd@9-10.0.4.9:22-20.161.92.111:59402.service - OpenSSH per-connection server daemon (20.161.92.111:59402). Jan 23 18:52:23.212603 sshd[1880]: Accepted publickey for core from 20.161.92.111 port 59402 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:52:23.213042 sshd-session[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:23.216686 systemd-logind[1561]: New session 10 of user core. Jan 23 18:52:23.222890 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 18:52:23.541456 sudo[1885]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:52:23.541926 sudo[1885]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:52:23.545974 sudo[1885]: pam_unix(sudo:session): session closed for user root Jan 23 18:52:23.550089 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:52:23.550288 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:52:23.558931 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:52:23.594443 augenrules[1907]: No rules Jan 23 18:52:23.595270 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:52:23.595531 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:52:23.596609 sudo[1884]: pam_unix(sudo:session): session closed for user root Jan 23 18:52:23.691955 sshd[1883]: Connection closed by 20.161.92.111 port 59402 Jan 23 18:52:23.692680 sshd-session[1880]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:23.696042 systemd[1]: sshd@9-10.0.4.9:22-20.161.92.111:59402.service: Deactivated successfully. Jan 23 18:52:23.697360 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:52:23.698837 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:52:23.699466 systemd-logind[1561]: Removed session 10. Jan 23 18:52:23.796340 systemd[1]: Started sshd@10-10.0.4.9:22-20.161.92.111:59410.service - OpenSSH per-connection server daemon (20.161.92.111:59410). Jan 23 18:52:24.032898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:52:24.034176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:52:24.154732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:52:24.162016 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:52:24.194231 kubelet[1927]: E0123 18:52:24.194165 1927 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:52:24.196016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:52:24.196158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:52:24.196636 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.4M memory peak. Jan 23 18:52:24.399196 sshd[1916]: Accepted publickey for core from 20.161.92.111 port 59410 ssh2: RSA SHA256:VDlkwcKZUiCA3SJ7l6IBIk9gpMWQp4GVsOfhsnX+NJs Jan 23 18:52:24.400314 sshd-session[1916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:24.404254 systemd-logind[1561]: New session 11 of user core. Jan 23 18:52:24.413921 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:52:24.727540 sudo[1935]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:52:24.728186 sudo[1935]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:52:25.321921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 18:52:25.322034 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.4M memory peak. Jan 23 18:52:25.323845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:52:25.346585 systemd[1]: Reload requested from client PID 1967 ('systemctl') (unit session-11.scope)... Jan 23 18:52:25.346722 systemd[1]: Reloading... Jan 23 18:52:25.425687 zram_generator::config[2015]: No configuration found. Jan 23 18:52:25.590661 systemd[1]: Reloading finished in 243 ms. Jan 23 18:52:25.620003 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:52:25.620181 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:52:25.620431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:52:25.620513 systemd[1]: kubelet.service: Consumed 77ms CPU time, 98.2M memory peak. Jan 23 18:52:25.621853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:52:25.738273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:52:25.749013 (kubelet)[2061]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:52:25.781674 kubelet[2061]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:52:25.781674 kubelet[2061]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:52:25.781948 kubelet[2061]: I0123 18:52:25.781722 2061 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:52:26.111104 kubelet[2061]: I0123 18:52:26.110996 2061 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:52:26.111104 kubelet[2061]: I0123 18:52:26.111021 2061 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:52:26.112656 kubelet[2061]: I0123 18:52:26.111848 2061 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:52:26.112656 kubelet[2061]: I0123 18:52:26.111863 2061 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:52:26.112656 kubelet[2061]: I0123 18:52:26.112090 2061 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:52:26.116422 kubelet[2061]: I0123 18:52:26.115562 2061 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:52:26.119516 kubelet[2061]: I0123 18:52:26.119490 2061 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:52:26.123640 kubelet[2061]: I0123 18:52:26.123558 2061 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 18:52:26.125226 kubelet[2061]: I0123 18:52:26.124819 2061 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:52:26.125226 kubelet[2061]: I0123 18:52:26.124851 2061 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.4.9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:52:26.125226 kubelet[2061]: I0123 18:52:26.125103 2061 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:52:26.125226 kubelet[2061]: I0123 18:52:26.125113 2061 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:52:26.125420 kubelet[2061]: I0123 18:52:26.125197 2061 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:52:26.128338 kubelet[2061]: I0123 18:52:26.128321 2061 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:52:26.128488 kubelet[2061]: I0123 18:52:26.128475 2061 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:52:26.128515 kubelet[2061]: I0123 18:52:26.128491 2061 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:52:26.128515 kubelet[2061]: I0123 18:52:26.128510 2061 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:52:26.128565 kubelet[2061]: I0123 18:52:26.128528 2061 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:52:26.128989 kubelet[2061]: E0123 18:52:26.128967 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:26.129261 kubelet[2061]: E0123 18:52:26.129056 2061 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:26.132667 kubelet[2061]: I0123 18:52:26.131970 2061 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:52:26.132667 kubelet[2061]: I0123 18:52:26.132463 2061 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:52:26.132667 kubelet[2061]: I0123 18:52:26.132487 2061 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 18:52:26.132667 kubelet[2061]: W0123 18:52:26.132534 2061 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:52:26.136268 kubelet[2061]: I0123 18:52:26.136255 2061 server.go:1262] "Started kubelet" Jan 23 18:52:26.137838 kubelet[2061]: I0123 18:52:26.137827 2061 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:52:26.141560 kubelet[2061]: E0123 18:52:26.141535 2061 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.4.9\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:52:26.141740 kubelet[2061]: E0123 18:52:26.141728 2061 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:52:26.146912 kubelet[2061]: E0123 18:52:26.145665 2061 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.4.9.188d70e25c144fc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.4.9,UID:10.0.4.9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.4.9,},FirstTimestamp:2026-01-23 18:52:26.136227783 +0000 UTC m=+0.384086106,LastTimestamp:2026-01-23 18:52:26.136227783 +0000 UTC m=+0.384086106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.4.9,}" Jan 23 18:52:26.147302 kubelet[2061]: E0123 18:52:26.147290 2061 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:52:26.147565 kubelet[2061]: I0123 18:52:26.147534 2061 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:52:26.148849 kubelet[2061]: I0123 18:52:26.148834 2061 server.go:310] "Adding debug handlers to kubelet server" Jan 23 18:52:26.151115 kubelet[2061]: I0123 18:52:26.150695 2061 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:52:26.151115 kubelet[2061]: E0123 18:52:26.150915 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:26.151115 kubelet[2061]: I0123 18:52:26.151103 2061 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:52:26.151208 kubelet[2061]: I0123 18:52:26.151168 2061 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:52:26.151483 kubelet[2061]: I0123 18:52:26.151454 2061 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:52:26.151521 kubelet[2061]: I0123 18:52:26.151498 2061 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:52:26.151950 kubelet[2061]: I0123 18:52:26.151940 2061 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:52:26.153791 kubelet[2061]: I0123 18:52:26.153774 2061 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:52:26.154198 kubelet[2061]: I0123 18:52:26.154185 2061 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:52:26.154278 kubelet[2061]: I0123 18:52:26.154262 2061 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:52:26.158404 kubelet[2061]: I0123 18:52:26.158392 2061 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:52:26.161735 kubelet[2061]: E0123 18:52:26.161720 2061 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.4.9\" not found" node="10.0.4.9" Jan 23 18:52:26.175285 kubelet[2061]: I0123 18:52:26.175264 2061 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:52:26.175285 kubelet[2061]: I0123 18:52:26.175278 2061 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:52:26.175399 kubelet[2061]: I0123 18:52:26.175295 2061 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:52:26.183477 kubelet[2061]: I0123 18:52:26.183450 2061 policy_none.go:49] "None policy: Start" Jan 23 18:52:26.183477 kubelet[2061]: I0123 18:52:26.183471 2061 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:52:26.183477 kubelet[2061]: I0123 18:52:26.183481 2061 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 18:52:26.186010 kubelet[2061]: I0123 18:52:26.185997 2061 policy_none.go:47] "Start" Jan 23 18:52:26.190659 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:52:26.200259 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:52:26.203799 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
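The NodeConfig dump above spells out the hard eviction thresholds this kubelet runs with: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A minimal sketch of how such LessThan thresholds can be evaluated; the threshold values are copied from the log, while the helper and the sample observations are illustrative and not kubelet code:

```python
# Illustrative only: evaluate kubelet-style hard eviction thresholds.
# Threshold values come from the NodeConfig dump above; sample observations are made up.

THRESHOLDS = {
    "memory.available":   {"quantity": 100 * 1024 * 1024},  # 100Mi
    "nodefs.available":   {"percentage": 0.10},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.available":  {"percentage": 0.15},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def breached(signal: str, available: float, capacity: float) -> bool:
    """Return True if the LessThan threshold for `signal` is crossed."""
    t = THRESHOLDS[signal]
    limit = t["quantity"] if "quantity" in t else t["percentage"] * capacity
    return available < limit

# Hypothetical observations: a 4Gi node with 80Mi of reclaimable memory left.
print(breached("memory.available", available=80 * 1024**2, capacity=4 * 1024**3))   # True
print(breached("nodefs.available", available=30 * 1024**3, capacity=40 * 1024**3))  # False
```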
Jan 23 18:52:26.208660 kubelet[2061]: E0123 18:52:26.208399 2061 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:52:26.208660 kubelet[2061]: I0123 18:52:26.208537 2061 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:52:26.208660 kubelet[2061]: I0123 18:52:26.208545 2061 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:52:26.210713 kubelet[2061]: I0123 18:52:26.209245 2061 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:52:26.211906 kubelet[2061]: E0123 18:52:26.211892 2061 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:52:26.211942 kubelet[2061]: E0123 18:52:26.211934 2061 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.4.9\" not found" Jan 23 18:52:26.221269 kubelet[2061]: I0123 18:52:26.221171 2061 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 18:52:26.222413 kubelet[2061]: I0123 18:52:26.222396 2061 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 18:52:26.222413 kubelet[2061]: I0123 18:52:26.222411 2061 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:52:26.222488 kubelet[2061]: I0123 18:52:26.222430 2061 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:52:26.222508 kubelet[2061]: E0123 18:52:26.222499 2061 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 23 18:52:26.251007 sudo[1935]: pam_unix(sudo:session): session closed for user root Jan 23 18:52:26.309347 kubelet[2061]: I0123 18:52:26.309161 2061 kubelet_node_status.go:75] "Attempting to register node" node="10.0.4.9" Jan 23 18:52:26.313324 kubelet[2061]: I0123 18:52:26.313310 2061 kubelet_node_status.go:78] "Successfully registered node" node="10.0.4.9" Jan 23 18:52:26.313388 kubelet[2061]: E0123 18:52:26.313381 2061 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"10.0.4.9\": node \"10.0.4.9\" not found" Jan 23 18:52:26.321611 kubelet[2061]: E0123 18:52:26.321583 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:26.346701 sshd[1934]: Connection closed by 20.161.92.111 port 59410 Jan 23 18:52:26.346271 sshd-session[1916]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:26.349141 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:52:26.349489 systemd[1]: sshd@10-10.0.4.9:22-20.161.92.111:59410.service: Deactivated successfully. Jan 23 18:52:26.351335 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:52:26.351550 systemd[1]: session-11.scope: Consumed 389ms CPU time, 76.5M memory peak. Jan 23 18:52:26.353620 systemd-logind[1561]: Removed session 11. 
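Several units in this excerpt report CPU and memory accounting when they stop, for example "kubelet.service: Consumed 126ms CPU time, 110.4M memory peak" and "session-11.scope: Consumed 389ms CPU time, 76.5M memory peak". A small sketch that pulls those figures out of journal text; it assumes the simple single-unit forms seen here (ms/s CPU time, K/M/G memory peak) and is not how systemd itself formats or parses them:

```python
import re

# Matches systemd accounting lines such as:
#   "kubelet.service: Consumed 126ms CPU time, 110.4M memory peak."
ACCT = re.compile(
    r"(?P<unit>\S+): Consumed (?P<cpu>[\d.]+)(?P<cpu_unit>ms|s|min) CPU time, "
    r"(?P<mem>[\d.]+)(?P<mem_unit>[KMG]) memory peak"
)

def parse_accounting(line: str):
    """Return (unit name, CPU time in ms, memory peak in MiB) or None."""
    m = ACCT.search(line)
    if not m:
        return None
    cpu_ms = float(m["cpu"]) * {"ms": 1, "s": 1000, "min": 60000}[m["cpu_unit"]]
    mem_mb = float(m["mem"]) * {"K": 1 / 1024, "M": 1, "G": 1024}[m["mem_unit"]]
    return m["unit"], cpu_ms, mem_mb

print(parse_accounting(
    "systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.4M memory peak."))
# ('kubelet.service', 126.0, 110.4)
```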
Jan 23 18:52:26.422174 kubelet[2061]: E0123 18:52:26.422078 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:26.522605 kubelet[2061]: E0123 18:52:26.522568 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:26.623279 kubelet[2061]: E0123 18:52:26.623238 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:26.724601 kubelet[2061]: E0123 18:52:26.724504 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:26.825148 kubelet[2061]: E0123 18:52:26.825103 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:26.926061 kubelet[2061]: E0123 18:52:26.926018 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:27.026860 kubelet[2061]: E0123 18:52:27.026759 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:27.113498 kubelet[2061]: I0123 18:52:27.113356 2061 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 18:52:27.113597 kubelet[2061]: I0123 18:52:27.113525 2061 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 23 18:52:27.113597 kubelet[2061]: I0123 18:52:27.113563 2061 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 23 18:52:27.127180 kubelet[2061]: E0123 18:52:27.127157 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:27.129330 kubelet[2061]: E0123 18:52:27.129305 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:27.227718 kubelet[2061]: E0123 18:52:27.227693 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:27.328374 kubelet[2061]: E0123 18:52:27.328333 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:27.429417 kubelet[2061]: E0123 18:52:27.429372 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:27.530030 kubelet[2061]: E0123 18:52:27.529982 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:27.630868 kubelet[2061]: E0123 18:52:27.630754 2061 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.4.9\" not found" Jan 23 18:52:27.731938 kubelet[2061]: I0123 18:52:27.731904 2061 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 23 18:52:27.732156 containerd[1610]: time="2026-01-23T18:52:27.732122109Z" level=info msg="No cni config 
template is specified, wait for other system components to drop the config." Jan 23 18:52:27.732490 kubelet[2061]: I0123 18:52:27.732301 2061 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 23 18:52:28.130069 kubelet[2061]: E0123 18:52:28.130024 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:28.130403 kubelet[2061]: I0123 18:52:28.130103 2061 apiserver.go:52] "Watching apiserver" Jan 23 18:52:28.145662 systemd[1]: Created slice kubepods-burstable-pod6ad8b71a_de93_48e0_a240_fe44d106d040.slice - libcontainer container kubepods-burstable-pod6ad8b71a_de93_48e0_a240_fe44d106d040.slice. Jan 23 18:52:28.152399 kubelet[2061]: I0123 18:52:28.152368 2061 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:52:28.158311 systemd[1]: Created slice kubepods-besteffort-pod8cde2c52_2bab_4be2_a5c7_b22b962d5e3f.slice - libcontainer container kubepods-besteffort-pod8cde2c52_2bab_4be2_a5c7_b22b962d5e3f.slice. Jan 23 18:52:28.161333 kubelet[2061]: I0123 18:52:28.161314 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cde2c52-2bab-4be2-a5c7-b22b962d5e3f-lib-modules\") pod \"kube-proxy-wgpvr\" (UID: \"8cde2c52-2bab-4be2-a5c7-b22b962d5e3f\") " pod="kube-system/kube-proxy-wgpvr" Jan 23 18:52:28.161423 kubelet[2061]: I0123 18:52:28.161337 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-etc-cni-netd\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.161423 kubelet[2061]: I0123 18:52:28.161352 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ad8b71a-de93-48e0-a240-fe44d106d040-clustermesh-secrets\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.161423 kubelet[2061]: I0123 18:52:28.161362 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cni-path\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.161423 kubelet[2061]: I0123 18:52:28.161377 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-host-proc-sys-net\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.161423 kubelet[2061]: I0123 18:52:28.161390 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2httr\" (UniqueName: \"kubernetes.io/projected/6ad8b71a-de93-48e0-a240-fe44d106d040-kube-api-access-2httr\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.161514 kubelet[2061]: I0123 18:52:28.161405 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7prk5\" (UniqueName: 
\"kubernetes.io/projected/8cde2c52-2bab-4be2-a5c7-b22b962d5e3f-kube-api-access-7prk5\") pod \"kube-proxy-wgpvr\" (UID: \"8cde2c52-2bab-4be2-a5c7-b22b962d5e3f\") " pod="kube-system/kube-proxy-wgpvr" Jan 23 18:52:28.161514 kubelet[2061]: I0123 18:52:28.161416 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-run\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.161514 kubelet[2061]: I0123 18:52:28.161429 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-hostproc\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.161514 kubelet[2061]: I0123 18:52:28.161447 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-cgroup\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.161514 kubelet[2061]: I0123 18:52:28.161458 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-xtables-lock\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.161514 kubelet[2061]: I0123 18:52:28.161476 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ad8b71a-de93-48e0-a240-fe44d106d040-hubble-tls\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.162576 kubelet[2061]: I0123 18:52:28.161488 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-bpf-maps\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.162576 kubelet[2061]: I0123 18:52:28.161497 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-lib-modules\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.162576 kubelet[2061]: I0123 18:52:28.161508 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-config-path\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 18:52:28.162576 kubelet[2061]: I0123 18:52:28.161521 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-host-proc-sys-kernel\") pod \"cilium-z2g2s\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " pod="kube-system/cilium-z2g2s" Jan 23 
18:52:28.162576 kubelet[2061]: I0123 18:52:28.161532 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8cde2c52-2bab-4be2-a5c7-b22b962d5e3f-kube-proxy\") pod \"kube-proxy-wgpvr\" (UID: \"8cde2c52-2bab-4be2-a5c7-b22b962d5e3f\") " pod="kube-system/kube-proxy-wgpvr" Jan 23 18:52:28.162576 kubelet[2061]: I0123 18:52:28.161544 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cde2c52-2bab-4be2-a5c7-b22b962d5e3f-xtables-lock\") pod \"kube-proxy-wgpvr\" (UID: \"8cde2c52-2bab-4be2-a5c7-b22b962d5e3f\") " pod="kube-system/kube-proxy-wgpvr" Jan 23 18:52:28.458560 containerd[1610]: time="2026-01-23T18:52:28.458474682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z2g2s,Uid:6ad8b71a-de93-48e0-a240-fe44d106d040,Namespace:kube-system,Attempt:0,}" Jan 23 18:52:28.465994 containerd[1610]: time="2026-01-23T18:52:28.465966773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgpvr,Uid:8cde2c52-2bab-4be2-a5c7-b22b962d5e3f,Namespace:kube-system,Attempt:0,}" Jan 23 18:52:29.022965 containerd[1610]: time="2026-01-23T18:52:29.022705760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:52:29.024529 containerd[1610]: time="2026-01-23T18:52:29.024498014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Jan 23 18:52:29.025659 containerd[1610]: time="2026-01-23T18:52:29.025232555Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:52:29.027082 containerd[1610]: time="2026-01-23T18:52:29.026130401Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:52:29.027082 containerd[1610]: time="2026-01-23T18:52:29.026659356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:52:29.027783 containerd[1610]: time="2026-01-23T18:52:29.027759549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:52:29.028253 containerd[1610]: time="2026-01-23T18:52:29.028236652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 565.152858ms" Jan 23 18:52:29.029909 containerd[1610]: time="2026-01-23T18:52:29.029876859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size 
\"320368\" in 561.567472ms" Jan 23 18:52:29.051517 containerd[1610]: time="2026-01-23T18:52:29.051045210Z" level=info msg="connecting to shim 7835426d4f7e89719326357c2c25adb34a3337f9ff5ea6c0b208961931ede6ad" address="unix:///run/containerd/s/a4df4d8f15f89e4db2a521c85b555f2b9df6da33ad7194056e0ca4ca6c1fd7b6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:29.051779 containerd[1610]: time="2026-01-23T18:52:29.051762307Z" level=info msg="connecting to shim 5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b" address="unix:///run/containerd/s/2228f8430b55a86c01bda3f1a535b715340e837826d4b2d59c8a9a531db1b0b3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:29.071790 systemd[1]: Started cri-containerd-7835426d4f7e89719326357c2c25adb34a3337f9ff5ea6c0b208961931ede6ad.scope - libcontainer container 7835426d4f7e89719326357c2c25adb34a3337f9ff5ea6c0b208961931ede6ad. Jan 23 18:52:29.075362 systemd[1]: Started cri-containerd-5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b.scope - libcontainer container 5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b. Jan 23 18:52:29.103462 containerd[1610]: time="2026-01-23T18:52:29.103427653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgpvr,Uid:8cde2c52-2bab-4be2-a5c7-b22b962d5e3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7835426d4f7e89719326357c2c25adb34a3337f9ff5ea6c0b208961931ede6ad\"" Jan 23 18:52:29.106664 containerd[1610]: time="2026-01-23T18:52:29.106570341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 18:52:29.112534 containerd[1610]: time="2026-01-23T18:52:29.112505553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z2g2s,Uid:6ad8b71a-de93-48e0-a240-fe44d106d040,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\"" Jan 23 18:52:29.130895 kubelet[2061]: E0123 18:52:29.130854 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:29.270162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714667249.mount: Deactivated successfully. Jan 23 18:52:29.945402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810574168.mount: Deactivated successfully. 
Jan 23 18:52:30.131457 kubelet[2061]: E0123 18:52:30.131314 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:30.179556 containerd[1610]: time="2026-01-23T18:52:30.179098930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:30.179884 containerd[1610]: time="2026-01-23T18:52:30.179868352Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965319" Jan 23 18:52:30.180644 containerd[1610]: time="2026-01-23T18:52:30.180614483Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:30.182189 containerd[1610]: time="2026-01-23T18:52:30.182156407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:30.182553 containerd[1610]: time="2026-01-23T18:52:30.182537670Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.075936356s" Jan 23 18:52:30.182600 containerd[1610]: time="2026-01-23T18:52:30.182592274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 18:52:30.184038 containerd[1610]: time="2026-01-23T18:52:30.184026380Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 18:52:30.186469 containerd[1610]: time="2026-01-23T18:52:30.186431235Z" level=info msg="CreateContainer within sandbox \"7835426d4f7e89719326357c2c25adb34a3337f9ff5ea6c0b208961931ede6ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:52:30.200713 containerd[1610]: time="2026-01-23T18:52:30.198080353Z" level=info msg="Container 2ed491a12ba876b060e1acc6f07f38dc065821408c50bcf59bd981532ed0cfe0: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:30.200812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167604012.mount: Deactivated successfully. 
Jan 23 18:52:30.215773 containerd[1610]: time="2026-01-23T18:52:30.215741735Z" level=info msg="CreateContainer within sandbox \"7835426d4f7e89719326357c2c25adb34a3337f9ff5ea6c0b208961931ede6ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ed491a12ba876b060e1acc6f07f38dc065821408c50bcf59bd981532ed0cfe0\"" Jan 23 18:52:30.216381 containerd[1610]: time="2026-01-23T18:52:30.216362765Z" level=info msg="StartContainer for \"2ed491a12ba876b060e1acc6f07f38dc065821408c50bcf59bd981532ed0cfe0\"" Jan 23 18:52:30.217487 containerd[1610]: time="2026-01-23T18:52:30.217469371Z" level=info msg="connecting to shim 2ed491a12ba876b060e1acc6f07f38dc065821408c50bcf59bd981532ed0cfe0" address="unix:///run/containerd/s/a4df4d8f15f89e4db2a521c85b555f2b9df6da33ad7194056e0ca4ca6c1fd7b6" protocol=ttrpc version=3 Jan 23 18:52:30.233790 systemd[1]: Started cri-containerd-2ed491a12ba876b060e1acc6f07f38dc065821408c50bcf59bd981532ed0cfe0.scope - libcontainer container 2ed491a12ba876b060e1acc6f07f38dc065821408c50bcf59bd981532ed0cfe0. Jan 23 18:52:30.289825 containerd[1610]: time="2026-01-23T18:52:30.289763596Z" level=info msg="StartContainer for \"2ed491a12ba876b060e1acc6f07f38dc065821408c50bcf59bd981532ed0cfe0\" returns successfully" Jan 23 18:52:31.132194 kubelet[2061]: E0123 18:52:31.132107 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:31.249818 kubelet[2061]: I0123 18:52:31.249665 2061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wgpvr" podStartSLOduration=4.17254468 podStartE2EDuration="5.249649818s" podCreationTimestamp="2026-01-23 18:52:26 +0000 UTC" firstStartedPulling="2026-01-23 18:52:29.106242805 +0000 UTC m=+3.354101141" lastFinishedPulling="2026-01-23 18:52:30.183347955 +0000 UTC m=+4.431206279" observedRunningTime="2026-01-23 18:52:31.248437502 +0000 UTC m=+5.496295847" watchObservedRunningTime="2026-01-23 18:52:31.249649818 +0000 UTC m=+5.497508155" Jan 23 18:52:32.132517 kubelet[2061]: E0123 18:52:32.132468 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:33.133570 kubelet[2061]: E0123 18:52:33.133523 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:34.134332 kubelet[2061]: E0123 18:52:34.134261 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:35.134898 kubelet[2061]: E0123 18:52:35.134863 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:35.141702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422669945.mount: Deactivated successfully. Jan 23 18:52:35.204325 update_engine[1562]: I20260123 18:52:35.204232 1562 update_attempter.cc:509] Updating boot flags... 
Jan 23 18:52:36.135356 kubelet[2061]: E0123 18:52:36.135327 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:36.657980 containerd[1610]: time="2026-01-23T18:52:36.657941016Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:36.659246 containerd[1610]: time="2026-01-23T18:52:36.659223492Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 18:52:36.659900 containerd[1610]: time="2026-01-23T18:52:36.659880292Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:36.661341 containerd[1610]: time="2026-01-23T18:52:36.661317728Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.477222728s" Jan 23 18:52:36.661380 containerd[1610]: time="2026-01-23T18:52:36.661345023Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 18:52:36.665238 containerd[1610]: time="2026-01-23T18:52:36.665209005Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 18:52:36.671653 containerd[1610]: time="2026-01-23T18:52:36.670657336Z" level=info msg="Container d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:36.675754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515048957.mount: Deactivated successfully. Jan 23 18:52:36.691252 containerd[1610]: time="2026-01-23T18:52:36.691223202Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\"" Jan 23 18:52:36.691761 containerd[1610]: time="2026-01-23T18:52:36.691698005Z" level=info msg="StartContainer for \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\"" Jan 23 18:52:36.692475 containerd[1610]: time="2026-01-23T18:52:36.692444983Z" level=info msg="connecting to shim d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490" address="unix:///run/containerd/s/2228f8430b55a86c01bda3f1a535b715340e837826d4b2d59c8a9a531db1b0b3" protocol=ttrpc version=3 Jan 23 18:52:36.708753 systemd[1]: Started cri-containerd-d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490.scope - libcontainer container d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490. 
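The Cilium image is pulled by a reference that carries both a tag and a digest (quay.io/cilium/cilium:v1.12.5@sha256:...), which is why the pull result below reports an empty repo tag and only a repo digest. A rough illustration of splitting such a reference into its parts; a real reference parser handles corner cases (implicit docker.io, library/ prefixes, validation) that this sketch ignores:

```python
def split_image_ref(ref: str):
    """Very rough split of '<registry>/<repo>[:tag][@sha256:digest]'."""
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    registry, _, repo = ref.partition("/")
    tag = None
    if ":" in repo:                      # the tag colon sits after the last '/',
        repo, tag = repo.rsplit(":", 1)  # so registry ports are not confused with tags
    return {"registry": registry, "repository": repo, "tag": tag, "digest": digest}

print(split_image_ref(
    "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"))
# {'registry': 'quay.io', 'repository': 'cilium/cilium', 'tag': 'v1.12.5', 'digest': 'sha256:06ce...'}
```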
Jan 23 18:52:36.733803 containerd[1610]: time="2026-01-23T18:52:36.733774996Z" level=info msg="StartContainer for \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\" returns successfully" Jan 23 18:52:36.741445 systemd[1]: cri-containerd-d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490.scope: Deactivated successfully. Jan 23 18:52:36.743127 containerd[1610]: time="2026-01-23T18:52:36.743062882Z" level=info msg="received container exit event container_id:\"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\" id:\"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\" pid:2440 exited_at:{seconds:1769194356 nanos:742458232}" Jan 23 18:52:36.759464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490-rootfs.mount: Deactivated successfully. Jan 23 18:52:37.136174 kubelet[2061]: E0123 18:52:37.136146 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:37.254458 containerd[1610]: time="2026-01-23T18:52:37.254429062Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 18:52:37.262428 containerd[1610]: time="2026-01-23T18:52:37.261980886Z" level=info msg="Container 52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:37.269835 containerd[1610]: time="2026-01-23T18:52:37.269804869Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\"" Jan 23 18:52:37.270198 containerd[1610]: time="2026-01-23T18:52:37.270181378Z" level=info msg="StartContainer for \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\"" Jan 23 18:52:37.270862 containerd[1610]: time="2026-01-23T18:52:37.270838187Z" level=info msg="connecting to shim 52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286" address="unix:///run/containerd/s/2228f8430b55a86c01bda3f1a535b715340e837826d4b2d59c8a9a531db1b0b3" protocol=ttrpc version=3 Jan 23 18:52:37.285760 systemd[1]: Started cri-containerd-52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286.scope - libcontainer container 52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286. Jan 23 18:52:37.309438 containerd[1610]: time="2026-01-23T18:52:37.309404770Z" level=info msg="StartContainer for \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\" returns successfully" Jan 23 18:52:37.319084 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:52:37.319365 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:52:37.319477 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:52:37.321872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:52:37.322026 systemd[1]: cri-containerd-52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286.scope: Deactivated successfully. 
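Container exit events are logged with a raw protobuf-style timestamp, exited_at:{seconds:... nanos:...}. Converting the mount-cgroup exit above back to wall-clock time lands within a millisecond of the journal entry that carried it; the helper below is just that conversion:

```python
from datetime import datetime, timezone

def exited_at(seconds: int, nanos: int) -> datetime:
    """Convert a containerd exit-event timestamp (seconds + nanos) to a UTC datetime."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)

# Values copied from the d3d6ab55... exit event in the log.
print(exited_at(1769194356, 742458232))   # 2026-01-23 18:52:36.742458+00:00
```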
Jan 23 18:52:37.326061 containerd[1610]: time="2026-01-23T18:52:37.325614801Z" level=info msg="received container exit event container_id:\"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\" id:\"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\" pid:2485 exited_at:{seconds:1769194357 nanos:325349492}" Jan 23 18:52:37.334758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:52:38.136970 kubelet[2061]: E0123 18:52:38.136909 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:38.254162 containerd[1610]: time="2026-01-23T18:52:38.254134667Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 18:52:38.272536 containerd[1610]: time="2026-01-23T18:52:38.269575309Z" level=info msg="Container e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:38.272018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474439549.mount: Deactivated successfully. Jan 23 18:52:38.280643 containerd[1610]: time="2026-01-23T18:52:38.280603767Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\"" Jan 23 18:52:38.281393 containerd[1610]: time="2026-01-23T18:52:38.281360363Z" level=info msg="StartContainer for \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\"" Jan 23 18:52:38.282490 containerd[1610]: time="2026-01-23T18:52:38.282469484Z" level=info msg="connecting to shim e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb" address="unix:///run/containerd/s/2228f8430b55a86c01bda3f1a535b715340e837826d4b2d59c8a9a531db1b0b3" protocol=ttrpc version=3 Jan 23 18:52:38.304789 systemd[1]: Started cri-containerd-e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb.scope - libcontainer container e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb. Jan 23 18:52:38.365796 systemd[1]: cri-containerd-e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb.scope: Deactivated successfully. Jan 23 18:52:38.368108 containerd[1610]: time="2026-01-23T18:52:38.368074066Z" level=info msg="StartContainer for \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\" returns successfully" Jan 23 18:52:38.369559 containerd[1610]: time="2026-01-23T18:52:38.369408901Z" level=info msg="received container exit event container_id:\"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\" id:\"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\" pid:2533 exited_at:{seconds:1769194358 nanos:368672336}" Jan 23 18:52:38.385505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb-rootfs.mount: Deactivated successfully. 
Jan 23 18:52:39.137418 kubelet[2061]: E0123 18:52:39.137374 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:39.256679 containerd[1610]: time="2026-01-23T18:52:39.256622334Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 18:52:39.268776 containerd[1610]: time="2026-01-23T18:52:39.267105625Z" level=info msg="Container 2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:39.275329 containerd[1610]: time="2026-01-23T18:52:39.275305923Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\"" Jan 23 18:52:39.275883 containerd[1610]: time="2026-01-23T18:52:39.275867196Z" level=info msg="StartContainer for \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\"" Jan 23 18:52:39.276492 containerd[1610]: time="2026-01-23T18:52:39.276473582Z" level=info msg="connecting to shim 2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb" address="unix:///run/containerd/s/2228f8430b55a86c01bda3f1a535b715340e837826d4b2d59c8a9a531db1b0b3" protocol=ttrpc version=3 Jan 23 18:52:39.295764 systemd[1]: Started cri-containerd-2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb.scope - libcontainer container 2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb. Jan 23 18:52:39.315445 systemd[1]: cri-containerd-2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb.scope: Deactivated successfully. Jan 23 18:52:39.317541 containerd[1610]: time="2026-01-23T18:52:39.317517579Z" level=info msg="received container exit event container_id:\"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\" id:\"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\" pid:2573 exited_at:{seconds:1769194359 nanos:315999557}" Jan 23 18:52:39.323512 containerd[1610]: time="2026-01-23T18:52:39.323395432Z" level=info msg="StartContainer for \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\" returns successfully" Jan 23 18:52:39.332248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb-rootfs.mount: Deactivated successfully. Jan 23 18:52:40.138333 kubelet[2061]: E0123 18:52:40.138215 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:40.260641 containerd[1610]: time="2026-01-23T18:52:40.260453297Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 18:52:40.271704 containerd[1610]: time="2026-01-23T18:52:40.271640891Z" level=info msg="Container 2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:40.273671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217208279.mount: Deactivated successfully. 
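The cilium-z2g2s pod works through its init containers strictly in sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and then the long-running cilium-agent whose creation starts above. A small sketch that recovers that order from the CreateContainer lines; the sample lines are trimmed copies of the ones in this log:

```python
import re

# Pulls the container name out of containerd's "CreateContainer within sandbox ..." lines.
CREATE = re.compile(r"CreateContainer within sandbox .* for container &ContainerMetadata\{Name:(?P<name>[^,]+),")

# Trimmed copies of the containerd lines above, in journal order.
lines = [
    'msg="CreateContainer within sandbox \\"5ce6...\\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"5ce6...\\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"5ce6...\\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"5ce6...\\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"5ce6...\\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"',
]

order = [m["name"] for line in lines if (m := CREATE.search(line))]
print(order)
# ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state', 'cilium-agent']
```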
Jan 23 18:52:40.282585 containerd[1610]: time="2026-01-23T18:52:40.282561217Z" level=info msg="CreateContainer within sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\"" Jan 23 18:52:40.283724 containerd[1610]: time="2026-01-23T18:52:40.283706230Z" level=info msg="StartContainer for \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\"" Jan 23 18:52:40.284384 containerd[1610]: time="2026-01-23T18:52:40.284326361Z" level=info msg="connecting to shim 2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8" address="unix:///run/containerd/s/2228f8430b55a86c01bda3f1a535b715340e837826d4b2d59c8a9a531db1b0b3" protocol=ttrpc version=3 Jan 23 18:52:40.300757 systemd[1]: Started cri-containerd-2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8.scope - libcontainer container 2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8. Jan 23 18:52:40.338173 containerd[1610]: time="2026-01-23T18:52:40.338094218Z" level=info msg="StartContainer for \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\" returns successfully" Jan 23 18:52:40.481517 kubelet[2061]: I0123 18:52:40.480948 2061 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 18:52:40.617648 kernel: Initializing XFRM netlink socket Jan 23 18:52:41.138444 kubelet[2061]: E0123 18:52:41.138358 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:42.139528 kubelet[2061]: E0123 18:52:42.139488 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:42.247120 systemd-networkd[1508]: cilium_host: Link UP Jan 23 18:52:42.248577 systemd-networkd[1508]: cilium_net: Link UP Jan 23 18:52:42.249242 systemd-networkd[1508]: cilium_net: Gained carrier Jan 23 18:52:42.249350 systemd-networkd[1508]: cilium_host: Gained carrier Jan 23 18:52:42.328828 systemd-networkd[1508]: cilium_vxlan: Link UP Jan 23 18:52:42.328834 systemd-networkd[1508]: cilium_vxlan: Gained carrier Jan 23 18:52:42.438005 systemd-networkd[1508]: cilium_host: Gained IPv6LL Jan 23 18:52:42.510684 kernel: NET: Registered PF_ALG protocol family Jan 23 18:52:42.972820 systemd-networkd[1508]: cilium_net: Gained IPv6LL Jan 23 18:52:43.025262 systemd-networkd[1508]: lxc_health: Link UP Jan 23 18:52:43.030965 systemd-networkd[1508]: lxc_health: Gained carrier Jan 23 18:52:43.139977 kubelet[2061]: E0123 18:52:43.139835 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:43.484781 systemd-networkd[1508]: cilium_vxlan: Gained IPv6LL Jan 23 18:52:44.140080 kubelet[2061]: E0123 18:52:44.140042 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:44.501016 kubelet[2061]: I0123 18:52:44.500702 2061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z2g2s" podStartSLOduration=10.952421946 podStartE2EDuration="18.500689011s" podCreationTimestamp="2026-01-23 18:52:26 +0000 UTC" firstStartedPulling="2026-01-23 18:52:29.113671921 +0000 UTC m=+3.361530243" lastFinishedPulling="2026-01-23 18:52:36.661938987 +0000 UTC m=+10.909797308" observedRunningTime="2026-01-23 18:52:41.277814234 +0000 UTC 
m=+15.525672573" watchObservedRunningTime="2026-01-23 18:52:44.500689011 +0000 UTC m=+18.748547353" Jan 23 18:52:44.956857 systemd-networkd[1508]: lxc_health: Gained IPv6LL Jan 23 18:52:45.140898 kubelet[2061]: E0123 18:52:45.140841 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:46.123088 kubelet[2061]: I0123 18:52:46.123051 2061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:52:46.128792 kubelet[2061]: E0123 18:52:46.128759 2061 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:46.141342 kubelet[2061]: E0123 18:52:46.141301 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:47.142236 kubelet[2061]: E0123 18:52:47.142200 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:47.891097 systemd[1]: Created slice kubepods-besteffort-podb51dce44_b3f9_4c60_a7e8_c53c80d10895.slice - libcontainer container kubepods-besteffort-podb51dce44_b3f9_4c60_a7e8_c53c80d10895.slice. Jan 23 18:52:47.996963 kubelet[2061]: I0123 18:52:47.996924 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xctg\" (UniqueName: \"kubernetes.io/projected/b51dce44-b3f9-4c60-a7e8-c53c80d10895-kube-api-access-5xctg\") pod \"nginx-deployment-bb8f74bfb-zf9hm\" (UID: \"b51dce44-b3f9-4c60-a7e8-c53c80d10895\") " pod="default/nginx-deployment-bb8f74bfb-zf9hm" Jan 23 18:52:48.143555 kubelet[2061]: E0123 18:52:48.143434 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:48.196714 containerd[1610]: time="2026-01-23T18:52:48.196437456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-zf9hm,Uid:b51dce44-b3f9-4c60-a7e8-c53c80d10895,Namespace:default,Attempt:0,}" Jan 23 18:52:48.220041 kernel: eth0: renamed from tmpb59cc Jan 23 18:52:48.219456 systemd-networkd[1508]: lxcab374488f394: Link UP Jan 23 18:52:48.221124 systemd-networkd[1508]: lxcab374488f394: Gained carrier Jan 23 18:52:48.332650 containerd[1610]: time="2026-01-23T18:52:48.332491242Z" level=info msg="connecting to shim b59ccb719a03d5873d712304933a6c6b4d1072297ce86e116935c0f88f49066c" address="unix:///run/containerd/s/9386f6a0c4cfd61e57eb90165b3da9f08b8f1351c88623584ba13c2818763ef0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:48.351761 systemd[1]: Started cri-containerd-b59ccb719a03d5873d712304933a6c6b4d1072297ce86e116935c0f88f49066c.scope - libcontainer container b59ccb719a03d5873d712304933a6c6b4d1072297ce86e116935c0f88f49066c. 
Jan 23 18:52:48.391169 containerd[1610]: time="2026-01-23T18:52:48.391141941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-zf9hm,Uid:b51dce44-b3f9-4c60-a7e8-c53c80d10895,Namespace:default,Attempt:0,} returns sandbox id \"b59ccb719a03d5873d712304933a6c6b4d1072297ce86e116935c0f88f49066c\"" Jan 23 18:52:48.392585 containerd[1610]: time="2026-01-23T18:52:48.392566018Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 18:52:49.144640 kubelet[2061]: E0123 18:52:49.144498 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:49.757927 systemd-networkd[1508]: lxcab374488f394: Gained IPv6LL Jan 23 18:52:50.144912 kubelet[2061]: E0123 18:52:50.144834 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:50.513886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312908786.mount: Deactivated successfully. Jan 23 18:52:51.145060 kubelet[2061]: E0123 18:52:51.145027 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:51.388275 containerd[1610]: time="2026-01-23T18:52:51.388047471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:51.389519 containerd[1610]: time="2026-01-23T18:52:51.389499518Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 23 18:52:51.390307 containerd[1610]: time="2026-01-23T18:52:51.390278651Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:51.393638 containerd[1610]: time="2026-01-23T18:52:51.393510582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:52:51.394042 containerd[1610]: time="2026-01-23T18:52:51.394015658Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 3.00142279s" Jan 23 18:52:51.394081 containerd[1610]: time="2026-01-23T18:52:51.394046836Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 18:52:51.397301 containerd[1610]: time="2026-01-23T18:52:51.397241414Z" level=info msg="CreateContainer within sandbox \"b59ccb719a03d5873d712304933a6c6b4d1072297ce86e116935c0f88f49066c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 23 18:52:51.406561 containerd[1610]: time="2026-01-23T18:52:51.406178907Z" level=info msg="Container ea99b1384dabf99b4986b859fae8ad2b415c2d2fbb50d91c1e71e055344310a4: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:52:51.408318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670040054.mount: Deactivated successfully. 
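The "stop pulling image" line records how many bytes were actually transferred and the "Pulled image ... in ..." line the elapsed time, so together they give an effective pull rate. For the nginx image pulled below that works out to roughly 20 MiB/s; both numbers are copied from the log:

```python
# Effective pull rate for ghcr.io/flatcar/nginx:latest, figures copied from the log.
bytes_read = 63_836_480          # "stop pulling image ...: active requests=0, bytes read=63836480"
elapsed_s  = 3.00142279          # "Pulled image ... in 3.00142279s"

rate = bytes_read / elapsed_s
print(f"{rate / 1024 / 1024:.1f} MiB/s")   # ~20.3 MiB/s
```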
Jan 23 18:52:51.423427 containerd[1610]: time="2026-01-23T18:52:51.423402294Z" level=info msg="CreateContainer within sandbox \"b59ccb719a03d5873d712304933a6c6b4d1072297ce86e116935c0f88f49066c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ea99b1384dabf99b4986b859fae8ad2b415c2d2fbb50d91c1e71e055344310a4\"" Jan 23 18:52:51.423879 containerd[1610]: time="2026-01-23T18:52:51.423863172Z" level=info msg="StartContainer for \"ea99b1384dabf99b4986b859fae8ad2b415c2d2fbb50d91c1e71e055344310a4\"" Jan 23 18:52:51.424443 containerd[1610]: time="2026-01-23T18:52:51.424418609Z" level=info msg="connecting to shim ea99b1384dabf99b4986b859fae8ad2b415c2d2fbb50d91c1e71e055344310a4" address="unix:///run/containerd/s/9386f6a0c4cfd61e57eb90165b3da9f08b8f1351c88623584ba13c2818763ef0" protocol=ttrpc version=3 Jan 23 18:52:51.447773 systemd[1]: Started cri-containerd-ea99b1384dabf99b4986b859fae8ad2b415c2d2fbb50d91c1e71e055344310a4.scope - libcontainer container ea99b1384dabf99b4986b859fae8ad2b415c2d2fbb50d91c1e71e055344310a4. Jan 23 18:52:51.471637 containerd[1610]: time="2026-01-23T18:52:51.471569321Z" level=info msg="StartContainer for \"ea99b1384dabf99b4986b859fae8ad2b415c2d2fbb50d91c1e71e055344310a4\" returns successfully" Jan 23 18:52:52.145848 kubelet[2061]: E0123 18:52:52.145803 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:52.293798 kubelet[2061]: I0123 18:52:52.293735 2061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-zf9hm" podStartSLOduration=2.291056175 podStartE2EDuration="5.293721103s" podCreationTimestamp="2026-01-23 18:52:47 +0000 UTC" firstStartedPulling="2026-01-23 18:52:48.392017441 +0000 UTC m=+22.639875776" lastFinishedPulling="2026-01-23 18:52:51.394682379 +0000 UTC m=+25.642540704" observedRunningTime="2026-01-23 18:52:52.29370829 +0000 UTC m=+26.541566632" watchObservedRunningTime="2026-01-23 18:52:52.293721103 +0000 UTC m=+26.541579441" Jan 23 18:52:53.146812 kubelet[2061]: E0123 18:52:53.146762 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:54.147230 kubelet[2061]: E0123 18:52:54.147188 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:55.148354 kubelet[2061]: E0123 18:52:55.148311 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:56.148564 kubelet[2061]: E0123 18:52:56.148528 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:57.149655 kubelet[2061]: E0123 18:52:57.149593 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:58.150596 kubelet[2061]: E0123 18:52:58.150547 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:52:58.386539 systemd[1]: Created slice kubepods-besteffort-podf2a181a6_a830_4555_992d_a15898b248d1.slice - libcontainer container kubepods-besteffort-podf2a181a6_a830_4555_992d_a15898b248d1.slice. 
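Each pod in this log gets a libcontainer slice named after its QoS class and UID, e.g. kubepods-besteffort-podf2a181a6_a830_4555_992d_a15898b248d1.slice for UID f2a181a6-a830-4555-992d-a15898b248d1. Assuming the scheme is simply kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice, which matches every burstable and besteffort slice created in this log (guaranteed pods do not appear here and are laid out differently), a tiny helper:

```python
def pod_slice(qos: str, pod_uid: str) -> str:
    """Build the systemd slice name the kubelet uses for a pod (dashes become underscores)."""
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

# Reproduces the slice names created earlier in the log.
print(pod_slice("besteffort", "f2a181a6-a830-4555-992d-a15898b248d1"))
print(pod_slice("burstable",  "6ad8b71a-de93-48e0-a240-fe44d106d040"))
```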
Jan 23 18:52:58.459900 kubelet[2061]: I0123 18:52:58.459606 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f2a181a6-a830-4555-992d-a15898b248d1-data\") pod \"nfs-server-provisioner-0\" (UID: \"f2a181a6-a830-4555-992d-a15898b248d1\") " pod="default/nfs-server-provisioner-0" Jan 23 18:52:58.459900 kubelet[2061]: I0123 18:52:58.459859 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvwnx\" (UniqueName: \"kubernetes.io/projected/f2a181a6-a830-4555-992d-a15898b248d1-kube-api-access-jvwnx\") pod \"nfs-server-provisioner-0\" (UID: \"f2a181a6-a830-4555-992d-a15898b248d1\") " pod="default/nfs-server-provisioner-0" Jan 23 18:52:58.692114 containerd[1610]: time="2026-01-23T18:52:58.692071229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f2a181a6-a830-4555-992d-a15898b248d1,Namespace:default,Attempt:0,}" Jan 23 18:52:58.709139 systemd-networkd[1508]: lxc9d4714deb01e: Link UP Jan 23 18:52:58.715661 kernel: eth0: renamed from tmp1dec6 Jan 23 18:52:58.717727 systemd-networkd[1508]: lxc9d4714deb01e: Gained carrier Jan 23 18:52:58.873022 containerd[1610]: time="2026-01-23T18:52:58.872990755Z" level=info msg="connecting to shim 1dec600c1a114acc8e44f94d93e5779ff1a2fb41f18c3cc81f39c8213799274c" address="unix:///run/containerd/s/495f00206e96cd6f9e25b9b5ff8b23e8cf3eebec3602dffc23afd97219706862" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:52:58.895865 systemd[1]: Started cri-containerd-1dec600c1a114acc8e44f94d93e5779ff1a2fb41f18c3cc81f39c8213799274c.scope - libcontainer container 1dec600c1a114acc8e44f94d93e5779ff1a2fb41f18c3cc81f39c8213799274c. Jan 23 18:52:58.931758 containerd[1610]: time="2026-01-23T18:52:58.931731275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f2a181a6-a830-4555-992d-a15898b248d1,Namespace:default,Attempt:0,} returns sandbox id \"1dec600c1a114acc8e44f94d93e5779ff1a2fb41f18c3cc81f39c8213799274c\"" Jan 23 18:52:58.933236 containerd[1610]: time="2026-01-23T18:52:58.933213625Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 23 18:52:59.151660 kubelet[2061]: E0123 18:52:59.151603 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:00.152651 kubelet[2061]: E0123 18:53:00.152603 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:00.188964 systemd-networkd[1508]: lxc9d4714deb01e: Gained IPv6LL Jan 23 18:53:00.520830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3030794156.mount: Deactivated successfully. 
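The two VerifyControllerAttachedVolume entries above cover the provisioner pod's volumes: an emptyDir named "data" and the projected service-account token volume "kube-api-access-jvwnx". As a rough illustration (pod and namespace names from the log; kubeconfig discovery and everything else assumed), the same information can be read back from the pod spec with client-go:

```go
// Print the volume sources declared by nfs-server-provisioner-0, mirroring
// the reconciler entries above. Assumes a reachable cluster via ~/.kube/config.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get(context.Background(),
		"nfs-server-provisioner-0", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range pod.Spec.Volumes {
		switch {
		case v.EmptyDir != nil:
			fmt.Printf("%s: emptyDir\n", v.Name) // the "data" volume
		case v.Projected != nil:
			fmt.Printf("%s: projected token\n", v.Name) // kube-api-access-jvwnx
		default:
			fmt.Printf("%s: other source\n", v.Name)
		}
	}
}
```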
Jan 23 18:53:01.153674 kubelet[2061]: E0123 18:53:01.153640 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:01.767892 containerd[1610]: time="2026-01-23T18:53:01.767279281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:01.768658 containerd[1610]: time="2026-01-23T18:53:01.768434159Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039474" Jan 23 18:53:01.769839 containerd[1610]: time="2026-01-23T18:53:01.769809104Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:01.772690 containerd[1610]: time="2026-01-23T18:53:01.772663206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:01.773869 containerd[1610]: time="2026-01-23T18:53:01.773691653Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 2.840433182s" Jan 23 18:53:01.773869 containerd[1610]: time="2026-01-23T18:53:01.773742729Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 23 18:53:01.779643 containerd[1610]: time="2026-01-23T18:53:01.779605677Z" level=info msg="CreateContainer within sandbox \"1dec600c1a114acc8e44f94d93e5779ff1a2fb41f18c3cc81f39c8213799274c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 23 18:53:01.787093 containerd[1610]: time="2026-01-23T18:53:01.787066404Z" level=info msg="Container 2288ad46bac58ceaa54ca1e760ed9bc27fb14dfe064b40d83d88529ec18592df: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:01.800641 containerd[1610]: time="2026-01-23T18:53:01.800544667Z" level=info msg="CreateContainer within sandbox \"1dec600c1a114acc8e44f94d93e5779ff1a2fb41f18c3cc81f39c8213799274c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2288ad46bac58ceaa54ca1e760ed9bc27fb14dfe064b40d83d88529ec18592df\"" Jan 23 18:53:01.802415 containerd[1610]: time="2026-01-23T18:53:01.801480914Z" level=info msg="StartContainer for \"2288ad46bac58ceaa54ca1e760ed9bc27fb14dfe064b40d83d88529ec18592df\"" Jan 23 18:53:01.802415 containerd[1610]: time="2026-01-23T18:53:01.802317049Z" level=info msg="connecting to shim 2288ad46bac58ceaa54ca1e760ed9bc27fb14dfe064b40d83d88529ec18592df" address="unix:///run/containerd/s/495f00206e96cd6f9e25b9b5ff8b23e8cf3eebec3602dffc23afd97219706862" protocol=ttrpc version=3 Jan 23 18:53:01.822817 systemd[1]: Started cri-containerd-2288ad46bac58ceaa54ca1e760ed9bc27fb14dfe064b40d83d88529ec18592df.scope - libcontainer container 2288ad46bac58ceaa54ca1e760ed9bc27fb14dfe064b40d83d88529ec18592df. 
Jan 23 18:53:01.849855 containerd[1610]: time="2026-01-23T18:53:01.849820488Z" level=info msg="StartContainer for \"2288ad46bac58ceaa54ca1e760ed9bc27fb14dfe064b40d83d88529ec18592df\" returns successfully" Jan 23 18:53:02.154091 kubelet[2061]: E0123 18:53:02.154012 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:02.325472 kubelet[2061]: I0123 18:53:02.325423 2061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.482616597 podStartE2EDuration="4.325408562s" podCreationTimestamp="2026-01-23 18:52:58 +0000 UTC" firstStartedPulling="2026-01-23 18:52:58.932952561 +0000 UTC m=+33.180810883" lastFinishedPulling="2026-01-23 18:53:01.775744525 +0000 UTC m=+36.023602848" observedRunningTime="2026-01-23 18:53:02.324382758 +0000 UTC m=+36.572241102" watchObservedRunningTime="2026-01-23 18:53:02.325408562 +0000 UTC m=+36.573266904" Jan 23 18:53:03.154917 kubelet[2061]: E0123 18:53:03.154881 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:04.155927 kubelet[2061]: E0123 18:53:04.155878 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:05.156890 kubelet[2061]: E0123 18:53:05.156828 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:06.129596 kubelet[2061]: E0123 18:53:06.129535 2061 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:06.157104 kubelet[2061]: E0123 18:53:06.157040 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:07.082091 systemd[1]: Created slice kubepods-besteffort-pod579f60b5_1676_4105_8b4d_73fdab45f1cb.slice - libcontainer container kubepods-besteffort-pod579f60b5_1676_4105_8b4d_73fdab45f1cb.slice. Jan 23 18:53:07.110987 kubelet[2061]: I0123 18:53:07.110942 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c92b13e-95f4-4dff-9826-9a144298a4b5\" (UniqueName: \"kubernetes.io/nfs/579f60b5-1676-4105-8b4d-73fdab45f1cb-pvc-7c92b13e-95f4-4dff-9826-9a144298a4b5\") pod \"test-pod-1\" (UID: \"579f60b5-1676-4105-8b4d-73fdab45f1cb\") " pod="default/test-pod-1" Jan 23 18:53:07.111136 kubelet[2061]: I0123 18:53:07.111112 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfdxc\" (UniqueName: \"kubernetes.io/projected/579f60b5-1676-4105-8b4d-73fdab45f1cb-kube-api-access-qfdxc\") pod \"test-pod-1\" (UID: \"579f60b5-1676-4105-8b4d-73fdab45f1cb\") " pod="default/test-pod-1" Jan 23 18:53:07.157224 kubelet[2061]: E0123 18:53:07.157188 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:07.245659 kernel: netfs: FS-Cache loaded Jan 23 18:53:07.296006 kernel: RPC: Registered named UNIX socket transport module. Jan 23 18:53:07.296109 kernel: RPC: Registered udp transport module. Jan 23 18:53:07.296126 kernel: RPC: Registered tcp transport module. Jan 23 18:53:07.296140 kernel: RPC: Registered tcp-with-tls transport module. Jan 23 18:53:07.296151 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
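The pod_startup_latency_tracker entry above reports two durations for nfs-server-provisioner-0: podStartE2EDuration (pod creation at 18:52:58 to observed running at 18:53:02.324) and podStartSLOduration, which excludes the image-pull window between firstStartedPulling and lastFinishedPulling. Subtracting the roughly 2.84s pull from the 4.325s end-to-end time gives the logged 1.483s. A small sketch reproducing that arithmetic from the logged values (the timestamp layout is an assumption matching Go's default time formatting used in these messages):

```go
// podStartSLOduration = podStartE2EDuration - time spent pulling the image,
// using the values from the tracker entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	firstPull := parse("2026-01-23 18:52:58.932952561 +0000 UTC") // firstStartedPulling
	lastPull := parse("2026-01-23 18:53:01.775744525 +0000 UTC")  // lastFinishedPulling
	e2e := 4325408562 * time.Nanosecond                           // podStartE2EDuration

	pull := lastPull.Sub(firstPull)
	fmt.Println(pull)       // ~2.842791964s pulling nfs-provisioner:v4.0.8
	fmt.Println(e2e - pull) // ~1.482616598s, the logged podStartSLOduration (up to rounding)
}
```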
Jan 23 18:53:07.454935 kernel: NFS: Registering the id_resolver key type Jan 23 18:53:07.455037 kernel: Key type id_resolver registered Jan 23 18:53:07.455838 kernel: Key type id_legacy registered Jan 23 18:53:07.483895 nfsidmap[3411]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 18:53:07.484941 nfsidmap[3411]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 18:53:07.486746 nfsidmap[3412]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 18:53:07.486871 nfsidmap[3412]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 18:53:07.495262 nfsrahead[3414]: setting /var/lib/kubelet/pods/579f60b5-1676-4105-8b4d-73fdab45f1cb/volumes/kubernetes.io~nfs/pvc-7c92b13e-95f4-4dff-9826-9a144298a4b5 readahead to 128 Jan 23 18:53:07.689344 containerd[1610]: time="2026-01-23T18:53:07.689316070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:579f60b5-1676-4105-8b4d-73fdab45f1cb,Namespace:default,Attempt:0,}" Jan 23 18:53:07.707543 systemd-networkd[1508]: lxc37411236f351: Link UP Jan 23 18:53:07.716452 systemd-networkd[1508]: lxc37411236f351: Gained carrier Jan 23 18:53:07.716642 kernel: eth0: renamed from tmp2f254 Jan 23 18:53:07.865121 containerd[1610]: time="2026-01-23T18:53:07.865073558Z" level=info msg="connecting to shim 2f25445529f278c7fc5759659d2ae6b231d4a98f1dd47c50e69c49609b88d5c6" address="unix:///run/containerd/s/134b8993c0c5985c30e63a64cd52d5a691ebb19987a225cb3d10307eb186311a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:07.884783 systemd[1]: Started cri-containerd-2f25445529f278c7fc5759659d2ae6b231d4a98f1dd47c50e69c49609b88d5c6.scope - libcontainer container 2f25445529f278c7fc5759659d2ae6b231d4a98f1dd47c50e69c49609b88d5c6. 
Jan 23 18:53:07.924786 containerd[1610]: time="2026-01-23T18:53:07.924754939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:579f60b5-1676-4105-8b4d-73fdab45f1cb,Namespace:default,Attempt:0,} returns sandbox id \"2f25445529f278c7fc5759659d2ae6b231d4a98f1dd47c50e69c49609b88d5c6\"" Jan 23 18:53:07.925777 containerd[1610]: time="2026-01-23T18:53:07.925721243Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 18:53:08.157656 kubelet[2061]: E0123 18:53:08.157593 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:08.295791 containerd[1610]: time="2026-01-23T18:53:08.295728637Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:08.296654 containerd[1610]: time="2026-01-23T18:53:08.296482581Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 23 18:53:08.300636 containerd[1610]: time="2026-01-23T18:53:08.298705593Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 372.947801ms" Jan 23 18:53:08.300636 containerd[1610]: time="2026-01-23T18:53:08.298733589Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 18:53:08.305089 containerd[1610]: time="2026-01-23T18:53:08.305059353Z" level=info msg="CreateContainer within sandbox \"2f25445529f278c7fc5759659d2ae6b231d4a98f1dd47c50e69c49609b88d5c6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 23 18:53:08.316648 containerd[1610]: time="2026-01-23T18:53:08.316454209Z" level=info msg="Container 64c50cba058ff10cec28b9ea89d6a9eb352e0349d2376cee5f34ce8958fb8302: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:08.329390 containerd[1610]: time="2026-01-23T18:53:08.329324471Z" level=info msg="CreateContainer within sandbox \"2f25445529f278c7fc5759659d2ae6b231d4a98f1dd47c50e69c49609b88d5c6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"64c50cba058ff10cec28b9ea89d6a9eb352e0349d2376cee5f34ce8958fb8302\"" Jan 23 18:53:08.329830 containerd[1610]: time="2026-01-23T18:53:08.329807600Z" level=info msg="StartContainer for \"64c50cba058ff10cec28b9ea89d6a9eb352e0349d2376cee5f34ce8958fb8302\"" Jan 23 18:53:08.330690 containerd[1610]: time="2026-01-23T18:53:08.330620640Z" level=info msg="connecting to shim 64c50cba058ff10cec28b9ea89d6a9eb352e0349d2376cee5f34ce8958fb8302" address="unix:///run/containerd/s/134b8993c0c5985c30e63a64cd52d5a691ebb19987a225cb3d10307eb186311a" protocol=ttrpc version=3 Jan 23 18:53:08.351820 systemd[1]: Started cri-containerd-64c50cba058ff10cec28b9ea89d6a9eb352e0349d2376cee5f34ce8958fb8302.scope - libcontainer container 64c50cba058ff10cec28b9ea89d6a9eb352e0349d2376cee5f34ce8958fb8302. 
Jan 23 18:53:08.376176 containerd[1610]: time="2026-01-23T18:53:08.376141402Z" level=info msg="StartContainer for \"64c50cba058ff10cec28b9ea89d6a9eb352e0349d2376cee5f34ce8958fb8302\" returns successfully" Jan 23 18:53:09.158746 kubelet[2061]: E0123 18:53:09.158697 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:09.344324 kubelet[2061]: I0123 18:53:09.344230 2061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=9.967246027 podStartE2EDuration="10.344216819s" podCreationTimestamp="2026-01-23 18:52:59 +0000 UTC" firstStartedPulling="2026-01-23 18:53:07.92547168 +0000 UTC m=+42.173330002" lastFinishedPulling="2026-01-23 18:53:08.302442472 +0000 UTC m=+42.550300794" observedRunningTime="2026-01-23 18:53:09.343956522 +0000 UTC m=+43.591814866" watchObservedRunningTime="2026-01-23 18:53:09.344216819 +0000 UTC m=+43.592075162" Jan 23 18:53:09.533301 systemd-networkd[1508]: lxc37411236f351: Gained IPv6LL Jan 23 18:53:10.159256 kubelet[2061]: E0123 18:53:10.159202 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:11.159517 kubelet[2061]: E0123 18:53:11.159473 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:12.160019 kubelet[2061]: E0123 18:53:12.159965 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:13.161072 kubelet[2061]: E0123 18:53:13.161023 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:14.161484 kubelet[2061]: E0123 18:53:14.161440 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:15.161606 kubelet[2061]: E0123 18:53:15.161530 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:15.996302 containerd[1610]: time="2026-01-23T18:53:15.996067894Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:53:16.000975 containerd[1610]: time="2026-01-23T18:53:16.000953182Z" level=info msg="StopContainer for \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\" with timeout 2 (s)" Jan 23 18:53:16.001356 containerd[1610]: time="2026-01-23T18:53:16.001301401Z" level=info msg="Stop container \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\" with signal terminated" Jan 23 18:53:16.007736 systemd-networkd[1508]: lxc_health: Link DOWN Jan 23 18:53:16.007743 systemd-networkd[1508]: lxc_health: Lost carrier Jan 23 18:53:16.020099 systemd[1]: cri-containerd-2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8.scope: Deactivated successfully. Jan 23 18:53:16.020998 systemd[1]: cri-containerd-2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8.scope: Consumed 5.224s CPU time, 121.7M memory peak, 112K read from disk, 13.3M written to disk. 
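The first containerd entry in this block is the trigger for the teardown that follows: a REMOVE event on /etc/cni/net.d/05-cilium.conf leaves the directory without any network config, so CNI reloads keep failing with "cni plugin not initialized" until the replacement Cilium pods recreate it later in the log. Containerd reacts to filesystem change events on that directory; a stand-alone watcher in the same spirit (using the third-party fsnotify package as an assumed stand-in, directory path from the log) could look like:

```go
// Watch /etc/cni/net.d and report removals, mirroring the
// "fs change event(REMOVE ...)" reload trigger in the log.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("CNI config removed: %s (a reload would now find no network config)", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```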
Jan 23 18:53:16.022908 containerd[1610]: time="2026-01-23T18:53:16.022872504Z" level=info msg="received container exit event container_id:\"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\" id:\"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\" pid:2611 exited_at:{seconds:1769194396 nanos:22715638}" Jan 23 18:53:16.039955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8-rootfs.mount: Deactivated successfully. Jan 23 18:53:16.133010 containerd[1610]: time="2026-01-23T18:53:16.132983365Z" level=info msg="StopContainer for \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\" returns successfully" Jan 23 18:53:16.133733 containerd[1610]: time="2026-01-23T18:53:16.133686687Z" level=info msg="StopPodSandbox for \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\"" Jan 23 18:53:16.133873 containerd[1610]: time="2026-01-23T18:53:16.133818175Z" level=info msg="Container to stop \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:53:16.133873 containerd[1610]: time="2026-01-23T18:53:16.133831593Z" level=info msg="Container to stop \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:53:16.133873 containerd[1610]: time="2026-01-23T18:53:16.133839100Z" level=info msg="Container to stop \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:53:16.133873 containerd[1610]: time="2026-01-23T18:53:16.133846425Z" level=info msg="Container to stop \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:53:16.133873 containerd[1610]: time="2026-01-23T18:53:16.133854382Z" level=info msg="Container to stop \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:53:16.139352 systemd[1]: cri-containerd-5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b.scope: Deactivated successfully. Jan 23 18:53:16.144273 containerd[1610]: time="2026-01-23T18:53:16.144247762Z" level=info msg="received sandbox exit event container_id:\"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" id:\"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" exit_status:137 exited_at:{seconds:1769194396 nanos:144055148}" monitor_name=podsandbox Jan 23 18:53:16.160520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b-rootfs.mount: Deactivated successfully. 
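With the CNI config gone, the kubelet tears the old Cilium pod down: the agent container is stopped with a 2-second grace period, the shim reports exit_status 137 for the sandbox, and StopPodSandbox tears the pod's network down before its mounts are released. A trimmed sketch of those two CRI calls (socket path and ids copied from the log; in practice the kubelet, not a hand-written client, drives them):

```go
// Sketch of the teardown calls above: StopContainer with a 2s timeout,
// then StopPodSandbox, which also tears down the sandbox network.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// "StopContainer for ... with timeout 2 (s)" in the log.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8",
		Timeout:     2,
	}); err != nil {
		log.Fatal(err)
	}

	// Stopping the sandbox is what triggers "TearDown network for sandbox ...".
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b",
	}); err != nil {
		log.Fatal(err)
	}
}
```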
Jan 23 18:53:16.162501 kubelet[2061]: E0123 18:53:16.162472 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:16.164633 containerd[1610]: time="2026-01-23T18:53:16.164583106Z" level=info msg="shim disconnected" id=5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b namespace=k8s.io Jan 23 18:53:16.164633 containerd[1610]: time="2026-01-23T18:53:16.164614598Z" level=warning msg="cleaning up after shim disconnected" id=5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b namespace=k8s.io Jan 23 18:53:16.164801 containerd[1610]: time="2026-01-23T18:53:16.164621757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 18:53:16.175553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b-shm.mount: Deactivated successfully. Jan 23 18:53:16.175753 containerd[1610]: time="2026-01-23T18:53:16.175575560Z" level=info msg="received sandbox container exit event sandbox_id:\"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" exit_status:137 exited_at:{seconds:1769194396 nanos:144055148}" monitor_name=criService Jan 23 18:53:16.176818 containerd[1610]: time="2026-01-23T18:53:16.176720841Z" level=info msg="TearDown network for sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" successfully" Jan 23 18:53:16.176818 containerd[1610]: time="2026-01-23T18:53:16.176738169Z" level=info msg="StopPodSandbox for \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" returns successfully" Jan 23 18:53:16.222986 kubelet[2061]: E0123 18:53:16.222937 2061 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 18:53:16.265110 kubelet[2061]: I0123 18:53:16.264586 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2httr\" (UniqueName: \"kubernetes.io/projected/6ad8b71a-de93-48e0-a240-fe44d106d040-kube-api-access-2httr\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265110 kubelet[2061]: I0123 18:53:16.264617 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-xtables-lock\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265110 kubelet[2061]: I0123 18:53:16.264640 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-bpf-maps\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265110 kubelet[2061]: I0123 18:53:16.264652 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-lib-modules\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265110 kubelet[2061]: I0123 18:53:16.264668 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cni-path\") pod 
\"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265110 kubelet[2061]: I0123 18:53:16.264680 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-hostproc\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265455 kubelet[2061]: I0123 18:53:16.264692 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-host-proc-sys-kernel\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265455 kubelet[2061]: I0123 18:53:16.264706 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-run\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265455 kubelet[2061]: I0123 18:53:16.264717 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-cgroup\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265455 kubelet[2061]: I0123 18:53:16.264731 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ad8b71a-de93-48e0-a240-fe44d106d040-hubble-tls\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265455 kubelet[2061]: I0123 18:53:16.264748 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-config-path\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265455 kubelet[2061]: I0123 18:53:16.264761 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-etc-cni-netd\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265608 kubelet[2061]: I0123 18:53:16.264777 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ad8b71a-de93-48e0-a240-fe44d106d040-clustermesh-secrets\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265608 kubelet[2061]: I0123 18:53:16.264789 2061 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-host-proc-sys-net\") pod \"6ad8b71a-de93-48e0-a240-fe44d106d040\" (UID: \"6ad8b71a-de93-48e0-a240-fe44d106d040\") " Jan 23 18:53:16.265608 kubelet[2061]: I0123 18:53:16.264831 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" 
(UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.266719 kubelet[2061]: I0123 18:53:16.266697 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.266911 kubelet[2061]: I0123 18:53:16.266829 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.266911 kubelet[2061]: I0123 18:53:16.266845 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.266911 kubelet[2061]: I0123 18:53:16.266859 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.266911 kubelet[2061]: I0123 18:53:16.266869 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cni-path" (OuterVolumeSpecName: "cni-path") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.266911 kubelet[2061]: I0123 18:53:16.266881 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-hostproc" (OuterVolumeSpecName: "hostproc") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.268773 systemd[1]: var-lib-kubelet-pods-6ad8b71a\x2dde93\x2d48e0\x2da240\x2dfe44d106d040-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2httr.mount: Deactivated successfully. Jan 23 18:53:16.269973 kubelet[2061]: I0123 18:53:16.269956 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad8b71a-de93-48e0-a240-fe44d106d040-kube-api-access-2httr" (OuterVolumeSpecName: "kube-api-access-2httr") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "kube-api-access-2httr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:53:16.270044 kubelet[2061]: I0123 18:53:16.270035 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.270087 kubelet[2061]: I0123 18:53:16.270080 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.273271 systemd[1]: var-lib-kubelet-pods-6ad8b71a\x2dde93\x2d48e0\x2da240\x2dfe44d106d040-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 18:53:16.274667 kubelet[2061]: I0123 18:53:16.274189 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:53:16.274667 kubelet[2061]: I0123 18:53:16.274616 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad8b71a-de93-48e0-a240-fe44d106d040-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:53:16.275078 kubelet[2061]: I0123 18:53:16.275066 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:53:16.279292 kubelet[2061]: I0123 18:53:16.279271 2061 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad8b71a-de93-48e0-a240-fe44d106d040-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6ad8b71a-de93-48e0-a240-fe44d106d040" (UID: "6ad8b71a-de93-48e0-a240-fe44d106d040"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:53:16.341204 kubelet[2061]: I0123 18:53:16.341165 2061 scope.go:117] "RemoveContainer" containerID="2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8" Jan 23 18:53:16.342643 containerd[1610]: time="2026-01-23T18:53:16.342602682Z" level=info msg="RemoveContainer for \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\"" Jan 23 18:53:16.345705 systemd[1]: Removed slice kubepods-burstable-pod6ad8b71a_de93_48e0_a240_fe44d106d040.slice - libcontainer container kubepods-burstable-pod6ad8b71a_de93_48e0_a240_fe44d106d040.slice. 
Jan 23 18:53:16.345787 systemd[1]: kubepods-burstable-pod6ad8b71a_de93_48e0_a240_fe44d106d040.slice: Consumed 5.296s CPU time, 122.1M memory peak, 112K read from disk, 13.3M written to disk. Jan 23 18:53:16.346867 containerd[1610]: time="2026-01-23T18:53:16.346727961Z" level=info msg="RemoveContainer for \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\" returns successfully" Jan 23 18:53:16.347094 kubelet[2061]: I0123 18:53:16.347029 2061 scope.go:117] "RemoveContainer" containerID="2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb" Jan 23 18:53:16.348309 containerd[1610]: time="2026-01-23T18:53:16.348293671Z" level=info msg="RemoveContainer for \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\"" Jan 23 18:53:16.351634 containerd[1610]: time="2026-01-23T18:53:16.351604541Z" level=info msg="RemoveContainer for \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\" returns successfully" Jan 23 18:53:16.351837 kubelet[2061]: I0123 18:53:16.351824 2061 scope.go:117] "RemoveContainer" containerID="e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb" Jan 23 18:53:16.353391 containerd[1610]: time="2026-01-23T18:53:16.353377029Z" level=info msg="RemoveContainer for \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\"" Jan 23 18:53:16.356665 containerd[1610]: time="2026-01-23T18:53:16.356648542Z" level=info msg="RemoveContainer for \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\" returns successfully" Jan 23 18:53:16.356954 kubelet[2061]: I0123 18:53:16.356944 2061 scope.go:117] "RemoveContainer" containerID="52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286" Jan 23 18:53:16.358000 containerd[1610]: time="2026-01-23T18:53:16.357986679Z" level=info msg="RemoveContainer for \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\"" Jan 23 18:53:16.360680 containerd[1610]: time="2026-01-23T18:53:16.360662176Z" level=info msg="RemoveContainer for \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\" returns successfully" Jan 23 18:53:16.360889 kubelet[2061]: I0123 18:53:16.360851 2061 scope.go:117] "RemoveContainer" containerID="d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490" Jan 23 18:53:16.361866 containerd[1610]: time="2026-01-23T18:53:16.361848639Z" level=info msg="RemoveContainer for \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\"" Jan 23 18:53:16.365150 kubelet[2061]: I0123 18:53:16.365037 2061 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-run\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365150 kubelet[2061]: I0123 18:53:16.365055 2061 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-cgroup\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365150 kubelet[2061]: I0123 18:53:16.365063 2061 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ad8b71a-de93-48e0-a240-fe44d106d040-hubble-tls\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365150 kubelet[2061]: I0123 18:53:16.365070 2061 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ad8b71a-de93-48e0-a240-fe44d106d040-cilium-config-path\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365150 
kubelet[2061]: I0123 18:53:16.365077 2061 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-etc-cni-netd\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365150 kubelet[2061]: I0123 18:53:16.365083 2061 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ad8b71a-de93-48e0-a240-fe44d106d040-clustermesh-secrets\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365150 kubelet[2061]: I0123 18:53:16.365089 2061 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-host-proc-sys-net\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365150 kubelet[2061]: I0123 18:53:16.365096 2061 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2httr\" (UniqueName: \"kubernetes.io/projected/6ad8b71a-de93-48e0-a240-fe44d106d040-kube-api-access-2httr\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365360 kubelet[2061]: I0123 18:53:16.365102 2061 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-xtables-lock\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365360 kubelet[2061]: I0123 18:53:16.365112 2061 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-bpf-maps\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365360 kubelet[2061]: I0123 18:53:16.365117 2061 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-lib-modules\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365360 kubelet[2061]: I0123 18:53:16.365123 2061 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-cni-path\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365360 kubelet[2061]: I0123 18:53:16.365129 2061 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-hostproc\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365360 kubelet[2061]: I0123 18:53:16.365136 2061 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ad8b71a-de93-48e0-a240-fe44d106d040-host-proc-sys-kernel\") on node \"10.0.4.9\" DevicePath \"\"" Jan 23 18:53:16.365657 containerd[1610]: time="2026-01-23T18:53:16.365619555Z" level=info msg="RemoveContainer for \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\" returns successfully" Jan 23 18:53:16.365801 kubelet[2061]: I0123 18:53:16.365779 2061 scope.go:117] "RemoveContainer" containerID="2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8" Jan 23 18:53:16.366044 containerd[1610]: time="2026-01-23T18:53:16.366007874Z" level=error msg="ContainerStatus for \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\": not found" Jan 23 18:53:16.366169 kubelet[2061]: E0123 18:53:16.366141 2061 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\": not found" containerID="2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8" Jan 23 18:53:16.366246 kubelet[2061]: I0123 18:53:16.366215 2061 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8"} err="failed to get container status \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"2fbca0945b0b0471bdb0c734bd7ecad36a5ba0247d15a9e948052848fd5746c8\": not found" Jan 23 18:53:16.366294 kubelet[2061]: I0123 18:53:16.366287 2061 scope.go:117] "RemoveContainer" containerID="2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb" Jan 23 18:53:16.366450 containerd[1610]: time="2026-01-23T18:53:16.366431699Z" level=error msg="ContainerStatus for \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\": not found" Jan 23 18:53:16.366609 kubelet[2061]: E0123 18:53:16.366523 2061 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\": not found" containerID="2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb" Jan 23 18:53:16.366609 kubelet[2061]: I0123 18:53:16.366537 2061 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb"} err="failed to get container status \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"2041441428f0b8fb3ad599f5252587d19a61dce8fb977bb67d3cf7cbce0fe7bb\": not found" Jan 23 18:53:16.366609 kubelet[2061]: I0123 18:53:16.366548 2061 scope.go:117] "RemoveContainer" containerID="e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb" Jan 23 18:53:16.366721 containerd[1610]: time="2026-01-23T18:53:16.366703497Z" level=error msg="ContainerStatus for \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\": not found" Jan 23 18:53:16.366852 kubelet[2061]: E0123 18:53:16.366768 2061 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\": not found" containerID="e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb" Jan 23 18:53:16.366852 kubelet[2061]: I0123 18:53:16.366781 2061 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb"} err="failed to get container status \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2dcef11467e916e631407a14ca7c105b05daac787a06d8407f0fc53b62f68eb\": not found" Jan 23 18:53:16.366852 
kubelet[2061]: I0123 18:53:16.366791 2061 scope.go:117] "RemoveContainer" containerID="52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286" Jan 23 18:53:16.366922 containerd[1610]: time="2026-01-23T18:53:16.366872729Z" level=error msg="ContainerStatus for \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\": not found" Jan 23 18:53:16.367030 kubelet[2061]: E0123 18:53:16.366993 2061 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\": not found" containerID="52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286" Jan 23 18:53:16.367030 kubelet[2061]: I0123 18:53:16.367007 2061 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286"} err="failed to get container status \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\": rpc error: code = NotFound desc = an error occurred when try to find container \"52a09f907db412d66ab06bb939be11a389edd7fd5227305bbc5f97287790d286\": not found" Jan 23 18:53:16.367030 kubelet[2061]: I0123 18:53:16.367020 2061 scope.go:117] "RemoveContainer" containerID="d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490" Jan 23 18:53:16.367219 containerd[1610]: time="2026-01-23T18:53:16.367197020Z" level=error msg="ContainerStatus for \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\": not found" Jan 23 18:53:16.367297 kubelet[2061]: E0123 18:53:16.367273 2061 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\": not found" containerID="d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490" Jan 23 18:53:16.367297 kubelet[2061]: I0123 18:53:16.367285 2061 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490"} err="failed to get container status \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3d6ab554c87bac15007ebb80eba447666879b5ad727a66153dee6b75bded490\": not found" Jan 23 18:53:17.040054 systemd[1]: var-lib-kubelet-pods-6ad8b71a\x2dde93\x2d48e0\x2da240\x2dfe44d106d040-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
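Each RemoveContainer above succeeds, but the follow-up ContainerStatus probes now fail with gRPC NotFound, and the kubelet logs the error while treating the container as already deleted. That tolerance boils down to a status-code check; the error text in the sketch is reconstructed from the log purely for demonstration:

```go
// Treat CRI NotFound as "already removed", as the kubelet does above.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a ContainerStatus error means the container no
// longer exists, as opposed to a real runtime failure.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound,
		`an error occurred when try to find container "2fbca0945b0b...": not found`)
	fmt.Println(alreadyGone(err)) // true: safe to consider the delete complete
}
```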
Jan 23 18:53:17.163276 kubelet[2061]: E0123 18:53:17.163228 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:17.329709 kubelet[2061]: I0123 18:53:17.328989 2061 setters.go:543] "Node became not ready" node="10.0.4.9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:53:17Z","lastTransitionTime":"2026-01-23T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 18:53:18.163825 kubelet[2061]: E0123 18:53:18.163769 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:18.225203 kubelet[2061]: I0123 18:53:18.224808 2061 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ad8b71a-de93-48e0-a240-fe44d106d040" path="/var/lib/kubelet/pods/6ad8b71a-de93-48e0-a240-fe44d106d040/volumes" Jan 23 18:53:19.164687 kubelet[2061]: E0123 18:53:19.164644 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:19.870007 systemd[1]: Created slice kubepods-besteffort-pod02a99ec0_90d3_4a1f_9067_c921e468c081.slice - libcontainer container kubepods-besteffort-pod02a99ec0_90d3_4a1f_9067_c921e468c081.slice. Jan 23 18:53:19.882217 systemd[1]: Created slice kubepods-burstable-poda45e344f_4bc0_4778_a67d_4bd1db10adc5.slice - libcontainer container kubepods-burstable-poda45e344f_4bc0_4778_a67d_4bd1db10adc5.slice. Jan 23 18:53:19.885392 kubelet[2061]: I0123 18:53:19.885376 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-cni-path\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885529 kubelet[2061]: I0123 18:53:19.885519 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-host-proc-sys-kernel\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885603 kubelet[2061]: I0123 18:53:19.885592 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-cilium-run\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885658 kubelet[2061]: I0123 18:53:19.885636 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-cilium-cgroup\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885679 kubelet[2061]: I0123 18:53:19.885660 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a45e344f-4bc0-4778-a67d-4bd1db10adc5-hubble-tls\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885788 kubelet[2061]: I0123 
18:53:19.885679 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdqrv\" (UniqueName: \"kubernetes.io/projected/a45e344f-4bc0-4778-a67d-4bd1db10adc5-kube-api-access-fdqrv\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885788 kubelet[2061]: I0123 18:53:19.885693 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-bpf-maps\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885788 kubelet[2061]: I0123 18:53:19.885706 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-hostproc\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885788 kubelet[2061]: I0123 18:53:19.885735 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a45e344f-4bc0-4778-a67d-4bd1db10adc5-clustermesh-secrets\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885788 kubelet[2061]: I0123 18:53:19.885756 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a45e344f-4bc0-4778-a67d-4bd1db10adc5-cilium-config-path\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.885788 kubelet[2061]: I0123 18:53:19.885767 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a45e344f-4bc0-4778-a67d-4bd1db10adc5-cilium-ipsec-secrets\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.886021 kubelet[2061]: I0123 18:53:19.885956 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-host-proc-sys-net\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.886021 kubelet[2061]: I0123 18:53:19.885972 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr4nl\" (UniqueName: \"kubernetes.io/projected/02a99ec0-90d3-4a1f-9067-c921e468c081-kube-api-access-tr4nl\") pod \"cilium-operator-6f9c7c5859-88ttm\" (UID: \"02a99ec0-90d3-4a1f-9067-c921e468c081\") " pod="kube-system/cilium-operator-6f9c7c5859-88ttm" Jan 23 18:53:19.886021 kubelet[2061]: I0123 18:53:19.885985 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-etc-cni-netd\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.886219 kubelet[2061]: I0123 18:53:19.885998 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-lib-modules\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.886219 kubelet[2061]: I0123 18:53:19.886175 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a45e344f-4bc0-4778-a67d-4bd1db10adc5-xtables-lock\") pod \"cilium-7qr94\" (UID: \"a45e344f-4bc0-4778-a67d-4bd1db10adc5\") " pod="kube-system/cilium-7qr94" Jan 23 18:53:19.886219 kubelet[2061]: I0123 18:53:19.886189 2061 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02a99ec0-90d3-4a1f-9067-c921e468c081-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-88ttm\" (UID: \"02a99ec0-90d3-4a1f-9067-c921e468c081\") " pod="kube-system/cilium-operator-6f9c7c5859-88ttm" Jan 23 18:53:20.164853 kubelet[2061]: E0123 18:53:20.164734 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:20.174581 containerd[1610]: time="2026-01-23T18:53:20.174550160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-88ttm,Uid:02a99ec0-90d3-4a1f-9067-c921e468c081,Namespace:kube-system,Attempt:0,}" Jan 23 18:53:20.187093 containerd[1610]: time="2026-01-23T18:53:20.187024084Z" level=info msg="connecting to shim 94da2d3fe188dda1265a11907d33c5acf6331bafaece7add729770d8667dc6cf" address="unix:///run/containerd/s/b8be5fe5fa83d8248e7be53d2295ab83184eba93287220b17fd17243e21a95d8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:20.190248 containerd[1610]: time="2026-01-23T18:53:20.190065544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7qr94,Uid:a45e344f-4bc0-4778-a67d-4bd1db10adc5,Namespace:kube-system,Attempt:0,}" Jan 23 18:53:20.207719 containerd[1610]: time="2026-01-23T18:53:20.207686923Z" level=info msg="connecting to shim fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa" address="unix:///run/containerd/s/0329a56e249001c86e8b3d5d520f9d001969c1f7def9fe2da016c26994785eba" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:53:20.211811 systemd[1]: Started cri-containerd-94da2d3fe188dda1265a11907d33c5acf6331bafaece7add729770d8667dc6cf.scope - libcontainer container 94da2d3fe188dda1265a11907d33c5acf6331bafaece7add729770d8667dc6cf. Jan 23 18:53:20.234801 systemd[1]: Started cri-containerd-fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa.scope - libcontainer container fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa. 
Jan 23 18:53:20.262754 containerd[1610]: time="2026-01-23T18:53:20.262702648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-88ttm,Uid:02a99ec0-90d3-4a1f-9067-c921e468c081,Namespace:kube-system,Attempt:0,} returns sandbox id \"94da2d3fe188dda1265a11907d33c5acf6331bafaece7add729770d8667dc6cf\"" Jan 23 18:53:20.264216 containerd[1610]: time="2026-01-23T18:53:20.264057942Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 18:53:20.267404 containerd[1610]: time="2026-01-23T18:53:20.267384225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7qr94,Uid:a45e344f-4bc0-4778-a67d-4bd1db10adc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\"" Jan 23 18:53:20.270615 containerd[1610]: time="2026-01-23T18:53:20.270597018Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 18:53:20.282791 containerd[1610]: time="2026-01-23T18:53:20.282771720Z" level=info msg="Container 7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:20.288206 containerd[1610]: time="2026-01-23T18:53:20.288184237Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9\"" Jan 23 18:53:20.288711 containerd[1610]: time="2026-01-23T18:53:20.288670194Z" level=info msg="StartContainer for \"7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9\"" Jan 23 18:53:20.289267 containerd[1610]: time="2026-01-23T18:53:20.289240087Z" level=info msg="connecting to shim 7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9" address="unix:///run/containerd/s/0329a56e249001c86e8b3d5d520f9d001969c1f7def9fe2da016c26994785eba" protocol=ttrpc version=3 Jan 23 18:53:20.305881 systemd[1]: Started cri-containerd-7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9.scope - libcontainer container 7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9. Jan 23 18:53:20.330911 containerd[1610]: time="2026-01-23T18:53:20.330884667Z" level=info msg="StartContainer for \"7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9\" returns successfully" Jan 23 18:53:20.335598 systemd[1]: cri-containerd-7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9.scope: Deactivated successfully. 
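
Here the mount-cgroup init container starts and its systemd scope is deactivated as soon as the process exits; containerd then publishes the corresponding /tasks/exit event, which is quoted in the next entry. A small sketch of watching those exit events with the containerd Go client follows, assuming the 1.x client module path and the k8s.io namespace used by the CRI plugin; the filter expression is an assumption.

package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Subscribe to task exit events only (filter syntax assumed).
	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-ch:
			// env.Event carries the container id, pid, exit status and exited_at,
			// the same fields printed in the "received container exit event" lines.
			log.Printf("namespace=%s topic=%s", env.Namespace, env.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
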
Jan 23 18:53:20.337455 containerd[1610]: time="2026-01-23T18:53:20.337384310Z" level=info msg="received container exit event container_id:\"7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9\" id:\"7dc658a14f2e0224e700ff5e6690b10f4d6097f539c433c00b5d6d6b681565f9\" pid:3721 exited_at:{seconds:1769194400 nanos:336621006}" Jan 23 18:53:21.165646 kubelet[2061]: E0123 18:53:21.165585 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:21.224056 kubelet[2061]: E0123 18:53:21.224010 2061 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 18:53:21.359327 containerd[1610]: time="2026-01-23T18:53:21.358904177Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 18:53:21.367995 containerd[1610]: time="2026-01-23T18:53:21.365906620Z" level=info msg="Container fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:21.378254 containerd[1610]: time="2026-01-23T18:53:21.378215539Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d\"" Jan 23 18:53:21.378760 containerd[1610]: time="2026-01-23T18:53:21.378737279Z" level=info msg="StartContainer for \"fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d\"" Jan 23 18:53:21.379371 containerd[1610]: time="2026-01-23T18:53:21.379354265Z" level=info msg="connecting to shim fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d" address="unix:///run/containerd/s/0329a56e249001c86e8b3d5d520f9d001969c1f7def9fe2da016c26994785eba" protocol=ttrpc version=3 Jan 23 18:53:21.398761 systemd[1]: Started cri-containerd-fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d.scope - libcontainer container fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d. Jan 23 18:53:21.421236 containerd[1610]: time="2026-01-23T18:53:21.421146874Z" level=info msg="StartContainer for \"fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d\" returns successfully" Jan 23 18:53:21.424886 systemd[1]: cri-containerd-fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d.scope: Deactivated successfully. Jan 23 18:53:21.426196 containerd[1610]: time="2026-01-23T18:53:21.426066599Z" level=info msg="received container exit event container_id:\"fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d\" id:\"fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d\" pid:3764 exited_at:{seconds:1769194401 nanos:425567102}" Jan 23 18:53:21.446697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa2802e45fbe02cc8722d2a8f1acb5a02a3725902d3f87738cd0181262d5c41d-rootfs.mount: Deactivated successfully. 
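
The kubelet keeps reporting the runtime network as not ready because no CNI config exists yet; it only appears once cilium-agent is running, and the init containers (mount-cgroup, apply-sysctl-overwrites, and the ones that follow) must each exit 0 before the next starts. A short client-go sketch that prints this progression for the pod named in the log; running in-cluster with suitable RBAC is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: executed inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-7qr94", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Init containers complete strictly in order; each must terminate with exit code 0.
	for _, s := range pod.Status.InitContainerStatuses {
		fmt.Printf("init %-25s ready=%v state=%+v\n", s.Name, s.Ready, s.State)
	}
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("main %-25s ready=%v restarts=%d\n", s.Name, s.Ready, s.RestartCount)
	}
}
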
Jan 23 18:53:21.978579 containerd[1610]: time="2026-01-23T18:53:21.978519930Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:21.979355 containerd[1610]: time="2026-01-23T18:53:21.979337405Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 18:53:21.979865 containerd[1610]: time="2026-01-23T18:53:21.979850075Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:53:21.980815 containerd[1610]: time="2026-01-23T18:53:21.980790693Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.716707363s" Jan 23 18:53:21.980892 containerd[1610]: time="2026-01-23T18:53:21.980825595Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 18:53:21.984225 containerd[1610]: time="2026-01-23T18:53:21.984200109Z" level=info msg="CreateContainer within sandbox \"94da2d3fe188dda1265a11907d33c5acf6331bafaece7add729770d8667dc6cf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 18:53:21.990661 containerd[1610]: time="2026-01-23T18:53:21.990373809Z" level=info msg="Container d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:22.002914 containerd[1610]: time="2026-01-23T18:53:22.002800098Z" level=info msg="CreateContainer within sandbox \"94da2d3fe188dda1265a11907d33c5acf6331bafaece7add729770d8667dc6cf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee\"" Jan 23 18:53:22.003552 containerd[1610]: time="2026-01-23T18:53:22.003447764Z" level=info msg="StartContainer for \"d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee\"" Jan 23 18:53:22.004271 containerd[1610]: time="2026-01-23T18:53:22.004251547Z" level=info msg="connecting to shim d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee" address="unix:///run/containerd/s/b8be5fe5fa83d8248e7be53d2295ab83184eba93287220b17fd17243e21a95d8" protocol=ttrpc version=3 Jan 23 18:53:22.025822 systemd[1]: Started cri-containerd-d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee.scope - libcontainer container d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee. 
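
The operator image is pinned by digest, so the pull above resolves quay.io/cilium/operator-generic@sha256:b296… and logs the bytes read and the elapsed time before CreateContainer runs. A sketch of the same check-then-pull through the CRI ImageService; the socket path is assumed, and pulling an image that is already present simply returns its reference.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

const operatorImage = "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Is the digest-pinned image already present on the node?
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: operatorImage},
	})
	if err != nil {
		log.Fatal(err)
	}
	if st.Image == nil {
		// Not present: pull it, as the kubelet did above.
		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
			Image: &runtimeapi.ImageSpec{Image: operatorImage},
		}); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("image present:", operatorImage)
}
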
Jan 23 18:53:22.056378 containerd[1610]: time="2026-01-23T18:53:22.056340820Z" level=info msg="StartContainer for \"d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee\" returns successfully" Jan 23 18:53:22.166143 kubelet[2061]: E0123 18:53:22.166026 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:22.365668 containerd[1610]: time="2026-01-23T18:53:22.365304059Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 18:53:22.376056 kubelet[2061]: I0123 18:53:22.375691 2061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-88ttm" podStartSLOduration=1.657554775 podStartE2EDuration="3.375673133s" podCreationTimestamp="2026-01-23 18:53:19 +0000 UTC" firstStartedPulling="2026-01-23 18:53:20.26358292 +0000 UTC m=+54.511441250" lastFinishedPulling="2026-01-23 18:53:21.981701285 +0000 UTC m=+56.229559608" observedRunningTime="2026-01-23 18:53:22.37534109 +0000 UTC m=+56.623199431" watchObservedRunningTime="2026-01-23 18:53:22.375673133 +0000 UTC m=+56.623531470" Jan 23 18:53:22.378214 containerd[1610]: time="2026-01-23T18:53:22.378179933Z" level=info msg="Container 8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:22.391614 containerd[1610]: time="2026-01-23T18:53:22.391564163Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd\"" Jan 23 18:53:22.392982 containerd[1610]: time="2026-01-23T18:53:22.392956318Z" level=info msg="StartContainer for \"8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd\"" Jan 23 18:53:22.394523 containerd[1610]: time="2026-01-23T18:53:22.394470810Z" level=info msg="connecting to shim 8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd" address="unix:///run/containerd/s/0329a56e249001c86e8b3d5d520f9d001969c1f7def9fe2da016c26994785eba" protocol=ttrpc version=3 Jan 23 18:53:22.414780 systemd[1]: Started cri-containerd-8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd.scope - libcontainer container 8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd. Jan 23 18:53:22.496849 systemd[1]: cri-containerd-8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd.scope: Deactivated successfully. Jan 23 18:53:22.500495 containerd[1610]: time="2026-01-23T18:53:22.500458982Z" level=info msg="received container exit event container_id:\"8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd\" id:\"8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd\" pid:3856 exited_at:{seconds:1769194402 nanos:499919107}" Jan 23 18:53:22.501799 containerd[1610]: time="2026-01-23T18:53:22.501677029Z" level=info msg="StartContainer for \"8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd\" returns successfully" Jan 23 18:53:22.992697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fe1d6f976b3608d451114e5caade934ce2e97c93705c7529fd0701b0d70ffdd-rootfs.mount: Deactivated successfully. 
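
The pod_startup_latency_tracker entry above can be reproduced from its own timestamps: the 3.375 s end-to-end figure is observedRunningTime minus podCreationTimestamp, and the SLO-adjusted figure subtracts the time spent pulling the operator image. A tiny standard-library sketch of that arithmetic, with the values copied from the log; the kubelet uses its monotonic readings (the m=+… values), so the last digits differ slightly.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values copied from the pod_startup_latency_tracker entry for the operator pod.
	created := parse("2026-01-23 18:53:19 +0000 UTC")
	firstPull := parse("2026-01-23 18:53:20.26358292 +0000 UTC")
	lastPull := parse("2026-01-23 18:53:21.981701285 +0000 UTC")
	running := parse("2026-01-23 18:53:22.375673133 +0000 UTC")

	e2e := running.Sub(created)        // podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // time spent pulling the operator image
	slo := e2e - pulling               // approximately podStartSLOduration

	fmt.Println("e2e:", e2e, "pulling:", pulling, "slo:", slo)
}
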
Jan 23 18:53:23.166992 kubelet[2061]: E0123 18:53:23.166944 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:23.369499 containerd[1610]: time="2026-01-23T18:53:23.369432220Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 18:53:23.379380 containerd[1610]: time="2026-01-23T18:53:23.379356499Z" level=info msg="Container 190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:23.386743 containerd[1610]: time="2026-01-23T18:53:23.386719357Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b\"" Jan 23 18:53:23.387262 containerd[1610]: time="2026-01-23T18:53:23.387215663Z" level=info msg="StartContainer for \"190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b\"" Jan 23 18:53:23.387907 containerd[1610]: time="2026-01-23T18:53:23.387889500Z" level=info msg="connecting to shim 190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b" address="unix:///run/containerd/s/0329a56e249001c86e8b3d5d520f9d001969c1f7def9fe2da016c26994785eba" protocol=ttrpc version=3 Jan 23 18:53:23.410752 systemd[1]: Started cri-containerd-190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b.scope - libcontainer container 190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b. Jan 23 18:53:23.432071 systemd[1]: cri-containerd-190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b.scope: Deactivated successfully. Jan 23 18:53:23.434098 containerd[1610]: time="2026-01-23T18:53:23.434066166Z" level=info msg="received container exit event container_id:\"190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b\" id:\"190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b\" pid:3897 exited_at:{seconds:1769194403 nanos:433119737}" Jan 23 18:53:23.440547 containerd[1610]: time="2026-01-23T18:53:23.440527880Z" level=info msg="StartContainer for \"190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b\" returns successfully" Jan 23 18:53:23.450174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-190555d993be2686f5af04a5c937fa8e0e7b86de69e8dc7bdd387d0b376ab63b-rootfs.mount: Deactivated successfully. Jan 23 18:53:24.167109 kubelet[2061]: E0123 18:53:24.167063 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:24.373294 containerd[1610]: time="2026-01-23T18:53:24.373264426Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 18:53:24.387999 containerd[1610]: time="2026-01-23T18:53:24.387964183Z" level=info msg="Container 993f5ddba339b98d815e28c9784f91fa5a05f0dceacb2923c3935a84a5897904: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:53:24.389735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257405629.mount: Deactivated successfully. 
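
By this point four init containers have run to completion inside the fea0a9cf… sandbox and the long-lived cilium-agent container is being created; all of them share the same shim socket. A sketch that lists every container belonging to that sandbox through the CRI, with the sandbox id copied from the log and the socket path assumed.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// All containers (init and regular, running or exited) of the cilium-7qr94 sandbox.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			PodSandboxId: "fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}
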
Jan 23 18:53:24.397542 containerd[1610]: time="2026-01-23T18:53:24.397501588Z" level=info msg="CreateContainer within sandbox \"fea0a9cfe082b204618c63471571382bb425af0393673c23e6041d9df1815aaa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"993f5ddba339b98d815e28c9784f91fa5a05f0dceacb2923c3935a84a5897904\"" Jan 23 18:53:24.398785 containerd[1610]: time="2026-01-23T18:53:24.398032909Z" level=info msg="StartContainer for \"993f5ddba339b98d815e28c9784f91fa5a05f0dceacb2923c3935a84a5897904\"" Jan 23 18:53:24.398785 containerd[1610]: time="2026-01-23T18:53:24.398738763Z" level=info msg="connecting to shim 993f5ddba339b98d815e28c9784f91fa5a05f0dceacb2923c3935a84a5897904" address="unix:///run/containerd/s/0329a56e249001c86e8b3d5d520f9d001969c1f7def9fe2da016c26994785eba" protocol=ttrpc version=3 Jan 23 18:53:24.425785 systemd[1]: Started cri-containerd-993f5ddba339b98d815e28c9784f91fa5a05f0dceacb2923c3935a84a5897904.scope - libcontainer container 993f5ddba339b98d815e28c9784f91fa5a05f0dceacb2923c3935a84a5897904. Jan 23 18:53:24.463151 containerd[1610]: time="2026-01-23T18:53:24.463123127Z" level=info msg="StartContainer for \"993f5ddba339b98d815e28c9784f91fa5a05f0dceacb2923c3935a84a5897904\" returns successfully" Jan 23 18:53:24.711652 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256)) Jan 23 18:53:25.167862 kubelet[2061]: E0123 18:53:25.167803 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:25.408307 kubelet[2061]: I0123 18:53:25.408040 2061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7qr94" podStartSLOduration=6.408027279 podStartE2EDuration="6.408027279s" podCreationTimestamp="2026-01-23 18:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:53:25.406383664 +0000 UTC m=+59.654242000" watchObservedRunningTime="2026-01-23 18:53:25.408027279 +0000 UTC m=+59.655885624" Jan 23 18:53:26.128990 kubelet[2061]: E0123 18:53:26.128835 2061 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:26.147097 containerd[1610]: time="2026-01-23T18:53:26.146795645Z" level=info msg="StopPodSandbox for \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\"" Jan 23 18:53:26.147097 containerd[1610]: time="2026-01-23T18:53:26.146906396Z" level=info msg="TearDown network for sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" successfully" Jan 23 18:53:26.147097 containerd[1610]: time="2026-01-23T18:53:26.146916006Z" level=info msg="StopPodSandbox for \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" returns successfully" Jan 23 18:53:26.148026 containerd[1610]: time="2026-01-23T18:53:26.147995625Z" level=info msg="RemovePodSandbox for \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\"" Jan 23 18:53:26.148104 containerd[1610]: time="2026-01-23T18:53:26.148093842Z" level=info msg="Forcibly stopping sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\"" Jan 23 18:53:26.148201 containerd[1610]: time="2026-01-23T18:53:26.148192280Z" level=info msg="TearDown network for sandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" successfully" Jan 23 18:53:26.149078 containerd[1610]: time="2026-01-23T18:53:26.149060033Z" level=info msg="Ensure that 
sandbox 5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b in task-service has been cleanup successfully" Jan 23 18:53:26.154985 containerd[1610]: time="2026-01-23T18:53:26.154908139Z" level=info msg="RemovePodSandbox \"5ce6b63367d4ac16912d00f91a659392745afaac651a8a99b0d944b644982e1b\" returns successfully" Jan 23 18:53:26.168845 kubelet[2061]: E0123 18:53:26.168810 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:27.169070 kubelet[2061]: E0123 18:53:27.169023 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:27.352239 systemd-networkd[1508]: lxc_health: Link UP Jan 23 18:53:27.353733 systemd-networkd[1508]: lxc_health: Gained carrier Jan 23 18:53:28.169295 kubelet[2061]: E0123 18:53:28.169237 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:29.169774 kubelet[2061]: E0123 18:53:29.169704 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:29.308935 systemd-networkd[1508]: lxc_health: Gained IPv6LL Jan 23 18:53:30.170366 kubelet[2061]: E0123 18:53:30.170298 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:31.170518 kubelet[2061]: E0123 18:53:31.170463 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:32.171044 kubelet[2061]: E0123 18:53:32.171005 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:33.172212 kubelet[2061]: E0123 18:53:33.172137 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:34.172686 kubelet[2061]: E0123 18:53:34.172622 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:35.173574 kubelet[2061]: E0123 18:53:35.173525 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:36.173760 kubelet[2061]: E0123 18:53:36.173718 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:37.174176 kubelet[2061]: E0123 18:53:37.174128 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:38.175059 kubelet[2061]: E0123 18:53:38.175014 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:39.175563 kubelet[2061]: E0123 18:53:39.175495 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:40.176053 kubelet[2061]: E0123 18:53:40.176001 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:41.176778 kubelet[2061]: E0123 18:53:41.176708 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:42.177279 kubelet[2061]: E0123 18:53:42.177207 2061 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:43.178013 kubelet[2061]: E0123 18:53:43.177950 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:44.178390 kubelet[2061]: E0123 18:53:44.178324 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:45.179158 kubelet[2061]: E0123 18:53:45.179092 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:46.128878 kubelet[2061]: E0123 18:53:46.128816 2061 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:46.179651 kubelet[2061]: E0123 18:53:46.179583 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:47.180010 kubelet[2061]: E0123 18:53:47.179932 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:48.180812 kubelet[2061]: E0123 18:53:48.180746 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:49.181207 kubelet[2061]: E0123 18:53:49.181146 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:50.181659 kubelet[2061]: E0123 18:53:50.181574 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:51.182069 kubelet[2061]: E0123 18:53:51.182004 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:52.183117 kubelet[2061]: E0123 18:53:52.183051 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:53.183339 kubelet[2061]: E0123 18:53:53.183271 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:54.184779 kubelet[2061]: E0123 18:53:54.184726 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:55.185266 kubelet[2061]: E0123 18:53:55.185201 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:56.186175 kubelet[2061]: E0123 18:53:56.186058 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:57.186925 kubelet[2061]: E0123 18:53:57.186846 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:58.187107 kubelet[2061]: E0123 18:53:58.187037 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:53:58.841355 kubelet[2061]: E0123 18:53:58.841307 2061 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.4.81:55936->10.0.4.50:2379: read: connection timed out" Jan 23 18:53:59.187409 kubelet[2061]: E0123 18:53:59.187271 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 23 18:54:00.139015 systemd[1]: cri-containerd-d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee.scope: Deactivated successfully. Jan 23 18:54:00.140907 containerd[1610]: time="2026-01-23T18:54:00.140870175Z" level=info msg="received container exit event container_id:\"d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee\" id:\"d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee\" pid:3824 exit_status:1 exited_at:{seconds:1769194440 nanos:139954204}" Jan 23 18:54:00.161118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee-rootfs.mount: Deactivated successfully. Jan 23 18:54:00.188303 kubelet[2061]: E0123 18:54:00.188137 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:00.447478 kubelet[2061]: I0123 18:54:00.447340 2061 scope.go:117] "RemoveContainer" containerID="d806c7863c68c22481ac3224c7021e4a815412469c407749a8a31ad31d8ca8ee" Jan 23 18:54:00.449278 containerd[1610]: time="2026-01-23T18:54:00.449239317Z" level=info msg="CreateContainer within sandbox \"94da2d3fe188dda1265a11907d33c5acf6331bafaece7add729770d8667dc6cf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Jan 23 18:54:00.457000 containerd[1610]: time="2026-01-23T18:54:00.456969956Z" level=info msg="Container 2aeb5fb578a2b24fc1bd1287a7fe9f50f191c0ee503fb141e63acbbcdc2106a8: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:54:00.463653 containerd[1610]: time="2026-01-23T18:54:00.463608462Z" level=info msg="CreateContainer within sandbox \"94da2d3fe188dda1265a11907d33c5acf6331bafaece7add729770d8667dc6cf\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"2aeb5fb578a2b24fc1bd1287a7fe9f50f191c0ee503fb141e63acbbcdc2106a8\"" Jan 23 18:54:00.464239 containerd[1610]: time="2026-01-23T18:54:00.464202263Z" level=info msg="StartContainer for \"2aeb5fb578a2b24fc1bd1287a7fe9f50f191c0ee503fb141e63acbbcdc2106a8\"" Jan 23 18:54:00.465194 containerd[1610]: time="2026-01-23T18:54:00.465124899Z" level=info msg="connecting to shim 2aeb5fb578a2b24fc1bd1287a7fe9f50f191c0ee503fb141e63acbbcdc2106a8" address="unix:///run/containerd/s/b8be5fe5fa83d8248e7be53d2295ab83184eba93287220b17fd17243e21a95d8" protocol=ttrpc version=3 Jan 23 18:54:00.485788 systemd[1]: Started cri-containerd-2aeb5fb578a2b24fc1bd1287a7fe9f50f191c0ee503fb141e63acbbcdc2106a8.scope - libcontainer container 2aeb5fb578a2b24fc1bd1287a7fe9f50f191c0ee503fb141e63acbbcdc2106a8. 
Jan 23 18:54:00.515685 containerd[1610]: time="2026-01-23T18:54:00.515621312Z" level=info msg="StartContainer for \"2aeb5fb578a2b24fc1bd1287a7fe9f50f191c0ee503fb141e63acbbcdc2106a8\" returns successfully" Jan 23 18:54:01.189112 kubelet[2061]: E0123 18:54:01.189023 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:02.189942 kubelet[2061]: E0123 18:54:02.189899 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:02.744695 kubelet[2061]: E0123 18:54:02.744378 2061 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.4.81:55572->10.0.4.50:2379: read: connection timed out" event="&Event{ObjectMeta:{cilium-operator-6f9c7c5859-88ttm.188d70f851809e01 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:cilium-operator-6f9c7c5859-88ttm,UID:02a99ec0-90d3-4a1f-9067-c921e468c081,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{cilium-operator},},Reason:Pulled,Message:Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine,Source:EventSource{Component:kubelet,Host:10.0.4.9,},FirstTimestamp:2026-01-23 18:54:00.448056833 +0000 UTC m=+94.695915170,LastTimestamp:2026-01-23 18:54:00.448056833 +0000 UTC m=+94.695915170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.4.9,}" Jan 23 18:54:03.190780 kubelet[2061]: E0123 18:54:03.190735 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:04.191296 kubelet[2061]: E0123 18:54:04.191253 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:05.192346 kubelet[2061]: E0123 18:54:05.192278 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:06.129328 kubelet[2061]: E0123 18:54:06.129265 2061 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:06.193475 kubelet[2061]: E0123 18:54:06.193425 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:07.194585 kubelet[2061]: E0123 18:54:07.194515 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:08.194980 kubelet[2061]: E0123 18:54:08.194911 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:08.842565 kubelet[2061]: E0123 18:54:08.842231 2061 controller.go:195] "Failed to update lease" err="Put \"https://10.0.4.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.4.9?timeout=10s\": context deadline exceeded" Jan 23 18:54:09.196157 kubelet[2061]: E0123 18:54:09.196021 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:10.197398 kubelet[2061]: E0123 18:54:10.197333 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:11.198436 kubelet[2061]: E0123 18:54:11.198375 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:12.198873 kubelet[2061]: E0123 18:54:12.198811 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:13.199961 kubelet[2061]: E0123 18:54:13.199903 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:14.200892 kubelet[2061]: E0123 18:54:14.200844 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:15.201971 kubelet[2061]: E0123 18:54:15.201926 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:16.202213 kubelet[2061]: E0123 18:54:16.202172 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:17.202465 kubelet[2061]: E0123 18:54:17.202414 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:18.203547 kubelet[2061]: E0123 18:54:18.203496 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:18.842975 kubelet[2061]: E0123 18:54:18.842850 2061 controller.go:195] "Failed to update lease" err="Put \"https://10.0.4.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.4.9?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:54:19.204083 kubelet[2061]: E0123 18:54:19.203953 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:20.205019 kubelet[2061]: E0123 18:54:20.204975 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:21.205506 kubelet[2061]: E0123 18:54:21.205452 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:22.206290 kubelet[2061]: E0123 18:54:22.206248 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:23.206824 kubelet[2061]: E0123 18:54:23.206774 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:24.207754 kubelet[2061]: E0123 18:54:24.207610 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:25.208216 kubelet[2061]: E0123 18:54:25.208179 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:26.129342 kubelet[2061]: E0123 18:54:26.129300 2061 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:26.208494 kubelet[2061]: E0123 18:54:26.208449 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:27.209650 kubelet[2061]: E0123 18:54:27.209584 2061 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:28.210610 kubelet[2061]: E0123 18:54:28.210576 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:28.843975 kubelet[2061]: E0123 18:54:28.843921 2061 controller.go:195] "Failed to update lease" err="Put \"https://10.0.4.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.4.9?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:54:29.211008 kubelet[2061]: E0123 18:54:29.210899 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:30.211709 kubelet[2061]: E0123 18:54:30.211658 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:54:31.212185 kubelet[2061]: E0123 18:54:31.212133 2061 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"