Jan 23 00:59:14.768343 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 00:59:14.768366 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:59:14.768375 kernel: BIOS-provided physical RAM map:
Jan 23 00:59:14.768381 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 00:59:14.768387 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 00:59:14.768392 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 00:59:14.768401 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 00:59:14.768407 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 00:59:14.768412 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 00:59:14.768418 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 00:59:14.768424 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable
Jan 23 00:59:14.768429 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Jan 23 00:59:14.768435 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Jan 23 00:59:14.768441 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Jan 23 00:59:14.768461 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Jan 23 00:59:14.768467 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 00:59:14.768474 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 00:59:14.768480 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 00:59:14.768486 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable
Jan 23 00:59:14.768492 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Jan 23 00:59:14.768498 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Jan 23 00:59:14.768505 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable
Jan 23 00:59:14.768511 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Jan 23 00:59:14.768517 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Jan 23 00:59:14.768523 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 00:59:14.768529 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 00:59:14.768535 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 00:59:14.768541 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 23 00:59:14.768547 kernel: NX (Execute Disable) protection: active
Jan 23 00:59:14.768553 kernel: APIC: Static calls initialized
Jan 23 00:59:14.768559 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable
Jan 23 00:59:14.768565 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable
Jan 23 00:59:14.768571 kernel: extended physical RAM map:
Jan 23 00:59:14.768578 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 00:59:14.768584 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 00:59:14.768590 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 00:59:14.768596 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 00:59:14.768602 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 00:59:14.768608 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 00:59:14.768614 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 00:59:14.768622 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable
Jan 23 00:59:14.768630 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable
Jan 23 00:59:14.768636 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable
Jan 23 00:59:14.768642 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable
Jan 23 00:59:14.768649 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable
Jan 23 00:59:14.768655 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Jan 23 00:59:14.768661 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Jan 23 00:59:14.768667 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Jan 23 00:59:14.768675 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Jan 23 00:59:14.768681 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 00:59:14.768687 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 00:59:14.768694 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 00:59:14.768700 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable
Jan 23 00:59:14.768706 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Jan 23 00:59:14.768712 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Jan 23 00:59:14.768718 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable
Jan 23 00:59:14.768724 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Jan 23 00:59:14.768730 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Jan 23 00:59:14.768736 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 00:59:14.768744 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 00:59:14.768750 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 00:59:14.768757 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 23 00:59:14.768763 kernel: efi: EFI v2.7 by EDK II
Jan 23 00:59:14.768769 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018
Jan 23 00:59:14.768775 kernel: random: crng init done
Jan 23 00:59:14.768781 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 00:59:14.768787 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 00:59:14.768794 kernel: secureboot: Secure boot disabled
Jan 23 00:59:14.768800 kernel: SMBIOS 2.8 present.
Jan 23 00:59:14.768806 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 23 00:59:14.768812 kernel: DMI: Memory slots populated: 1/1
Jan 23 00:59:14.768820 kernel: Hypervisor detected: KVM
Jan 23 00:59:14.768826 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Jan 23 00:59:14.768832 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 00:59:14.768838 kernel: kvm-clock: using sched offset of 5972667538 cycles
Jan 23 00:59:14.768845 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 00:59:14.768851 kernel: tsc: Detected 2294.590 MHz processor
Jan 23 00:59:14.768858 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 00:59:14.768864 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 00:59:14.768871 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000
Jan 23 00:59:14.768877 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 00:59:14.768885 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 00:59:14.768892 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Jan 23 00:59:14.768898 kernel: Using GB pages for direct mapping
Jan 23 00:59:14.768905 kernel: ACPI: Early table checksum verification disabled
Jan 23 00:59:14.768911 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 23 00:59:14.768917 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Jan 23 00:59:14.768924 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:59:14.768930 kernel: ACPI: DSDT 0x000000007FB78000 00423C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:59:14.768936 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 23 00:59:14.768944 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:59:14.768950 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:59:14.768957 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:59:14.768963 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 00:59:14.768969 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3]
Jan 23 00:59:14.768976 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c23b]
Jan 23 00:59:14.768982 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 23 00:59:14.768988 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f]
Jan 23 00:59:14.768995 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b]
Jan 23 00:59:14.769003 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027]
Jan 23 00:59:14.769009 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037]
Jan 23 00:59:14.769015 kernel: No NUMA configuration found
Jan 23 00:59:14.769022 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 23 00:59:14.769028 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jan 23 00:59:14.769034 kernel: Zone ranges:
Jan 23 00:59:14.769041 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 00:59:14.769047 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 00:59:14.769053 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 00:59:14.769061 kernel: Device empty
Jan 23 00:59:14.769068 kernel: Movable zone start for each node
Jan 23 00:59:14.769074 kernel: Early memory node ranges
Jan 23 00:59:14.769080 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 00:59:14.769086 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 23 00:59:14.769093 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 23 00:59:14.769099 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 23 00:59:14.769105 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff]
Jan 23 00:59:14.769530 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff]
Jan 23 00:59:14.769545 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff]
Jan 23 00:59:14.769561 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff]
Jan 23 00:59:14.769568 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff]
Jan 23 00:59:14.769575 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 00:59:14.769584 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 23 00:59:14.769591 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 00:59:14.769598 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 00:59:14.769605 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 23 00:59:14.769612 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 00:59:14.769621 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 23 00:59:14.769628 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 00:59:14.769635 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges
Jan 23 00:59:14.769642 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 00:59:14.769649 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 23 00:59:14.769656 kernel: On node 0, zone Normal: 276 pages in unavailable ranges
Jan 23 00:59:14.769664 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 00:59:14.769671 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 00:59:14.769678 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 00:59:14.769687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 00:59:14.769694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 00:59:14.769701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 00:59:14.769708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 00:59:14.769715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 00:59:14.769722 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 00:59:14.769729 kernel: TSC deadline timer available
Jan 23 00:59:14.769736 kernel: CPU topo: Max. logical packages: 2
Jan 23 00:59:14.769743 kernel: CPU topo: Max. logical dies: 2
Jan 23 00:59:14.769752 kernel: CPU topo: Max. dies per package: 1
Jan 23 00:59:14.769759 kernel: CPU topo: Max. threads per core: 1
Jan 23 00:59:14.769766 kernel: CPU topo: Num. cores per package: 1
Jan 23 00:59:14.769773 kernel: CPU topo: Num. threads per package: 1
Jan 23 00:59:14.769780 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 00:59:14.769787 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 00:59:14.769794 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 00:59:14.769801 kernel: kvm-guest: setup PV sched yield
Jan 23 00:59:14.769808 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 23 00:59:14.769816 kernel: Booting paravirtualized kernel on KVM
Jan 23 00:59:14.769823 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 00:59:14.769830 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 00:59:14.769837 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 00:59:14.769844 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 00:59:14.769851 kernel: pcpu-alloc: [0] 0 1
Jan 23 00:59:14.769858 kernel: kvm-guest: PV spinlocks enabled
Jan 23 00:59:14.769864 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 00:59:14.769872 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:59:14.769881 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 00:59:14.769888 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 00:59:14.769895 kernel: Fallback order for Node 0: 0
Jan 23 00:59:14.769902 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694
Jan 23 00:59:14.769909 kernel: Policy zone: Normal
Jan 23 00:59:14.769915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 00:59:14.769922 kernel: software IO TLB: area num 2.
Jan 23 00:59:14.769929 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 00:59:14.769938 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 00:59:14.769945 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 00:59:14.769951 kernel: Dynamic Preempt: voluntary
Jan 23 00:59:14.769958 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 00:59:14.769965 kernel: rcu: RCU event tracing is enabled.
Jan 23 00:59:14.769972 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 00:59:14.769979 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 00:59:14.769986 kernel: Rude variant of Tasks RCU enabled.
Jan 23 00:59:14.769993 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 00:59:14.770002 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 00:59:14.770008 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 00:59:14.770015 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:59:14.770023 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:59:14.770030 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:59:14.770037 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 00:59:14.770044 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:59:14.770050 kernel: Console: colour dummy device 80x25
Jan 23 00:59:14.770057 kernel: printk: legacy console [tty0] enabled
Jan 23 00:59:14.770066 kernel: printk: legacy console [ttyS0] enabled
Jan 23 00:59:14.770073 kernel: ACPI: Core revision 20240827
Jan 23 00:59:14.770080 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 00:59:14.770087 kernel: x2apic enabled
Jan 23 00:59:14.770094 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 00:59:14.770102 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 00:59:14.770109 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 00:59:14.770127 kernel: kvm-guest: setup PV IPIs
Jan 23 00:59:14.770134 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21133e85697, max_idle_ns: 440795250946 ns
Jan 23 00:59:14.770143 kernel: Calibrating delay loop (skipped) preset value.. 4589.18 BogoMIPS (lpj=2294590)
Jan 23 00:59:14.770150 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 00:59:14.770157 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 00:59:14.770164 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 00:59:14.770171 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 00:59:14.770178 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jan 23 00:59:14.770184 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 23 00:59:14.770191 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 23 00:59:14.770198 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 00:59:14.770205 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 00:59:14.770212 kernel: TAA: Mitigation: Clear CPU buffers
Jan 23 00:59:14.770220 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jan 23 00:59:14.770227 kernel: active return thunk: its_return_thunk
Jan 23 00:59:14.770233 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 00:59:14.770240 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 00:59:14.770248 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 00:59:14.770254 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 00:59:14.770261 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 00:59:14.770268 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 00:59:14.770274 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 00:59:14.770281 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 00:59:14.770289 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 00:59:14.770296 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 00:59:14.770302 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 00:59:14.770309 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 00:59:14.770315 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 23 00:59:14.770322 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 23 00:59:14.770329 kernel: Freeing SMP alternatives memory: 32K
Jan 23 00:59:14.770335 kernel: pid_max: default: 32768 minimum: 301
Jan 23 00:59:14.770342 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 00:59:14.770348 kernel: landlock: Up and running.
Jan 23 00:59:14.770355 kernel: SELinux: Initializing.
Jan 23 00:59:14.770361 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:59:14.770369 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:59:14.770376 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Jan 23 00:59:14.770383 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver.
Jan 23 00:59:14.770390 kernel: ... version: 2
Jan 23 00:59:14.770397 kernel: ... bit width: 48
Jan 23 00:59:14.770403 kernel: ... generic registers: 8
Jan 23 00:59:14.770410 kernel: ... value mask: 0000ffffffffffff
Jan 23 00:59:14.770417 kernel: ... max period: 00007fffffffffff
Jan 23 00:59:14.770424 kernel: ... fixed-purpose events: 3
Jan 23 00:59:14.770431 kernel: ... event mask: 00000007000000ff
Jan 23 00:59:14.770439 kernel: signal: max sigframe size: 3632
Jan 23 00:59:14.770446 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 00:59:14.770453 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 00:59:14.770459 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 00:59:14.770466 kernel: smp: Bringing up secondary CPUs ...
Jan 23 00:59:14.770473 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 00:59:14.770480 kernel: .... node #0, CPUs: #1
Jan 23 00:59:14.770487 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 00:59:14.770494 kernel: smpboot: Total of 2 processors activated (9178.36 BogoMIPS)
Jan 23 00:59:14.770503 kernel: Memory: 3945188K/4186776K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 236708K reserved, 0K cma-reserved)
Jan 23 00:59:14.770510 kernel: devtmpfs: initialized
Jan 23 00:59:14.770517 kernel: x86/mm: Memory block size: 128MB
Jan 23 00:59:14.770524 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 23 00:59:14.770531 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 23 00:59:14.770539 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 23 00:59:14.771162 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 23 00:59:14.771170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes)
Jan 23 00:59:14.771178 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes)
Jan 23 00:59:14.771188 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 00:59:14.771195 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 00:59:14.771202 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 00:59:14.771209 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 00:59:14.771215 kernel: audit: initializing netlink subsys (disabled)
Jan 23 00:59:14.771223 kernel: audit: type=2000 audit(1769129951.799:1): state=initialized audit_enabled=0 res=1
Jan 23 00:59:14.771229 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 00:59:14.771236 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 00:59:14.771245 kernel: cpuidle: using governor menu
Jan 23 00:59:14.771251 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 00:59:14.771258 kernel: dca service started, version 1.12.1
Jan 23 00:59:14.771265 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 23 00:59:14.771272 kernel: PCI: Using configuration type 1 for base access
Jan 23 00:59:14.771279 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 00:59:14.771286 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 00:59:14.771293 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 00:59:14.771300 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 00:59:14.771308 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 00:59:14.771315 kernel: ACPI: Added _OSI(Module Device)
Jan 23 00:59:14.771329 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 00:59:14.771336 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 00:59:14.771342 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 00:59:14.771349 kernel: ACPI: Interpreter enabled
Jan 23 00:59:14.771356 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 00:59:14.771363 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 00:59:14.771370 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 00:59:14.771377 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 00:59:14.771386 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 00:59:14.771393 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 00:59:14.771514 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 00:59:14.771583 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 00:59:14.771647 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 00:59:14.771656 kernel: PCI host bridge to bus 0000:00
Jan 23 00:59:14.771722 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 00:59:14.771792 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 00:59:14.771866 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 00:59:14.771923 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 23 00:59:14.771979 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 00:59:14.772036 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window]
Jan 23 00:59:14.772093 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 00:59:14.774630 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 00:59:14.774719 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 23 00:59:14.774786 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Jan 23 00:59:14.774851 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref]
Jan 23 00:59:14.774916 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff]
Jan 23 00:59:14.775046 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 00:59:14.775197 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 00:59:14.775277 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.775344 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff]
Jan 23 00:59:14.775410 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 00:59:14.775474 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff]
Jan 23 00:59:14.775538 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff]
Jan 23 00:59:14.775601 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Jan 23 00:59:14.775669 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.775736 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff]
Jan 23 00:59:14.775800 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 00:59:14.775863 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff]
Jan 23 00:59:14.775927 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref]
Jan 23 00:59:14.775996 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.776061 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff]
Jan 23 00:59:14.776634 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 00:59:14.776714 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff]
Jan 23 00:59:14.776780 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref]
Jan 23 00:59:14.776856 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.776924 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff]
Jan 23 00:59:14.776990 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 00:59:14.777056 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff]
Jan 23 00:59:14.778158 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref]
Jan 23 00:59:14.778252 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.778320 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff]
Jan 23 00:59:14.778387 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 00:59:14.778453 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff]
Jan 23 00:59:14.778518 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref]
Jan 23 00:59:14.778589 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.778656 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff]
Jan 23 00:59:14.778724 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 00:59:14.778789 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff]
Jan 23 00:59:14.778855 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref]
Jan 23 00:59:14.778928 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.778995 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff]
Jan 23 00:59:14.779061 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 00:59:14.779140 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff]
Jan 23 00:59:14.779207 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref]
Jan 23 00:59:14.779277 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.779344 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff]
Jan 23 00:59:14.779410 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 00:59:14.779475 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff]
Jan 23 00:59:14.779540 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref]
Jan 23 00:59:14.779617 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.779683 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff]
Jan 23 00:59:14.779748 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Jan 23 00:59:14.781218 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff]
Jan 23 00:59:14.781305 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref]
Jan 23 00:59:14.781385 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:59:14.781472 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff]
Jan 23 00:59:14.781544 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Jan 23 00:59:14.781611 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff]
Jan 23 00:59:14.781677 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref]
Jan 23 00:59:14.781750 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.781817 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff] Jan 23 00:59:14.781884 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 00:59:14.781952 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 23 00:59:14.782022 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 00:59:14.782095 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.783230 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff] Jan 23 00:59:14.783316 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 00:59:14.783388 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 23 00:59:14.783457 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 00:59:14.783537 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.783607 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff] Jan 23 00:59:14.783675 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 00:59:14.783743 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 23 00:59:14.783812 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 00:59:14.783885 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.783957 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Jan 23 00:59:14.784025 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 00:59:14.784092 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 23 00:59:14.785188 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 00:59:14.785271 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.785342 kernel: pci 0000:00:03.6: BAR 0 [mem 
0x8438f000-0x8438ffff] Jan 23 00:59:14.785411 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 00:59:14.785483 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 23 00:59:14.785552 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 00:59:14.785626 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.785696 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Jan 23 00:59:14.785764 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 00:59:14.785832 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 23 00:59:14.785900 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 00:59:14.785978 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.786048 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Jan 23 00:59:14.786130 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 00:59:14.786200 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 23 00:59:14.786268 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 00:59:14.786339 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.786405 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Jan 23 00:59:14.786475 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 00:59:14.786541 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 23 00:59:14.786607 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 00:59:14.786678 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.786744 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Jan 23 00:59:14.786810 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 00:59:14.786874 kernel: pci 0000:00:04.2: bridge window [mem 
0x81c00000-0x81dfffff] Jan 23 00:59:14.786942 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 00:59:14.787013 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.787081 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Jan 23 00:59:14.789109 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 00:59:14.789209 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 23 00:59:14.789279 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 00:59:14.789354 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.789423 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Jan 23 00:59:14.789488 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 00:59:14.789552 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 23 00:59:14.789616 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 00:59:14.789686 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.789751 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Jan 23 00:59:14.789814 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 00:59:14.789878 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 23 00:59:14.789945 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 00:59:14.790015 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.790079 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Jan 23 00:59:14.791695 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 00:59:14.791773 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 23 00:59:14.791840 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 00:59:14.791915 kernel: pci 
0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.791983 kernel: pci 0000:00:04.7: BAR 0 [mem 0x84386000-0x84386fff] Jan 23 00:59:14.792050 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 00:59:14.792135 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 23 00:59:14.792204 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 00:59:14.792281 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.792347 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Jan 23 00:59:14.792414 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 00:59:14.792493 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 23 00:59:14.792562 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 00:59:14.792637 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.792704 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Jan 23 00:59:14.792772 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 00:59:14.792837 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 23 00:59:14.792906 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 00:59:14.792977 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.793043 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Jan 23 00:59:14.793108 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 00:59:14.793195 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 00:59:14.793264 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 00:59:14.793335 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.793402 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Jan 23 00:59:14.793467 kernel: 
pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 00:59:14.793532 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 00:59:14.793598 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 00:59:14.793669 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 00:59:14.793738 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Jan 23 00:59:14.793804 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 00:59:14.793869 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 23 00:59:14.793935 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 00:59:14.794007 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 00:59:14.794075 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 00:59:14.794490 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 00:59:14.794567 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Jan 23 00:59:14.794634 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Jan 23 00:59:14.794707 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 00:59:14.794774 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Jan 23 00:59:14.794849 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 23 00:59:14.794919 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Jan 23 00:59:14.794990 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 00:59:14.795064 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 23 00:59:14.795158 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 23 00:59:14.795229 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 00:59:14.795301 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 
00:59:14.795380 kernel: pci_bus 0000:02: extended config space not accessible Jan 23 00:59:14.795391 kernel: acpiphp: Slot [1] registered Jan 23 00:59:14.795398 kernel: acpiphp: Slot [0] registered Jan 23 00:59:14.795408 kernel: acpiphp: Slot [2] registered Jan 23 00:59:14.795415 kernel: acpiphp: Slot [3] registered Jan 23 00:59:14.795423 kernel: acpiphp: Slot [4] registered Jan 23 00:59:14.795430 kernel: acpiphp: Slot [5] registered Jan 23 00:59:14.795437 kernel: acpiphp: Slot [6] registered Jan 23 00:59:14.795444 kernel: acpiphp: Slot [7] registered Jan 23 00:59:14.795452 kernel: acpiphp: Slot [8] registered Jan 23 00:59:14.795459 kernel: acpiphp: Slot [9] registered Jan 23 00:59:14.795466 kernel: acpiphp: Slot [10] registered Jan 23 00:59:14.795473 kernel: acpiphp: Slot [11] registered Jan 23 00:59:14.795483 kernel: acpiphp: Slot [12] registered Jan 23 00:59:14.795491 kernel: acpiphp: Slot [13] registered Jan 23 00:59:14.795498 kernel: acpiphp: Slot [14] registered Jan 23 00:59:14.795505 kernel: acpiphp: Slot [15] registered Jan 23 00:59:14.795513 kernel: acpiphp: Slot [16] registered Jan 23 00:59:14.795520 kernel: acpiphp: Slot [17] registered Jan 23 00:59:14.795527 kernel: acpiphp: Slot [18] registered Jan 23 00:59:14.795534 kernel: acpiphp: Slot [19] registered Jan 23 00:59:14.795542 kernel: acpiphp: Slot [20] registered Jan 23 00:59:14.795550 kernel: acpiphp: Slot [21] registered Jan 23 00:59:14.795557 kernel: acpiphp: Slot [22] registered Jan 23 00:59:14.795564 kernel: acpiphp: Slot [23] registered Jan 23 00:59:14.795572 kernel: acpiphp: Slot [24] registered Jan 23 00:59:14.795579 kernel: acpiphp: Slot [25] registered Jan 23 00:59:14.795586 kernel: acpiphp: Slot [26] registered Jan 23 00:59:14.795594 kernel: acpiphp: Slot [27] registered Jan 23 00:59:14.795601 kernel: acpiphp: Slot [28] registered Jan 23 00:59:14.795608 kernel: acpiphp: Slot [29] registered Jan 23 00:59:14.795615 kernel: acpiphp: Slot [30] registered Jan 23 00:59:14.795624 kernel: acpiphp: 
Slot [31] registered Jan 23 00:59:14.795703 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Jan 23 00:59:14.795776 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Jan 23 00:59:14.795847 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 00:59:14.795856 kernel: acpiphp: Slot [0-2] registered Jan 23 00:59:14.795933 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 23 00:59:14.796003 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Jan 23 00:59:14.796074 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Jan 23 00:59:14.796357 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 23 00:59:14.796433 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 00:59:14.796444 kernel: acpiphp: Slot [0-3] registered Jan 23 00:59:14.796535 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 23 00:59:14.796617 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Jan 23 00:59:14.796686 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Jan 23 00:59:14.796757 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 00:59:14.796768 kernel: acpiphp: Slot [0-4] registered Jan 23 00:59:14.796841 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 23 00:59:14.796913 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Jan 23 00:59:14.796984 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 00:59:14.796994 kernel: acpiphp: Slot [0-5] registered Jan 23 00:59:14.797073 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 23 00:59:14.797160 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Jan 23 00:59:14.797233 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Jan 23 00:59:14.797303 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 00:59:14.797313 kernel: acpiphp: Slot [0-6] 
registered Jan 23 00:59:14.797383 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 00:59:14.797393 kernel: acpiphp: Slot [0-7] registered Jan 23 00:59:14.797461 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 00:59:14.797471 kernel: acpiphp: Slot [0-8] registered Jan 23 00:59:14.797860 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 00:59:14.797876 kernel: acpiphp: Slot [0-9] registered Jan 23 00:59:14.797948 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 00:59:14.797959 kernel: acpiphp: Slot [0-10] registered Jan 23 00:59:14.798027 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 00:59:14.798037 kernel: acpiphp: Slot [0-11] registered Jan 23 00:59:14.798105 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 00:59:14.798154 kernel: acpiphp: Slot [0-12] registered Jan 23 00:59:14.798229 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 00:59:14.798239 kernel: acpiphp: Slot [0-13] registered Jan 23 00:59:14.798307 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 00:59:14.798317 kernel: acpiphp: Slot [0-14] registered Jan 23 00:59:14.799175 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 00:59:14.799191 kernel: acpiphp: Slot [0-15] registered Jan 23 00:59:14.799278 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 00:59:14.799290 kernel: acpiphp: Slot [0-16] registered Jan 23 00:59:14.799371 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 00:59:14.799383 kernel: acpiphp: Slot [0-17] registered Jan 23 00:59:14.799462 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 00:59:14.799473 kernel: acpiphp: Slot [0-18] registered Jan 23 00:59:14.799541 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 00:59:14.799551 kernel: acpiphp: Slot [0-19] registered Jan 23 00:59:14.799618 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 00:59:14.799628 kernel: acpiphp: Slot [0-20] registered Jan 23 00:59:14.799696 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 00:59:14.799707 
kernel: acpiphp: Slot [0-21] registered Jan 23 00:59:14.799774 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 00:59:14.799784 kernel: acpiphp: Slot [0-22] registered Jan 23 00:59:14.799852 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 00:59:14.799862 kernel: acpiphp: Slot [0-23] registered Jan 23 00:59:14.799928 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 00:59:14.799940 kernel: acpiphp: Slot [0-24] registered Jan 23 00:59:14.800006 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 00:59:14.800015 kernel: acpiphp: Slot [0-25] registered Jan 23 00:59:14.800082 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 00:59:14.800093 kernel: acpiphp: Slot [0-26] registered Jan 23 00:59:14.800189 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 00:59:14.800200 kernel: acpiphp: Slot [0-27] registered Jan 23 00:59:14.800267 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 00:59:14.800279 kernel: acpiphp: Slot [0-28] registered Jan 23 00:59:14.800347 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 00:59:14.800357 kernel: acpiphp: Slot [0-29] registered Jan 23 00:59:14.800424 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 00:59:14.800435 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 00:59:14.800443 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 00:59:14.800461 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 00:59:14.800469 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 00:59:14.800477 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 00:59:14.800488 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 00:59:14.800495 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 00:59:14.800503 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 00:59:14.800510 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 00:59:14.800518 kernel: ACPI: PCI: Interrupt 
link GSIB configured for IRQ 17 Jan 23 00:59:14.800526 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 00:59:14.800533 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 00:59:14.800541 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 00:59:14.800549 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 00:59:14.800558 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 00:59:14.800566 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 00:59:14.800573 kernel: iommu: Default domain type: Translated Jan 23 00:59:14.800581 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 00:59:14.800588 kernel: efivars: Registered efivars operations Jan 23 00:59:14.800596 kernel: PCI: Using ACPI for IRQ routing Jan 23 00:59:14.800603 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 00:59:14.800611 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 23 00:59:14.800618 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 23 00:59:14.800627 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Jan 23 00:59:14.800634 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Jan 23 00:59:14.800641 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Jan 23 00:59:14.800648 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Jan 23 00:59:14.800655 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Jan 23 00:59:14.800662 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] Jan 23 00:59:14.800669 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Jan 23 00:59:14.800739 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 00:59:14.800808 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 00:59:14.800883 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 00:59:14.800906 kernel: vgaarb: loaded Jan 23 00:59:14.800925 
kernel: clocksource: Switched to clocksource kvm-clock Jan 23 00:59:14.800943 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 00:59:14.800962 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 00:59:14.800971 kernel: pnp: PnP ACPI init Jan 23 00:59:14.801047 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Jan 23 00:59:14.801061 kernel: pnp: PnP ACPI: found 5 devices Jan 23 00:59:14.801068 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 00:59:14.801076 kernel: NET: Registered PF_INET protocol family Jan 23 00:59:14.801083 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 00:59:14.801090 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 00:59:14.801098 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 00:59:14.801105 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 00:59:14.802136 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 00:59:14.802153 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 00:59:14.802166 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 00:59:14.802176 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 00:59:14.802185 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 00:59:14.802194 kernel: NET: Registered PF_XDP protocol family Jan 23 00:59:14.802294 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Jan 23 00:59:14.802369 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 23 00:59:14.802441 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 23 00:59:14.802512 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] 
add_size 1000 Jan 23 00:59:14.802583 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 23 00:59:14.802655 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 23 00:59:14.802723 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 23 00:59:14.802791 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 23 00:59:14.802859 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 23 00:59:14.802927 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 23 00:59:14.802995 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 23 00:59:14.803062 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 23 00:59:14.803144 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 23 00:59:14.803212 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 23 00:59:14.803280 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 23 00:59:14.803348 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 23 00:59:14.803415 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 23 00:59:14.803482 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 23 00:59:14.803549 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 23 00:59:14.803616 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 23 00:59:14.803686 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 23 00:59:14.803753 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 23 00:59:14.803820 kernel: pci 
0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 23 00:59:14.803903 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 23 00:59:14.803971 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 23 00:59:14.804037 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 23 00:59:14.804103 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 23 00:59:14.805246 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 23 00:59:14.805323 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 23 00:59:14.805392 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 23 00:59:14.805462 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 23 00:59:14.805529 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 23 00:59:14.805598 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 23 00:59:14.805666 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 23 00:59:14.805733 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Jan 23 00:59:14.805799 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Jan 23 00:59:14.805869 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Jan 23 00:59:14.805936 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Jan 23 00:59:14.806003 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Jan 23 00:59:14.806070 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Jan 23 00:59:14.806149 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Jan 23 00:59:14.806215 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Jan 23 00:59:14.806282 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: 
can't assign; no space Jan 23 00:59:14.806348 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.806417 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.806483 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.806549 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.806615 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.806682 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.806749 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.806815 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.806881 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.806949 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.807015 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.807081 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.808187 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.808269 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.808341 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.808410 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.808495 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.808564 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.808630 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.808689 kernel: pci 0000:00:05.0: bridge 
window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.808794 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.808921 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.809050 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.809143 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.809211 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.809275 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.809340 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.809401 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.809461 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.809522 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Jan 23 00:59:14.809586 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Jan 23 00:59:14.809652 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Jan 23 00:59:14.809718 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Jan 23 00:59:14.809781 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Jan 23 00:59:14.809844 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Jan 23 00:59:14.809907 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Jan 23 00:59:14.809970 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Jan 23 00:59:14.810034 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Jan 23 00:59:14.810098 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Jan 23 00:59:14.810178 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Jan 23 00:59:14.810646 kernel: 
pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Jan 23 00:59:14.810718 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Jan 23 00:59:14.810788 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.810879 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.810947 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.811013 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.811078 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.811162 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.811228 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.811297 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.811364 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.811431 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.811497 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.811563 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.811628 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.811695 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.811765 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.811834 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.811900 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.811967 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.812034 
kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.812099 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.812178 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.812246 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.812317 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.812385 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.812464 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.812533 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.812601 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.812667 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.812734 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 00:59:14.812802 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 23 00:59:14.812899 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 23 00:59:14.812971 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Jan 23 00:59:14.813039 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Jan 23 00:59:14.813108 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 00:59:14.813205 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 23 00:59:14.813271 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Jan 23 00:59:14.813336 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Jan 23 00:59:14.813400 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 00:59:14.813469 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned 
Jan 23 00:59:14.813536 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 23 00:59:14.813599 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Jan 23 00:59:14.813663 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Jan 23 00:59:14.813726 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 23 00:59:14.813790 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Jan 23 00:59:14.813854 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 00:59:14.813917 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 23 00:59:14.813981 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Jan 23 00:59:14.814044 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 00:59:14.814107 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 23 00:59:14.814182 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Jan 23 00:59:14.814246 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 00:59:14.814314 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 23 00:59:14.814378 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Jan 23 00:59:14.814441 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 00:59:14.814508 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 23 00:59:14.814575 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Jan 23 00:59:14.814641 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 00:59:14.814707 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 23 00:59:14.814773 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Jan 23 00:59:14.814839 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 00:59:14.814902 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Jan 23 
00:59:14.814973 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Jan 23 00:59:14.815037 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 00:59:14.815100 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Jan 23 00:59:14.815200 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Jan 23 00:59:14.815268 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 00:59:14.815332 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Jan 23 00:59:14.815396 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Jan 23 00:59:14.815461 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 00:59:14.815526 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Jan 23 00:59:14.815591 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Jan 23 00:59:14.815658 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 00:59:14.815722 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Jan 23 00:59:14.815787 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Jan 23 00:59:14.815852 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 00:59:14.815920 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Jan 23 00:59:14.815986 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Jan 23 00:59:14.816050 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 00:59:14.816125 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Jan 23 00:59:14.816193 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Jan 23 00:59:14.816259 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 00:59:14.816325 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Jan 23 00:59:14.816391 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Jan 
23 00:59:14.816468 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 00:59:14.816542 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Jan 23 00:59:14.816610 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Jan 23 00:59:14.816677 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Jan 23 00:59:14.816744 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 00:59:14.816810 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Jan 23 00:59:14.816875 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Jan 23 00:59:14.816941 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Jan 23 00:59:14.817009 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 00:59:14.817076 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Jan 23 00:59:14.817157 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Jan 23 00:59:14.817223 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Jan 23 00:59:14.817289 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 00:59:14.817355 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Jan 23 00:59:14.817421 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Jan 23 00:59:14.817518 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Jan 23 00:59:14.817593 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 00:59:14.817675 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Jan 23 00:59:14.817740 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Jan 23 00:59:14.817804 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Jan 23 00:59:14.817867 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 00:59:14.817931 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Jan 23 00:59:14.817995 kernel: pci 
0000:00:04.5: bridge window [io 0xa000-0xafff] Jan 23 00:59:14.818061 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Jan 23 00:59:14.818135 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 00:59:14.818203 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Jan 23 00:59:14.818269 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Jan 23 00:59:14.818336 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Jan 23 00:59:14.818401 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 00:59:14.818467 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Jan 23 00:59:14.818544 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Jan 23 00:59:14.818611 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Jan 23 00:59:14.818676 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 00:59:14.818743 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Jan 23 00:59:14.818809 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Jan 23 00:59:14.818874 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Jan 23 00:59:14.818939 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 00:59:14.819010 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Jan 23 00:59:14.819076 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Jan 23 00:59:14.819156 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Jan 23 00:59:14.819222 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 00:59:14.819290 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Jan 23 00:59:14.819355 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Jan 23 00:59:14.819421 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Jan 23 00:59:14.819486 kernel: pci 0000:00:05.2: bridge window [mem 
0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 00:59:14.819556 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Jan 23 00:59:14.819622 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Jan 23 00:59:14.819688 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Jan 23 00:59:14.819754 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Jan 23 00:59:14.819822 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Jan 23 00:59:14.819888 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Jan 23 00:59:14.819954 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Jan 23 00:59:14.820019 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 00:59:14.820089 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 00:59:14.820172 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 00:59:14.820234 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 00:59:14.820295 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Jan 23 00:59:14.820355 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 23 00:59:14.820415 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Jan 23 00:59:14.821157 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Jan 23 00:59:14.821246 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Jan 23 00:59:14.821311 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 00:59:14.821379 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Jan 23 00:59:14.821445 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Jan 23 00:59:14.821508 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Jan 23 00:59:14.821576 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Jan 23 00:59:14.821639 kernel: pci_bus 0000:03: resource 2 [mem 
0x380800000000-0x380fffffffff 64bit pref] Jan 23 00:59:14.821704 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Jan 23 00:59:14.821763 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Jan 23 00:59:14.821827 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Jan 23 00:59:14.821888 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Jan 23 00:59:14.821952 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Jan 23 00:59:14.822012 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Jan 23 00:59:14.822082 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Jan 23 00:59:14.823213 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Jan 23 00:59:14.823304 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Jan 23 00:59:14.823369 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Jan 23 00:59:14.823436 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Jan 23 00:59:14.823499 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Jan 23 00:59:14.823567 kernel: pci_bus 0000:0a: resource 1 [mem 0x83000000-0x831fffff] Jan 23 00:59:14.823627 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Jan 23 00:59:14.823692 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Jan 23 00:59:14.823754 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Jan 23 00:59:14.823824 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Jan 23 00:59:14.823885 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Jan 23 00:59:14.823953 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Jan 23 00:59:14.824015 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Jan 23 00:59:14.824080 
kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Jan 23 00:59:14.824180 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Jan 23 00:59:14.824248 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Jan 23 00:59:14.824315 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Jan 23 00:59:14.824386 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Jan 23 00:59:14.824461 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Jan 23 00:59:14.824532 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Jan 23 00:59:14.824596 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Jan 23 00:59:14.824664 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Jan 23 00:59:14.824782 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Jan 23 00:59:14.824850 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Jan 23 00:59:14.824917 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Jan 23 00:59:14.824979 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Jan 23 00:59:14.825039 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Jan 23 00:59:14.825106 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Jan 23 00:59:14.826651 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Jan 23 00:59:14.826728 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Jan 23 00:59:14.826799 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Jan 23 00:59:14.826863 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Jan 23 00:59:14.826926 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Jan 23 00:59:14.826995 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Jan 23 00:59:14.827059 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] 
Jan 23 00:59:14.827132 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Jan 23 00:59:14.827204 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Jan 23 00:59:14.827267 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Jan 23 00:59:14.827331 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Jan 23 00:59:14.827399 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Jan 23 00:59:14.827463 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Jan 23 00:59:14.827525 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Jan 23 00:59:14.827594 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Jan 23 00:59:14.827658 kernel: pci_bus 0000:19: resource 1 [mem 0x81200000-0x813fffff] Jan 23 00:59:14.827719 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Jan 23 00:59:14.827785 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Jan 23 00:59:14.827847 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Jan 23 00:59:14.827907 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Jan 23 00:59:14.827972 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Jan 23 00:59:14.828036 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Jan 23 00:59:14.828097 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Jan 23 00:59:14.830219 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Jan 23 00:59:14.830297 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Jan 23 00:59:14.830366 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Jan 23 00:59:14.830437 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Jan 23 00:59:14.830499 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Jan 23 00:59:14.830564 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit 
pref] Jan 23 00:59:14.830628 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Jan 23 00:59:14.830688 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Jan 23 00:59:14.830747 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Jan 23 00:59:14.830758 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 00:59:14.830766 kernel: PCI: CLS 0 bytes, default 64 Jan 23 00:59:14.830773 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 00:59:14.830781 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Jan 23 00:59:14.830790 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 00:59:14.830797 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21133e85697, max_idle_ns: 440795250946 ns Jan 23 00:59:14.830805 kernel: Initialise system trusted keyrings Jan 23 00:59:14.830812 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 00:59:14.830819 kernel: Key type asymmetric registered Jan 23 00:59:14.830826 kernel: Asymmetric key parser 'x509' registered Jan 23 00:59:14.830834 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 00:59:14.830841 kernel: io scheduler mq-deadline registered Jan 23 00:59:14.830849 kernel: io scheduler kyber registered Jan 23 00:59:14.830856 kernel: io scheduler bfq registered Jan 23 00:59:14.830929 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 00:59:14.830997 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 00:59:14.831064 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 00:59:14.831152 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 00:59:14.831220 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 00:59:14.831291 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 00:59:14.831358 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 00:59:14.831424 
kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 00:59:14.831489 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 00:59:14.831553 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 00:59:14.831618 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 00:59:14.831686 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 23 00:59:14.831751 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 00:59:14.831816 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 00:59:14.831880 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 00:59:14.831945 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 00:59:14.831954 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 00:59:14.832023 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 23 00:59:14.832089 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 23 00:59:14.833096 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Jan 23 00:59:14.833185 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Jan 23 00:59:14.833254 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Jan 23 00:59:14.833322 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Jan 23 00:59:14.833393 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Jan 23 00:59:14.833458 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Jan 23 00:59:14.833524 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Jan 23 00:59:14.833588 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Jan 23 00:59:14.833820 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Jan 23 00:59:14.833891 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Jan 23 00:59:14.833958 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Jan 23 00:59:14.834022 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Jan 23 00:59:14.834088 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Jan 23 00:59:14.834164 kernel: 
pcieport 0000:00:03.7: AER: enabled with IRQ 39 Jan 23 00:59:14.834174 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 00:59:14.834238 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Jan 23 00:59:14.834306 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Jan 23 00:59:14.834375 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Jan 23 00:59:14.834440 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Jan 23 00:59:14.834508 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Jan 23 00:59:14.834575 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Jan 23 00:59:14.834642 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Jan 23 00:59:14.834708 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Jan 23 00:59:14.834776 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Jan 23 00:59:14.834843 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Jan 23 00:59:14.834910 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Jan 23 00:59:14.834972 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Jan 23 00:59:14.835032 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Jan 23 00:59:14.835093 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Jan 23 00:59:14.836427 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Jan 23 00:59:14.836521 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Jan 23 00:59:14.836533 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 23 00:59:14.836602 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Jan 23 00:59:14.836670 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Jan 23 00:59:14.836732 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Jan 23 00:59:14.836794 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Jan 23 00:59:14.836855 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Jan 23 00:59:14.836916 kernel: pcieport 0000:00:05.2: AER: enabled with IRQ 50 Jan 23 00:59:14.837508 kernel: pcieport 0000:00:05.3: 
PME: Signaling with IRQ 51 Jan 23 00:59:14.837582 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Jan 23 00:59:14.837654 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Jan 23 00:59:14.837722 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Jan 23 00:59:14.837736 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 00:59:14.837744 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 00:59:14.837752 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 00:59:14.837760 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 00:59:14.837767 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 00:59:14.837774 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 00:59:14.837847 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 00:59:14.837858 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 00:59:14.837921 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 00:59:14.837981 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T00:59:14 UTC (1769129954) Jan 23 00:59:14.838040 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 23 00:59:14.838049 kernel: intel_pstate: CPU model not supported Jan 23 00:59:14.838056 kernel: efifb: probing for efifb Jan 23 00:59:14.838064 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Jan 23 00:59:14.838071 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 23 00:59:14.838079 kernel: efifb: scrolling: redraw Jan 23 00:59:14.838088 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 00:59:14.838096 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 00:59:14.838103 kernel: fb0: EFI VGA frame buffer device Jan 23 00:59:14.838110 kernel: pstore: Using crash dump compression: deflate Jan 23 00:59:14.838133 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 
00:59:14.838141 kernel: NET: Registered PF_INET6 protocol family Jan 23 00:59:14.838148 kernel: Segment Routing with IPv6 Jan 23 00:59:14.838155 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 00:59:14.838583 kernel: NET: Registered PF_PACKET protocol family Jan 23 00:59:14.838591 kernel: Key type dns_resolver registered Jan 23 00:59:14.838602 kernel: IPI shorthand broadcast: enabled Jan 23 00:59:14.838609 kernel: sched_clock: Marking stable (3481001993, 147320644)->(3724168764, -95846127) Jan 23 00:59:14.838616 kernel: registered taskstats version 1 Jan 23 00:59:14.838624 kernel: Loading compiled-in X.509 certificates Jan 23 00:59:14.838631 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a' Jan 23 00:59:14.838639 kernel: Demotion targets for Node 0: null Jan 23 00:59:14.838646 kernel: Key type .fscrypt registered Jan 23 00:59:14.838653 kernel: Key type fscrypt-provisioning registered Jan 23 00:59:14.838661 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 00:59:14.838670 kernel: ima: Allocated hash algorithm: sha1 Jan 23 00:59:14.838677 kernel: ima: No architecture policies found Jan 23 00:59:14.838684 kernel: clk: Disabling unused clocks Jan 23 00:59:14.838692 kernel: Warning: unable to open an initial console. Jan 23 00:59:14.838700 kernel: Freeing unused kernel image (initmem) memory: 46196K Jan 23 00:59:14.838708 kernel: Write protecting the kernel read-only data: 40960k Jan 23 00:59:14.838715 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 00:59:14.838722 kernel: Run /init as init process Jan 23 00:59:14.838730 kernel: with arguments: Jan 23 00:59:14.838739 kernel: /init Jan 23 00:59:14.838746 kernel: with environment: Jan 23 00:59:14.838753 kernel: HOME=/ Jan 23 00:59:14.838760 kernel: TERM=linux Jan 23 00:59:14.838769 systemd[1]: Successfully made /usr/ read-only. 
Jan 23 00:59:14.838780 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:59:14.838789 systemd[1]: Detected virtualization kvm. Jan 23 00:59:14.838799 systemd[1]: Detected architecture x86-64. Jan 23 00:59:14.838806 systemd[1]: Running in initrd. Jan 23 00:59:14.838814 systemd[1]: No hostname configured, using default hostname. Jan 23 00:59:14.838823 systemd[1]: Hostname set to . Jan 23 00:59:14.838831 systemd[1]: Initializing machine ID from VM UUID. Jan 23 00:59:14.838849 systemd[1]: Queued start job for default target initrd.target. Jan 23 00:59:14.838858 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:59:14.838867 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:59:14.838876 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 00:59:14.838884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 00:59:14.838892 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 00:59:14.838903 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 00:59:14.838912 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 00:59:14.838920 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 00:59:14.838929 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 00:59:14.838938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:59:14.838946 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:59:14.838954 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:59:14.838964 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:59:14.838972 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:59:14.838980 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 00:59:14.838988 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:59:14.838995 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 00:59:14.839003 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 00:59:14.839011 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:59:14.839018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:59:14.839028 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:59:14.839035 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:59:14.839043 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 00:59:14.839051 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:59:14.839058 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 00:59:14.839066 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 00:59:14.839074 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 00:59:14.839082 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:59:14.839089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 00:59:14.839099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:59:14.839107 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 00:59:14.839381 systemd-journald[224]: Collecting audit messages is disabled. Jan 23 00:59:14.839406 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:59:14.839414 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 00:59:14.839423 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 00:59:14.839431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:59:14.839443 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 00:59:14.839452 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 00:59:14.839460 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:59:14.839467 kernel: Bridge firewalling registered Jan 23 00:59:14.839475 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 00:59:14.839483 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:59:14.839491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:59:14.839499 systemd-journald[224]: Journal started Jan 23 00:59:14.839519 systemd-journald[224]: Runtime Journal (/run/log/journal/93f0fb9d2ebb4ac7a9e2635376bfeb1e) is 8M, max 78M, 70M free. Jan 23 00:59:14.775143 systemd-modules-load[226]: Inserted module 'overlay' Jan 23 00:59:14.818539 systemd-modules-load[226]: Inserted module 'br_netfilter' Jan 23 00:59:14.842138 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 23 00:59:14.844154 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:59:14.848234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:59:14.849819 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 00:59:14.851261 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:59:14.852136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:59:14.867010 systemd-tmpfiles[262]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 00:59:14.871142 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:59:14.873567 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 00:59:14.873875 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 00:59:14.911937 systemd-resolved[277]: Positive Trust Anchors: Jan 23 00:59:14.911949 systemd-resolved[277]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:59:14.911980 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:59:14.914885 systemd-resolved[277]: Defaulting to hostname 'linux'. Jan 23 00:59:14.915729 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:59:14.917033 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:59:14.954230 kernel: SCSI subsystem initialized Jan 23 00:59:14.964138 kernel: Loading iSCSI transport class v2.0-870. Jan 23 00:59:14.974136 kernel: iscsi: registered transport (tcp) Jan 23 00:59:14.993499 kernel: iscsi: registered transport (qla4xxx) Jan 23 00:59:14.993562 kernel: QLogic iSCSI HBA Driver Jan 23 00:59:15.009847 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:59:15.024200 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:59:15.026025 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:59:15.061871 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 00:59:15.063595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 00:59:15.112146 kernel: raid6: avx512x4 gen() 42949 MB/s Jan 23 00:59:15.129146 kernel: raid6: avx512x2 gen() 44456 MB/s Jan 23 00:59:15.146172 kernel: raid6: avx512x1 gen() 44515 MB/s Jan 23 00:59:15.163155 kernel: raid6: avx2x4 gen() 35233 MB/s Jan 23 00:59:15.180162 kernel: raid6: avx2x2 gen() 34924 MB/s Jan 23 00:59:15.197629 kernel: raid6: avx2x1 gen() 27320 MB/s Jan 23 00:59:15.197720 kernel: raid6: using algorithm avx512x1 gen() 44515 MB/s Jan 23 00:59:15.217159 kernel: raid6: .... xor() 25239 MB/s, rmw enabled Jan 23 00:59:15.217258 kernel: raid6: using avx512x2 recovery algorithm Jan 23 00:59:15.235152 kernel: xor: automatically using best checksumming function avx Jan 23 00:59:15.359328 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 00:59:15.364256 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:59:15.366630 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:59:15.389200 systemd-udevd[475]: Using default interface naming scheme 'v255'. Jan 23 00:59:15.393426 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:59:15.395751 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 00:59:15.419445 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation Jan 23 00:59:15.440048 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:59:15.441522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:59:15.507126 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:59:15.511017 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 23 00:59:15.568172 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 23 00:59:15.572312 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 23 00:59:15.583945 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 00:59:15.583986 kernel: GPT:17805311 != 104857599 Jan 23 00:59:15.583997 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 00:59:15.584006 kernel: GPT:17805311 != 104857599 Jan 23 00:59:15.584014 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 00:59:15.584023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 00:59:15.592127 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 00:59:15.613139 kernel: AES CTR mode by8 optimization enabled Jan 23 00:59:15.617243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:59:15.618094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:59:15.619196 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:59:15.623453 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:59:15.658656 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 00:59:15.662517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:59:15.662612 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:59:15.666316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:59:15.682300 kernel: ACPI: bus type USB registered Jan 23 00:59:15.682347 kernel: usbcore: registered new interface driver usbfs Jan 23 00:59:15.683268 kernel: usbcore: registered new interface driver hub Jan 23 00:59:15.684126 kernel: usbcore: registered new device driver usb Jan 23 00:59:15.704136 kernel: libata version 3.00 loaded. 
Jan 23 00:59:15.711199 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Jan 23 00:59:15.711393 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 00:59:15.711405 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Jan 23 00:59:15.713775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 00:59:15.716082 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:59:15.724006 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 00:59:15.724152 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 00:59:15.724164 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 00:59:15.724248 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 00:59:15.724326 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 00:59:15.724406 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Jan 23 00:59:15.724518 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Jan 23 00:59:15.724605 kernel: hub 1-0:1.0: USB hub found Jan 23 00:59:15.726128 kernel: hub 1-0:1.0: 2 ports detected Jan 23 00:59:15.728128 kernel: scsi host0: ahci Jan 23 00:59:15.731413 kernel: scsi host1: ahci Jan 23 00:59:15.733135 kernel: scsi host2: ahci Jan 23 00:59:15.735137 kernel: scsi host3: ahci Jan 23 00:59:15.736827 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 23 00:59:15.739136 kernel: scsi host4: ahci Jan 23 00:59:15.743286 kernel: scsi host5: ahci Jan 23 00:59:15.744199 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 61 lpm-pol 1 Jan 23 00:59:15.744213 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 61 lpm-pol 1 Jan 23 00:59:15.744222 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 61 lpm-pol 1 Jan 23 00:59:15.744239 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 61 lpm-pol 1 Jan 23 00:59:15.744251 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 61 lpm-pol 1 Jan 23 00:59:15.744260 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 61 lpm-pol 1 Jan 23 00:59:15.742860 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 00:59:15.749150 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 23 00:59:15.751210 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 00:59:15.768435 disk-uuid[672]: Primary Header is updated. Jan 23 00:59:15.768435 disk-uuid[672]: Secondary Entries is updated. Jan 23 00:59:15.768435 disk-uuid[672]: Secondary Header is updated. 
Jan 23 00:59:15.774127 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 00:59:15.780130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 00:59:15.949151 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Jan 23 00:59:16.053215 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 00:59:16.053279 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 00:59:16.053289 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 00:59:16.053299 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 00:59:16.053308 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 00:59:16.054133 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 00:59:16.066149 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 00:59:16.067539 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:59:16.068211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:59:16.068574 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:59:16.069716 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 00:59:16.081740 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:59:16.131143 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 00:59:16.137379 kernel: usbcore: registered new interface driver usbhid Jan 23 00:59:16.137426 kernel: usbhid: USB HID core driver Jan 23 00:59:16.143212 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 23 00:59:16.143252 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Jan 23 00:59:16.788150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 00:59:16.788204 disk-uuid[673]: The operation has completed successfully. 
Jan 23 00:59:16.824862 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 00:59:16.824955 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 00:59:16.856567 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 00:59:16.871445 sh[699]: Success Jan 23 00:59:16.887189 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 00:59:16.887249 kernel: device-mapper: uevent: version 1.0.3 Jan 23 00:59:16.887261 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 00:59:16.897152 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 00:59:16.945558 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 00:59:16.948182 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 00:59:16.963922 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 00:59:16.975155 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (711) Jan 23 00:59:16.978310 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 00:59:16.978343 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:59:16.991230 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 00:59:16.991282 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 00:59:16.993029 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 00:59:16.993810 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:59:16.994508 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 23 00:59:16.995192 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 00:59:16.998226 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 00:59:17.025136 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (742) Jan 23 00:59:17.029212 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:59:17.029244 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:59:17.034412 kernel: BTRFS info (device vda6): turning on async discard Jan 23 00:59:17.034470 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 00:59:17.040179 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:59:17.041920 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 00:59:17.043692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 00:59:17.105230 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:59:17.115289 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:59:17.145564 systemd-networkd[881]: lo: Link UP Jan 23 00:59:17.145572 systemd-networkd[881]: lo: Gained carrier Jan 23 00:59:17.146925 systemd-networkd[881]: Enumeration completed Jan 23 00:59:17.147303 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:59:17.147307 systemd-networkd[881]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 00:59:17.148179 systemd-networkd[881]: eth0: Link UP Jan 23 00:59:17.148274 systemd-networkd[881]: eth0: Gained carrier Jan 23 00:59:17.148283 systemd-networkd[881]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:59:17.151026 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:59:17.151574 systemd[1]: Reached target network.target - Network. Jan 23 00:59:17.159212 systemd-networkd[881]: eth0: DHCPv4 address 10.0.7.172/25, gateway 10.0.7.129 acquired from 10.0.7.129 Jan 23 00:59:17.177611 ignition[795]: Ignition 2.22.0 Jan 23 00:59:17.177621 ignition[795]: Stage: fetch-offline Jan 23 00:59:17.177645 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:59:17.177651 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 00:59:17.179714 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:59:17.177718 ignition[795]: parsed url from cmdline: "" Jan 23 00:59:17.181200 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 00:59:17.177721 ignition[795]: no config URL provided Jan 23 00:59:17.177725 ignition[795]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:59:17.177730 ignition[795]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:59:17.177734 ignition[795]: failed to fetch config: resource requires networking Jan 23 00:59:17.177843 ignition[795]: Ignition finished successfully Jan 23 00:59:17.209557 ignition[890]: Ignition 2.22.0 Jan 23 00:59:17.209569 ignition[890]: Stage: fetch Jan 23 00:59:17.209683 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:59:17.209691 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 00:59:17.209759 ignition[890]: parsed url from cmdline: "" Jan 23 00:59:17.209762 ignition[890]: no config URL provided Jan 23 00:59:17.209766 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:59:17.209772 ignition[890]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:59:17.209852 ignition[890]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 23 00:59:17.210180 ignition[890]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 23 00:59:17.210205 ignition[890]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jan 23 00:59:18.018497 ignition[890]: GET result: OK Jan 23 00:59:18.018624 ignition[890]: parsing config with SHA512: b640e4275841a8de298b9229ac95f43bce41a41faf0918304dd9af19e1be523144d87cbcecaba54340407eeed502e02c25afd497f1a8a894be0b34d7f9fa0185 Jan 23 00:59:18.023949 unknown[890]: fetched base config from "system" Jan 23 00:59:18.023959 unknown[890]: fetched base config from "system" Jan 23 00:59:18.023968 unknown[890]: fetched user config from "openstack" Jan 23 00:59:18.024884 ignition[890]: fetch: fetch complete Jan 23 00:59:18.024889 ignition[890]: fetch: fetch passed Jan 23 00:59:18.024940 ignition[890]: Ignition finished successfully Jan 23 00:59:18.027017 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 00:59:18.028267 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 00:59:18.054899 ignition[897]: Ignition 2.22.0 Jan 23 00:59:18.054910 ignition[897]: Stage: kargs Jan 23 00:59:18.055022 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:59:18.055034 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 00:59:18.055584 ignition[897]: kargs: kargs passed Jan 23 00:59:18.055615 ignition[897]: Ignition finished successfully Jan 23 00:59:18.057373 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 00:59:18.058547 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 00:59:18.083967 ignition[903]: Ignition 2.22.0 Jan 23 00:59:18.084137 ignition[903]: Stage: disks Jan 23 00:59:18.084250 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:59:18.084256 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 00:59:18.084895 ignition[903]: disks: disks passed Jan 23 00:59:18.084926 ignition[903]: Ignition finished successfully Jan 23 00:59:18.086928 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 23 00:59:18.087603 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 00:59:18.088128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 00:59:18.088420 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:59:18.088924 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:59:18.089409 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:59:18.090628 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 00:59:18.119190 systemd-fsck[912]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 00:59:18.121353 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 00:59:18.123081 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 00:59:18.232140 kernel: EXT4-fs (vda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 00:59:18.233079 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 00:59:18.233973 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 00:59:18.235925 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:59:18.238208 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 00:59:18.238864 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 00:59:18.240928 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 23 00:59:18.241464 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 00:59:18.241490 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 23 00:59:18.255202 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 00:59:18.257181 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 00:59:18.267150 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920) Jan 23 00:59:18.270819 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:59:18.270872 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:59:18.276524 kernel: BTRFS info (device vda6): turning on async discard Jan 23 00:59:18.276584 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 00:59:18.279592 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 00:59:18.323140 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:18.338325 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 00:59:18.346171 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Jan 23 00:59:18.351988 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 00:59:18.355671 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 00:59:18.435616 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 00:59:18.437339 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 00:59:18.439206 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 00:59:18.457338 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 00:59:18.460137 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:59:18.479390 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 00:59:18.491299 ignition[1037]: INFO : Ignition 2.22.0 Jan 23 00:59:18.493403 ignition[1037]: INFO : Stage: mount Jan 23 00:59:18.493403 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:59:18.493403 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 00:59:18.493403 ignition[1037]: INFO : mount: mount passed Jan 23 00:59:18.493403 ignition[1037]: INFO : Ignition finished successfully Jan 23 00:59:18.496251 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 00:59:18.547294 systemd-networkd[881]: eth0: Gained IPv6LL Jan 23 00:59:19.352154 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:21.358198 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:25.364137 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:25.367204 coreos-metadata[922]: Jan 23 00:59:25.367 WARN failed to locate config-drive, using the metadata service API instead Jan 23 00:59:25.377935 coreos-metadata[922]: Jan 23 00:59:25.377 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 00:59:25.978772 coreos-metadata[922]: Jan 23 00:59:25.978 INFO Fetch successful Jan 23 00:59:25.978772 coreos-metadata[922]: Jan 23 00:59:25.978 INFO wrote hostname ci-4459-2-2-n-6e52943716 to /sysroot/etc/hostname Jan 23 00:59:25.980721 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 23 00:59:25.980810 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 23 00:59:25.982197 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 00:59:25.991741 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 23 00:59:26.010574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1055) Jan 23 00:59:26.010630 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:59:26.011980 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:59:26.017348 kernel: BTRFS info (device vda6): turning on async discard Jan 23 00:59:26.017393 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 00:59:26.019328 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 00:59:26.045456 ignition[1073]: INFO : Ignition 2.22.0 Jan 23 00:59:26.045456 ignition[1073]: INFO : Stage: files Jan 23 00:59:26.046495 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:59:26.046495 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 00:59:26.046495 ignition[1073]: DEBUG : files: compiled without relabeling support, skipping Jan 23 00:59:26.047450 ignition[1073]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 00:59:26.047450 ignition[1073]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 00:59:26.050682 ignition[1073]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 00:59:26.051107 ignition[1073]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 00:59:26.051609 ignition[1073]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 00:59:26.051413 unknown[1073]: wrote ssh authorized keys file for user: core Jan 23 00:59:26.054664 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 00:59:26.054664 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 
Jan 23 00:59:26.104000 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 00:59:26.209351 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 00:59:26.210428 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 00:59:26.210428 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 23 00:59:26.514978 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 00:59:26.763936 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 00:59:26.763936 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 00:59:26.763936 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 00:59:26.763936 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:59:26.763936 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:59:26.763936 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:59:26.763936 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:59:26.763936 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:59:26.763936 
ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:59:26.768251 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:59:26.768251 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:59:26.768251 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 00:59:26.768251 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 00:59:26.768251 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 00:59:26.768251 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 00:59:27.068754 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 00:59:28.504006 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 00:59:28.504006 ignition[1073]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 23 00:59:28.505900 ignition[1073]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:59:28.508839 ignition[1073]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:59:28.508839 ignition[1073]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 23 00:59:28.510089 ignition[1073]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 23 00:59:28.510089 ignition[1073]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 00:59:28.510089 ignition[1073]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:59:28.510089 ignition[1073]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:59:28.510089 ignition[1073]: INFO : files: files passed Jan 23 00:59:28.510089 ignition[1073]: INFO : Ignition finished successfully Jan 23 00:59:28.510721 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 00:59:28.513236 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 00:59:28.516701 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 00:59:28.526417 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 00:59:28.526893 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 00:59:28.532852 initrd-setup-root-after-ignition[1102]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:59:28.533437 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:59:28.534463 initrd-setup-root-after-ignition[1102]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:59:28.535598 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:59:28.536208 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 23 00:59:28.537572 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 00:59:28.579010 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 00:59:28.579106 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 00:59:28.580358 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 00:59:28.581038 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 00:59:28.581892 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 00:59:28.583254 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 00:59:28.601041 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:59:28.603204 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 00:59:28.621093 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:59:28.622363 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:59:28.623367 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 00:59:28.624276 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 00:59:28.624780 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:59:28.625762 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 00:59:28.626549 systemd[1]: Stopped target basic.target - Basic System. Jan 23 00:59:28.626956 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 00:59:28.627368 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:59:28.627751 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 00:59:28.629588 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jan 23 00:59:28.630272 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 00:59:28.630651 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:59:28.631330 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 00:59:28.632005 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 00:59:28.632739 systemd[1]: Stopped target swap.target - Swaps. Jan 23 00:59:28.633416 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 00:59:28.633511 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:59:28.634453 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:59:28.635140 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:59:28.635744 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 00:59:28.635819 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:59:28.636453 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 00:59:28.636545 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 00:59:28.637493 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 00:59:28.637576 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:59:28.638201 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 00:59:28.638265 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 00:59:28.640284 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 00:59:28.640687 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 00:59:28.640799 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:59:28.643613 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 23 00:59:28.645184 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 00:59:28.645647 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:59:28.646402 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 00:59:28.646473 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:59:28.650349 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 00:59:28.652340 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 00:59:28.662106 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 00:59:28.665192 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 00:59:28.665747 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 00:59:28.671369 ignition[1127]: INFO : Ignition 2.22.0 Jan 23 00:59:28.671921 ignition[1127]: INFO : Stage: umount Jan 23 00:59:28.673183 ignition[1127]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:59:28.673183 ignition[1127]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 00:59:28.673183 ignition[1127]: INFO : umount: umount passed Jan 23 00:59:28.673183 ignition[1127]: INFO : Ignition finished successfully Jan 23 00:59:28.674637 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 00:59:28.674723 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 00:59:28.675377 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 00:59:28.675437 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 00:59:28.675957 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 00:59:28.675989 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 00:59:28.676537 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 00:59:28.676569 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jan 23 00:59:28.677134 systemd[1]: Stopped target network.target - Network. Jan 23 00:59:28.677739 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 00:59:28.677773 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:59:28.678396 systemd[1]: Stopped target paths.target - Path Units. Jan 23 00:59:28.678973 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 00:59:28.682136 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:59:28.682465 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 00:59:28.683015 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 00:59:28.683584 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 00:59:28.683611 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 00:59:28.684146 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 00:59:28.684170 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:59:28.684720 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 00:59:28.684754 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 00:59:28.685298 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 00:59:28.685327 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 00:59:28.685838 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 00:59:28.685868 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 00:59:28.686470 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 00:59:28.687141 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 00:59:28.690135 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 00:59:28.690226 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 23 00:59:28.693194 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 00:59:28.693740 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 00:59:28.694237 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 00:59:28.695721 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 00:59:28.696385 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 00:59:28.696961 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 00:59:28.696999 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:59:28.699206 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 00:59:28.699945 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 00:59:28.700374 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:59:28.701124 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 00:59:28.701517 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:59:28.702300 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 00:59:28.702680 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 00:59:28.703409 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 00:59:28.703794 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:59:28.704602 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:59:28.706419 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 00:59:28.706467 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:59:28.719354 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 23 00:59:28.719462 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:59:28.720361 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 00:59:28.720445 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 00:59:28.720981 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 00:59:28.721008 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 00:59:28.721484 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 00:59:28.721506 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:59:28.722062 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 00:59:28.722093 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:59:28.722953 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 00:59:28.722981 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 00:59:28.723858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 00:59:28.723888 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:59:28.726281 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 00:59:28.726739 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 00:59:28.726780 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:59:28.728821 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 00:59:28.728866 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:59:28.729860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:59:28.729890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 00:59:28.731744 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 00:59:28.731787 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 00:59:28.731815 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:59:28.737091 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 00:59:28.737189 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 00:59:28.737942 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 00:59:28.739047 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 00:59:28.753814 systemd[1]: Switching root. Jan 23 00:59:28.791946 systemd-journald[224]: Journal stopped Jan 23 00:59:29.785011 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jan 23 00:59:29.785097 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 00:59:29.785126 kernel: SELinux: policy capability open_perms=1 Jan 23 00:59:29.785137 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 00:59:29.785146 kernel: SELinux: policy capability always_check_network=0 Jan 23 00:59:29.785161 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 00:59:29.785178 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 00:59:29.785188 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 00:59:29.785198 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 00:59:29.785211 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 00:59:29.785225 kernel: audit: type=1403 audit(1769129968.919:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 00:59:29.785241 systemd[1]: Successfully loaded SELinux policy in 65.939ms. Jan 23 00:59:29.785264 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.783ms. 
Jan 23 00:59:29.785277 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:59:29.785288 systemd[1]: Detected virtualization kvm. Jan 23 00:59:29.785299 systemd[1]: Detected architecture x86-64. Jan 23 00:59:29.785309 systemd[1]: Detected first boot. Jan 23 00:59:29.785320 systemd[1]: Hostname set to . Jan 23 00:59:29.785333 systemd[1]: Initializing machine ID from VM UUID. Jan 23 00:59:29.785344 zram_generator::config[1170]: No configuration found. Jan 23 00:59:29.785357 kernel: Guest personality initialized and is inactive Jan 23 00:59:29.785368 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 00:59:29.785378 kernel: Initialized host personality Jan 23 00:59:29.785388 kernel: NET: Registered PF_VSOCK protocol family Jan 23 00:59:29.785399 systemd[1]: Populated /etc with preset unit settings. Jan 23 00:59:29.785411 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 00:59:29.785424 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 00:59:29.785435 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 00:59:29.785446 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 00:59:29.785456 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 00:59:29.785467 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 00:59:29.785478 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 00:59:29.785488 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 23 00:59:29.785498 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 00:59:29.785509 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 00:59:29.785522 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 00:59:29.785537 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 00:59:29.785548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:59:29.785561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:59:29.785572 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 00:59:29.785583 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 00:59:29.785595 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 00:59:29.785606 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 00:59:29.785617 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 00:59:29.785628 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:59:29.785638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:59:29.785649 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 00:59:29.785660 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 00:59:29.785671 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 00:59:29.785681 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 00:59:29.785694 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 23 00:59:29.785707 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:59:29.785718 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:59:29.785730 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:59:29.785741 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 00:59:29.785752 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 00:59:29.785762 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 00:59:29.785773 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:59:29.785783 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:59:29.785796 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:59:29.785807 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 00:59:29.785817 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 00:59:29.785827 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 00:59:29.785841 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 00:59:29.785852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:59:29.785862 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 00:59:29.785872 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 00:59:29.785884 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 00:59:29.785897 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 00:59:29.785907 systemd[1]: Reached target machines.target - Containers. 
Jan 23 00:59:29.785918 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 00:59:29.785928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:59:29.785939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:59:29.785949 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 00:59:29.785959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:59:29.785970 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 00:59:29.785982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:59:29.785993 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 00:59:29.786003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:59:29.786014 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 00:59:29.786024 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 00:59:29.786035 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 00:59:29.786045 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 00:59:29.786060 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 00:59:29.786076 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:59:29.786087 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:59:29.786097 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 00:59:29.786110 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:59:29.786135 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 00:59:29.786145 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 00:59:29.786156 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:59:29.786167 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 00:59:29.786177 systemd[1]: Stopped verity-setup.service. Jan 23 00:59:29.786188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:59:29.786199 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 00:59:29.786212 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 00:59:29.786223 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 00:59:29.786232 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 00:59:29.786243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 00:59:29.786253 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 00:59:29.786263 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:59:29.786274 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 00:59:29.786284 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 00:59:29.786296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:59:29.786306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:59:29.786317 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jan 23 00:59:29.786327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:59:29.786338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:59:29.786349 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:59:29.786360 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 00:59:29.786370 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:59:29.786380 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:59:29.786412 systemd-journald[1244]: Collecting audit messages is disabled. Jan 23 00:59:29.786441 kernel: ACPI: bus type drm_connector registered Jan 23 00:59:29.786451 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 00:59:29.786462 systemd-journald[1244]: Journal started Jan 23 00:59:29.786484 systemd-journald[1244]: Runtime Journal (/run/log/journal/93f0fb9d2ebb4ac7a9e2635376bfeb1e) is 8M, max 78M, 70M free. Jan 23 00:59:29.501287 systemd[1]: Queued start job for default target multi-user.target. Jan 23 00:59:29.526255 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 00:59:29.526641 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 00:59:29.789293 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 00:59:29.790734 kernel: loop: module loaded Jan 23 00:59:29.790755 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:59:29.798131 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 00:59:29.804275 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 23 00:59:29.808133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:59:29.810133 kernel: fuse: init (API version 7.41) Jan 23 00:59:29.810173 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 00:59:29.816130 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 00:59:29.819131 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 00:59:29.825140 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:59:29.832603 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 00:59:29.835131 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 00:59:29.839132 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 00:59:29.840395 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 00:59:29.841125 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 00:59:29.843426 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 00:59:29.843563 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 00:59:29.844216 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:59:29.844342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:59:29.845024 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 00:59:29.845696 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 00:59:29.861522 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 23 00:59:29.862153 kernel: loop0: detected capacity change from 0 to 128560 Jan 23 00:59:29.864916 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 00:59:29.868394 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 00:59:29.871885 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 00:59:29.885251 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 00:59:29.885785 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 00:59:29.894864 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 00:59:29.909684 systemd-journald[1244]: Time spent on flushing to /var/log/journal/93f0fb9d2ebb4ac7a9e2635376bfeb1e is 30.750ms for 1719 entries. Jan 23 00:59:29.909684 systemd-journald[1244]: System Journal (/var/log/journal/93f0fb9d2ebb4ac7a9e2635376bfeb1e) is 8M, max 584.8M, 576.8M free. Jan 23 00:59:29.958885 systemd-journald[1244]: Received client request to flush runtime journal. Jan 23 00:59:29.958936 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 00:59:29.958960 kernel: loop1: detected capacity change from 0 to 110984 Jan 23 00:59:29.915442 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:59:29.928206 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 00:59:29.935283 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 00:59:29.941222 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 00:59:29.963986 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 00:59:29.975160 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. 
Jan 23 00:59:29.975432 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jan 23 00:59:29.979017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:59:29.982149 kernel: loop2: detected capacity change from 0 to 1640 Jan 23 00:59:29.991912 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:59:30.011147 kernel: loop3: detected capacity change from 0 to 229808 Jan 23 00:59:30.046247 kernel: loop4: detected capacity change from 0 to 128560 Jan 23 00:59:30.063169 kernel: loop5: detected capacity change from 0 to 110984 Jan 23 00:59:30.076142 kernel: loop6: detected capacity change from 0 to 1640 Jan 23 00:59:30.100156 kernel: loop7: detected capacity change from 0 to 229808 Jan 23 00:59:30.125627 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-stackit'. Jan 23 00:59:30.126437 (sd-merge)[1321]: Merged extensions into '/usr'. Jan 23 00:59:30.131899 systemd[1]: Reload requested from client PID 1275 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 00:59:30.131915 systemd[1]: Reloading... Jan 23 00:59:30.211142 zram_generator::config[1343]: No configuration found. Jan 23 00:59:30.473417 ldconfig[1271]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 00:59:30.499227 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 00:59:30.499438 systemd[1]: Reloading finished in 367 ms. Jan 23 00:59:30.533842 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 00:59:30.534695 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 00:59:30.535422 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 00:59:30.544145 systemd[1]: Starting ensure-sysext.service... 
Jan 23 00:59:30.548219 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:59:30.549983 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:59:30.560959 systemd[1]: Reload requested from client PID 1391 ('systemctl') (unit ensure-sysext.service)... Jan 23 00:59:30.560974 systemd[1]: Reloading... Jan 23 00:59:30.575698 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 00:59:30.576465 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 00:59:30.576812 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 00:59:30.577219 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 00:59:30.577913 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 00:59:30.578321 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Jan 23 00:59:30.578410 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Jan 23 00:59:30.585733 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 00:59:30.585743 systemd-tmpfiles[1392]: Skipping /boot Jan 23 00:59:30.596454 systemd-udevd[1393]: Using default interface naming scheme 'v255'. Jan 23 00:59:30.600687 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 00:59:30.600776 systemd-tmpfiles[1392]: Skipping /boot Jan 23 00:59:30.613145 zram_generator::config[1419]: No configuration found. Jan 23 00:59:30.871152 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 00:59:30.874459 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 23 00:59:30.874894 systemd[1]: Reloading finished in 313 ms. Jan 23 00:59:30.880176 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 00:59:30.884273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:59:30.890319 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:59:30.904142 kernel: ACPI: button: Power Button [PWRF] Jan 23 00:59:30.914305 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:59:30.915671 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:59:30.918981 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 00:59:30.921310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:59:30.923052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:59:30.927172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:59:30.932392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:59:30.933186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:59:30.933294 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:59:30.936450 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 00:59:30.942137 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:59:30.946437 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 23 00:59:30.959854 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 00:59:30.960353 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:59:30.962346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:59:30.962521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:59:30.963516 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:59:30.964166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:59:30.970908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 00:59:30.972970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:59:30.973161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:59:30.979815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:59:30.980012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:59:30.982597 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:59:30.985282 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 00:59:30.988627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:59:30.994831 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:59:30.999387 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 23 00:59:30.999922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 23 00:59:31.003178 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 00:59:31.003898 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:59:31.004080 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 00:59:31.008389 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 00:59:31.008952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 00:59:31.023620 systemd[1]: Finished ensure-sysext.service. Jan 23 00:59:31.026319 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 00:59:31.029724 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 00:59:31.037709 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 00:59:31.038660 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:59:31.038827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:59:31.041085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:59:31.041468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:59:31.042363 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 00:59:31.055832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:59:31.056566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 23 00:59:31.057263 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 00:59:31.059412 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 00:59:31.060030 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 00:59:31.060857 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 00:59:31.061843 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 00:59:31.073758 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 00:59:31.082432 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 00:59:31.082490 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 00:59:31.095139 kernel: PTP clock support registered Jan 23 00:59:31.095208 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 23 00:59:31.096597 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 00:59:31.103777 augenrules[1552]: No rules Jan 23 00:59:31.104710 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:59:31.105058 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:59:31.105880 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 23 00:59:31.106640 kernel: Console: switching to colour dummy device 80x25 Jan 23 00:59:31.106243 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. 
Jan 23 00:59:31.111561 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 23 00:59:31.111772 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 00:59:31.111789 kernel: [drm] features: -context_init Jan 23 00:59:31.115534 kernel: [drm] number of scanouts: 1 Jan 23 00:59:31.115567 kernel: [drm] number of cap sets: 0 Jan 23 00:59:31.115590 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 00:59:31.127157 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 23 00:59:31.127247 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 00:59:31.135149 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 00:59:31.135820 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 00:59:31.147966 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 00:59:31.148254 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 00:59:31.148376 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 00:59:31.237132 systemd-resolved[1508]: Positive Trust Anchors: Jan 23 00:59:31.237148 systemd-resolved[1508]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:59:31.237177 systemd-resolved[1508]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:59:31.241893 systemd-resolved[1508]: Using system hostname 'ci-4459-2-2-n-6e52943716'. 
Jan 23 00:59:31.242884 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:59:31.244053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:59:31.245705 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:59:31.245825 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 00:59:31.245898 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 00:59:31.245950 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 00:59:31.246217 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 00:59:31.246361 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 00:59:31.246450 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 00:59:31.250606 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 00:59:31.250645 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:59:31.250714 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:59:31.251454 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 00:59:31.252953 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 00:59:31.254742 systemd-networkd[1506]: lo: Link UP Jan 23 00:59:31.255039 systemd-networkd[1506]: lo: Gained carrier Jan 23 00:59:31.255910 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 00:59:31.256297 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 00:59:31.256362 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jan 23 00:59:31.256805 systemd-networkd[1506]: Enumeration completed Jan 23 00:59:31.258526 systemd-networkd[1506]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:59:31.258801 systemd-networkd[1506]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:59:31.259750 systemd-networkd[1506]: eth0: Link UP Jan 23 00:59:31.259923 systemd-networkd[1506]: eth0: Gained carrier Jan 23 00:59:31.260170 systemd-networkd[1506]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:59:31.260454 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 00:59:31.262305 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 00:59:31.262950 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:59:31.263560 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 00:59:31.266924 systemd[1]: Reached target network.target - Network. Jan 23 00:59:31.269052 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:59:31.270446 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:59:31.270871 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:59:31.270897 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:59:31.272165 systemd-networkd[1506]: eth0: DHCPv4 address 10.0.7.172/25, gateway 10.0.7.129 acquired from 10.0.7.129 Jan 23 00:59:31.274959 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 00:59:31.278258 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 00:59:31.283753 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jan 23 00:59:31.287263 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 00:59:31.295305 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 00:59:31.299182 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 00:59:31.306140 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:31.304969 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 00:59:31.306230 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 00:59:31.308212 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 00:59:31.317857 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 00:59:31.326073 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 00:59:31.330702 jq[1588]: false Jan 23 00:59:31.337477 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing passwd entry cache Jan 23 00:59:31.337485 oslogin_cache_refresh[1592]: Refreshing passwd entry cache Jan 23 00:59:31.337966 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 00:59:31.349616 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 00:59:31.357187 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 00:59:31.358865 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting users, quitting Jan 23 00:59:31.358865 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 23 00:59:31.358865 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Refreshing group entry cache Jan 23 00:59:31.358703 oslogin_cache_refresh[1592]: Failure getting users, quitting Jan 23 00:59:31.358721 oslogin_cache_refresh[1592]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 00:59:31.358760 oslogin_cache_refresh[1592]: Refreshing group entry cache Jan 23 00:59:31.363295 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 00:59:31.366657 oslogin_cache_refresh[1592]: Failure getting groups, quitting Jan 23 00:59:31.367290 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Failure getting groups, quitting Jan 23 00:59:31.367290 google_oslogin_nss_cache[1592]: oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 00:59:31.366668 oslogin_cache_refresh[1592]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 00:59:31.370882 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 00:59:31.373765 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 00:59:31.374797 extend-filesystems[1590]: Found /dev/vda6 Jan 23 00:59:31.375374 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 00:59:31.377020 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 00:59:31.385285 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 00:59:31.387487 extend-filesystems[1590]: Found /dev/vda9 Jan 23 00:59:31.398076 extend-filesystems[1590]: Checking size of /dev/vda9 Jan 23 00:59:31.398166 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 23 00:59:31.403340 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 00:59:31.406285 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 00:59:31.406584 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 00:59:31.406737 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 00:59:31.411054 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 00:59:31.411550 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 00:59:31.416567 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 00:59:31.416862 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 00:59:31.421284 jq[1613]: true Jan 23 00:59:31.438642 extend-filesystems[1590]: Resized partition /dev/vda9 Jan 23 00:59:31.441216 update_engine[1612]: I20260123 00:59:31.441025 1612 main.cc:92] Flatcar Update Engine starting Jan 23 00:59:31.450892 extend-filesystems[1633]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 00:59:31.452464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:59:31.461431 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 12499963 blocks Jan 23 00:59:31.466173 jq[1619]: true Jan 23 00:59:31.475725 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 00:59:31.477753 chronyd[1583]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 00:59:31.480149 tar[1618]: linux-amd64/LICENSE Jan 23 00:59:31.480495 tar[1618]: linux-amd64/helm Jan 23 00:59:31.481815 chronyd[1583]: Loaded seccomp filter (level 2) Jan 23 00:59:31.481947 systemd[1]: Started chronyd.service - NTP client/server. 
Jan 23 00:59:31.485777 (ntainerd)[1632]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 00:59:31.488502 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:59:31.489499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:59:31.493446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:59:31.510582 dbus-daemon[1586]: [system] SELinux support is enabled Jan 23 00:59:31.510735 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 00:59:31.517604 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 00:59:31.517636 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 00:59:31.518253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 00:59:31.518268 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 00:59:31.534340 update_engine[1612]: I20260123 00:59:31.533701 1612 update_check_scheduler.cc:74] Next update check in 8m44s Jan 23 00:59:31.544733 systemd[1]: Started update-engine.service - Update Engine. Jan 23 00:59:31.546987 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 00:59:31.550782 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 00:59:31.635819 systemd-logind[1602]: New seat seat0. 
Jan 23 00:59:31.646240 systemd-logind[1602]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 00:59:31.646258 systemd-logind[1602]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 00:59:31.646527 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 00:59:31.666380 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:59:31.666570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:59:31.671091 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:59:31.677187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:59:31.705191 containerd[1632]: time="2026-01-23T00:59:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 00:59:31.711851 bash[1663]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:59:31.713680 containerd[1632]: time="2026-01-23T00:59:31.713650942Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 00:59:31.714157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 00:59:31.736341 systemd[1]: Starting sshkeys.service... Jan 23 00:59:31.765039 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 00:59:31.768546 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 00:59:31.782149 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:31.802239 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 00:59:31.807785 containerd[1632]: time="2026-01-23T00:59:31.807028837Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.17µs" Jan 23 00:59:31.807785 containerd[1632]: time="2026-01-23T00:59:31.807063663Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 00:59:31.807785 containerd[1632]: time="2026-01-23T00:59:31.807080345Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 00:59:31.807785 containerd[1632]: time="2026-01-23T00:59:31.807509516Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 00:59:31.807785 containerd[1632]: time="2026-01-23T00:59:31.807527756Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 00:59:31.807785 containerd[1632]: time="2026-01-23T00:59:31.807549782Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:59:31.807785 containerd[1632]: time="2026-01-23T00:59:31.807588464Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:59:31.807785 containerd[1632]: time="2026-01-23T00:59:31.807597336Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:59:31.808692 containerd[1632]: time="2026-01-23T00:59:31.808670467Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:59:31.808692 containerd[1632]: time="2026-01-23T00:59:31.808690555Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:59:31.808746 containerd[1632]: time="2026-01-23T00:59:31.808703078Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:59:31.808746 containerd[1632]: time="2026-01-23T00:59:31.808710130Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 00:59:31.808780 containerd[1632]: time="2026-01-23T00:59:31.808771692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 00:59:31.811378 containerd[1632]: time="2026-01-23T00:59:31.811346391Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:59:31.811430 containerd[1632]: time="2026-01-23T00:59:31.811387978Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:59:31.811430 containerd[1632]: time="2026-01-23T00:59:31.811399196Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 00:59:31.811430 containerd[1632]: time="2026-01-23T00:59:31.811427250Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 00:59:31.815503 containerd[1632]: time="2026-01-23T00:59:31.815474013Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 00:59:31.815551 containerd[1632]: time="2026-01-23T00:59:31.815540298Z" level=info msg="metadata content store policy set" policy=shared Jan 23 00:59:31.854201 containerd[1632]: time="2026-01-23T00:59:31.854144088Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Jan 23 00:59:31.854201 containerd[1632]: time="2026-01-23T00:59:31.854204051Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854218321Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854228972Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854242055Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854251318Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854263404Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854272951Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854286934Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854295846Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854304735Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 00:59:31.854328 containerd[1632]: time="2026-01-23T00:59:31.854323172Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Jan 23 00:59:31.854481 containerd[1632]: time="2026-01-23T00:59:31.854430375Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 00:59:31.854481 containerd[1632]: time="2026-01-23T00:59:31.854448092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 00:59:31.854481 containerd[1632]: time="2026-01-23T00:59:31.854460903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 00:59:31.854481 containerd[1632]: time="2026-01-23T00:59:31.854470709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 00:59:31.854540 containerd[1632]: time="2026-01-23T00:59:31.854480735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 00:59:31.854540 containerd[1632]: time="2026-01-23T00:59:31.854489877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 00:59:31.854540 containerd[1632]: time="2026-01-23T00:59:31.854499160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 00:59:31.854540 containerd[1632]: time="2026-01-23T00:59:31.854507496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 00:59:31.854540 containerd[1632]: time="2026-01-23T00:59:31.854516474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 00:59:31.854540 containerd[1632]: time="2026-01-23T00:59:31.854525012Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 00:59:31.854540 containerd[1632]: time="2026-01-23T00:59:31.854533163Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 00:59:31.854655 containerd[1632]: 
time="2026-01-23T00:59:31.854577709Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 00:59:31.854655 containerd[1632]: time="2026-01-23T00:59:31.854590007Z" level=info msg="Start snapshots syncer" Jan 23 00:59:31.854655 containerd[1632]: time="2026-01-23T00:59:31.854607145Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 00:59:31.854884 containerd[1632]: time="2026-01-23T00:59:31.854846271Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 00:59:31.854985 containerd[1632]: time="2026-01-23T00:59:31.854890207Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 00:59:31.854985 containerd[1632]: time="2026-01-23T00:59:31.854939083Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 00:59:31.855050 containerd[1632]: time="2026-01-23T00:59:31.855021022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 00:59:31.855050 containerd[1632]: time="2026-01-23T00:59:31.855040819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 00:59:31.855091 containerd[1632]: time="2026-01-23T00:59:31.855049845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 00:59:31.855091 containerd[1632]: time="2026-01-23T00:59:31.855058759Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 00:59:31.855091 containerd[1632]: time="2026-01-23T00:59:31.855070197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 00:59:31.855091 containerd[1632]: time="2026-01-23T00:59:31.855078385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 00:59:31.855091 containerd[1632]: time="2026-01-23T00:59:31.855087335Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 00:59:31.855179 containerd[1632]: time="2026-01-23T00:59:31.855107414Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 00:59:31.855179 containerd[1632]: time="2026-01-23T00:59:31.855129120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 00:59:31.855179 containerd[1632]: time="2026-01-23T00:59:31.855138846Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 00:59:31.855179 containerd[1632]: time="2026-01-23T00:59:31.855163742Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:59:31.855248 containerd[1632]: time="2026-01-23T00:59:31.855181301Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:59:31.855248 containerd[1632]: time="2026-01-23T00:59:31.855188865Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:59:31.855248 containerd[1632]: time="2026-01-23T00:59:31.855196876Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:59:31.855248 containerd[1632]: time="2026-01-23T00:59:31.855203193Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 00:59:31.855248 containerd[1632]: time="2026-01-23T00:59:31.855210958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 00:59:31.855248 containerd[1632]: time="2026-01-23T00:59:31.855225187Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 00:59:31.855248 containerd[1632]: time="2026-01-23T00:59:31.855238016Z" level=info msg="runtime interface created" Jan 23 00:59:31.855248 containerd[1632]: 
time="2026-01-23T00:59:31.855242567Z" level=info msg="created NRI interface" Jan 23 00:59:31.855248 containerd[1632]: time="2026-01-23T00:59:31.855249642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 00:59:31.855384 containerd[1632]: time="2026-01-23T00:59:31.855259162Z" level=info msg="Connect containerd service" Jan 23 00:59:31.855384 containerd[1632]: time="2026-01-23T00:59:31.855274934Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 00:59:31.857137 containerd[1632]: time="2026-01-23T00:59:31.855812905Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:59:31.915208 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 00:59:31.918594 kernel: EXT4-fs (vda9): resized filesystem to 12499963 Jan 23 00:59:31.943137 extend-filesystems[1633]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 00:59:31.943137 extend-filesystems[1633]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 23 00:59:31.943137 extend-filesystems[1633]: The filesystem on /dev/vda9 is now 12499963 (4k) blocks long. Jan 23 00:59:31.948272 extend-filesystems[1590]: Resized filesystem in /dev/vda9 Jan 23 00:59:31.943863 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 00:59:31.945612 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.982851718Z" level=info msg="Start subscribing containerd event" Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.982901510Z" level=info msg="Start recovering state" Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.982996114Z" level=info msg="Start event monitor" Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.983006514Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.983015489Z" level=info msg="Start streaming server" Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.983026779Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.983033998Z" level=info msg="runtime interface starting up..." Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.983039311Z" level=info msg="starting plugins..." Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.983050700Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.983049377Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.983197929Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 00:59:31.984704 containerd[1632]: time="2026-01-23T00:59:31.983245494Z" level=info msg="containerd successfully booted in 0.278386s" Jan 23 00:59:31.983434 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 00:59:31.991340 sshd_keygen[1620]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 00:59:32.015851 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 00:59:32.021383 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 23 00:59:32.024458 systemd[1]: Started sshd@0-10.0.7.172:22-20.161.92.111:43210.service - OpenSSH per-connection server daemon (20.161.92.111:43210). Jan 23 00:59:32.042387 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 00:59:32.044183 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 00:59:32.047506 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 00:59:32.074379 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 00:59:32.077444 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 00:59:32.082263 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 00:59:32.084531 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 00:59:32.207636 tar[1618]: linux-amd64/README.md Jan 23 00:59:32.223404 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 00:59:32.653151 sshd[1711]: Accepted publickey for core from 20.161.92.111 port 43210 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 00:59:32.653140 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:59:32.666164 systemd-logind[1602]: New session 1 of user core. Jan 23 00:59:32.666488 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 00:59:32.667989 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 00:59:32.687865 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 00:59:32.699587 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 00:59:32.715440 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 00:59:32.717751 systemd-logind[1602]: New session c1 of user core. 
Jan 23 00:59:32.820230 systemd-networkd[1506]: eth0: Gained IPv6LL Jan 23 00:59:32.821593 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 00:59:32.824194 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 00:59:32.829187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:59:32.832058 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 00:59:32.832285 systemd[1726]: Queued start job for default target default.target. Jan 23 00:59:32.839076 systemd[1726]: Created slice app.slice - User Application Slice. Jan 23 00:59:32.839202 systemd[1726]: Reached target paths.target - Paths. Jan 23 00:59:32.839274 systemd[1726]: Reached target timers.target - Timers. Jan 23 00:59:32.841210 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 00:59:32.852913 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 00:59:32.853111 systemd[1726]: Reached target sockets.target - Sockets. Jan 23 00:59:32.853213 systemd[1726]: Reached target basic.target - Basic System. Jan 23 00:59:32.853297 systemd[1726]: Reached target default.target - Main User Target. Jan 23 00:59:32.853314 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 00:59:32.853402 systemd[1726]: Startup finished in 130ms. Jan 23 00:59:32.858800 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 00:59:32.861400 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 00:59:32.937697 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:32.937785 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:33.296348 systemd[1]: Started sshd@1-10.0.7.172:22-20.161.92.111:43214.service - OpenSSH per-connection server daemon (20.161.92.111:43214). Jan 23 00:59:33.809220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:59:33.816456 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:59:33.910440 sshd[1751]: Accepted publickey for core from 20.161.92.111 port 43214 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 00:59:33.912496 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:59:33.918329 systemd-logind[1602]: New session 2 of user core. Jan 23 00:59:33.923298 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 00:59:34.338088 sshd[1763]: Connection closed by 20.161.92.111 port 43214 Jan 23 00:59:34.339301 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 23 00:59:34.343182 systemd[1]: sshd@1-10.0.7.172:22-20.161.92.111:43214.service: Deactivated successfully. Jan 23 00:59:34.345519 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 00:59:34.348726 systemd-logind[1602]: Session 2 logged out. Waiting for processes to exit. Jan 23 00:59:34.350213 systemd-logind[1602]: Removed session 2. Jan 23 00:59:34.447528 systemd[1]: Started sshd@2-10.0.7.172:22-20.161.92.111:43224.service - OpenSSH per-connection server daemon (20.161.92.111:43224). Jan 23 00:59:34.476144 kubelet[1759]: E0123 00:59:34.475976 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:59:34.477833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:59:34.477934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:59:34.478521 systemd[1]: kubelet.service: Consumed 963ms CPU time, 269.4M memory peak. 
Jan 23 00:59:34.950134 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:34.950207 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:35.052618 sshd[1771]: Accepted publickey for core from 20.161.92.111 port 43224 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 00:59:35.053765 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:59:35.057340 systemd-logind[1602]: New session 3 of user core. Jan 23 00:59:35.064265 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 00:59:35.476674 sshd[1777]: Connection closed by 20.161.92.111 port 43224 Jan 23 00:59:35.475979 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Jan 23 00:59:35.478648 systemd[1]: sshd@2-10.0.7.172:22-20.161.92.111:43224.service: Deactivated successfully. Jan 23 00:59:35.480275 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 00:59:35.482039 systemd-logind[1602]: Session 3 logged out. Waiting for processes to exit. Jan 23 00:59:35.482660 systemd-logind[1602]: Removed session 3. 
Jan 23 00:59:38.967044 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:38.967124 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 00:59:38.971703 coreos-metadata[1585]: Jan 23 00:59:38.971 WARN failed to locate config-drive, using the metadata service API instead Jan 23 00:59:38.972939 coreos-metadata[1678]: Jan 23 00:59:38.972 WARN failed to locate config-drive, using the metadata service API instead Jan 23 00:59:38.990081 coreos-metadata[1678]: Jan 23 00:59:38.990 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 23 00:59:38.990802 coreos-metadata[1585]: Jan 23 00:59:38.990 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 23 00:59:40.269612 coreos-metadata[1678]: Jan 23 00:59:40.269 INFO Fetch successful Jan 23 00:59:40.269612 coreos-metadata[1678]: Jan 23 00:59:40.269 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 00:59:40.813712 coreos-metadata[1585]: Jan 23 00:59:40.813 INFO Fetch successful Jan 23 00:59:40.813712 coreos-metadata[1585]: Jan 23 00:59:40.813 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 00:59:41.399228 coreos-metadata[1678]: Jan 23 00:59:41.399 INFO Fetch successful Jan 23 00:59:41.402543 unknown[1678]: wrote ssh authorized keys file for user: core Jan 23 00:59:41.405282 coreos-metadata[1585]: Jan 23 00:59:41.405 INFO Fetch successful Jan 23 00:59:41.405282 coreos-metadata[1585]: Jan 23 00:59:41.405 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 23 00:59:41.426667 update-ssh-keys[1791]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:59:41.427689 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 00:59:41.429166 systemd[1]: Finished sshkeys.service. 
Jan 23 00:59:41.980015 coreos-metadata[1585]: Jan 23 00:59:41.979 INFO Fetch successful Jan 23 00:59:41.980015 coreos-metadata[1585]: Jan 23 00:59:41.980 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 23 00:59:42.561373 coreos-metadata[1585]: Jan 23 00:59:42.561 INFO Fetch successful Jan 23 00:59:42.561373 coreos-metadata[1585]: Jan 23 00:59:42.561 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 23 00:59:43.157882 coreos-metadata[1585]: Jan 23 00:59:43.157 INFO Fetch successful Jan 23 00:59:43.157882 coreos-metadata[1585]: Jan 23 00:59:43.157 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 23 00:59:44.692603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 00:59:44.694035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:59:44.815317 coreos-metadata[1585]: Jan 23 00:59:44.815 INFO Fetch successful Jan 23 00:59:44.833246 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 00:59:44.833528 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 00:59:44.835663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:59:44.836594 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 00:59:44.836899 systemd[1]: Startup finished in 3.525s (kernel) + 14.297s (initrd) + 15.981s (userspace) = 33.803s. 
Jan 23 00:59:44.842317 (kubelet)[1806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:59:44.871446 kubelet[1806]: E0123 00:59:44.871403 1806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:59:44.874824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:59:44.875029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:59:44.875345 systemd[1]: kubelet.service: Consumed 140ms CPU time, 108.4M memory peak. Jan 23 00:59:45.585421 systemd[1]: Started sshd@3-10.0.7.172:22-20.161.92.111:42312.service - OpenSSH per-connection server daemon (20.161.92.111:42312). Jan 23 00:59:46.193162 sshd[1815]: Accepted publickey for core from 20.161.92.111 port 42312 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 00:59:46.193990 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:59:46.198920 systemd-logind[1602]: New session 4 of user core. Jan 23 00:59:46.203306 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 00:59:46.617738 sshd[1818]: Connection closed by 20.161.92.111 port 42312 Jan 23 00:59:46.618356 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jan 23 00:59:46.621970 systemd[1]: sshd@3-10.0.7.172:22-20.161.92.111:42312.service: Deactivated successfully. Jan 23 00:59:46.623814 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 00:59:46.624627 systemd-logind[1602]: Session 4 logged out. Waiting for processes to exit. Jan 23 00:59:46.625868 systemd-logind[1602]: Removed session 4. 
Jan 23 00:59:46.734587 systemd[1]: Started sshd@4-10.0.7.172:22-20.161.92.111:42316.service - OpenSSH per-connection server daemon (20.161.92.111:42316). Jan 23 00:59:47.345166 sshd[1824]: Accepted publickey for core from 20.161.92.111 port 42316 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 00:59:47.346080 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:59:47.350184 systemd-logind[1602]: New session 5 of user core. Jan 23 00:59:47.357380 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 00:59:47.765981 sshd[1827]: Connection closed by 20.161.92.111 port 42316 Jan 23 00:59:47.766485 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Jan 23 00:59:47.769402 systemd[1]: sshd@4-10.0.7.172:22-20.161.92.111:42316.service: Deactivated successfully. Jan 23 00:59:47.770595 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 00:59:47.771065 systemd-logind[1602]: Session 5 logged out. Waiting for processes to exit. Jan 23 00:59:47.771791 systemd-logind[1602]: Removed session 5. Jan 23 00:59:47.874939 systemd[1]: Started sshd@5-10.0.7.172:22-20.161.92.111:42320.service - OpenSSH per-connection server daemon (20.161.92.111:42320). Jan 23 00:59:48.477923 sshd[1833]: Accepted publickey for core from 20.161.92.111 port 42320 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 00:59:48.478994 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:59:48.482426 systemd-logind[1602]: New session 6 of user core. Jan 23 00:59:48.493382 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 00:59:48.903434 sshd[1836]: Connection closed by 20.161.92.111 port 42320 Jan 23 00:59:48.902825 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Jan 23 00:59:48.906446 systemd[1]: sshd@5-10.0.7.172:22-20.161.92.111:42320.service: Deactivated successfully. 
Jan 23 00:59:48.907957 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 00:59:48.908853 systemd-logind[1602]: Session 6 logged out. Waiting for processes to exit. Jan 23 00:59:48.909746 systemd-logind[1602]: Removed session 6. Jan 23 00:59:49.006740 systemd[1]: Started sshd@6-10.0.7.172:22-20.161.92.111:42322.service - OpenSSH per-connection server daemon (20.161.92.111:42322). Jan 23 00:59:49.606745 sshd[1842]: Accepted publickey for core from 20.161.92.111 port 42322 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 00:59:49.608018 sshd-session[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:59:49.612405 systemd-logind[1602]: New session 7 of user core. Jan 23 00:59:49.627502 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 00:59:49.952469 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 00:59:49.952677 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:59:49.962065 sudo[1846]: pam_unix(sudo:session): session closed for user root Jan 23 00:59:50.058361 sshd[1845]: Connection closed by 20.161.92.111 port 42322 Jan 23 00:59:50.057648 sshd-session[1842]: pam_unix(sshd:session): session closed for user core Jan 23 00:59:50.061368 systemd[1]: sshd@6-10.0.7.172:22-20.161.92.111:42322.service: Deactivated successfully. Jan 23 00:59:50.062911 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 00:59:50.063586 systemd-logind[1602]: Session 7 logged out. Waiting for processes to exit. Jan 23 00:59:50.064887 systemd-logind[1602]: Removed session 7. Jan 23 00:59:50.169084 systemd[1]: Started sshd@7-10.0.7.172:22-20.161.92.111:42324.service - OpenSSH per-connection server daemon (20.161.92.111:42324). 
Jan 23 00:59:50.798205 sshd[1852]: Accepted publickey for core from 20.161.92.111 port 42324 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 00:59:50.799193 sshd-session[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:59:50.803049 systemd-logind[1602]: New session 8 of user core. Jan 23 00:59:50.808268 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 00:59:51.135794 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 00:59:51.136447 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:59:51.140740 sudo[1857]: pam_unix(sudo:session): session closed for user root Jan 23 00:59:51.144996 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 00:59:51.145225 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:59:51.153772 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:59:51.190303 augenrules[1879]: No rules Jan 23 00:59:51.191211 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:59:51.191529 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:59:51.192744 sudo[1856]: pam_unix(sudo:session): session closed for user root Jan 23 00:59:51.291646 sshd[1855]: Connection closed by 20.161.92.111 port 42324 Jan 23 00:59:51.291514 sshd-session[1852]: pam_unix(sshd:session): session closed for user core Jan 23 00:59:51.295399 systemd[1]: sshd@7-10.0.7.172:22-20.161.92.111:42324.service: Deactivated successfully. Jan 23 00:59:51.296710 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 00:59:51.297380 systemd-logind[1602]: Session 8 logged out. Waiting for processes to exit. Jan 23 00:59:51.298432 systemd-logind[1602]: Removed session 8. 
Jan 23 00:59:51.396025 systemd[1]: Started sshd@8-10.0.7.172:22-20.161.92.111:42340.service - OpenSSH per-connection server daemon (20.161.92.111:42340). Jan 23 00:59:52.002565 sshd[1888]: Accepted publickey for core from 20.161.92.111 port 42340 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 00:59:52.003594 sshd-session[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:59:52.007121 systemd-logind[1602]: New session 9 of user core. Jan 23 00:59:52.017275 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 00:59:52.331644 sudo[1892]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 00:59:52.332274 sudo[1892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:59:52.651415 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 00:59:52.663441 (dockerd)[1909]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 00:59:52.908756 dockerd[1909]: time="2026-01-23T00:59:52.908297663Z" level=info msg="Starting up" Jan 23 00:59:52.909870 dockerd[1909]: time="2026-01-23T00:59:52.909585478Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 00:59:52.920348 dockerd[1909]: time="2026-01-23T00:59:52.920316680Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 00:59:52.938012 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3213967301-merged.mount: Deactivated successfully. Jan 23 00:59:52.971408 dockerd[1909]: time="2026-01-23T00:59:52.971357242Z" level=info msg="Loading containers: start." 
Jan 23 00:59:52.982163 kernel: Initializing XFRM netlink socket Jan 23 00:59:53.196638 systemd-networkd[1506]: docker0: Link UP Jan 23 00:59:53.202034 dockerd[1909]: time="2026-01-23T00:59:53.202005619Z" level=info msg="Loading containers: done." Jan 23 00:59:53.211817 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2233118869-merged.mount: Deactivated successfully. Jan 23 00:59:53.213640 dockerd[1909]: time="2026-01-23T00:59:53.213609701Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 00:59:53.213703 dockerd[1909]: time="2026-01-23T00:59:53.213671942Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 00:59:53.213735 dockerd[1909]: time="2026-01-23T00:59:53.213724104Z" level=info msg="Initializing buildkit" Jan 23 00:59:53.232776 dockerd[1909]: time="2026-01-23T00:59:53.232701377Z" level=info msg="Completed buildkit initialization" Jan 23 00:59:53.239314 dockerd[1909]: time="2026-01-23T00:59:53.239281863Z" level=info msg="Daemon has completed initialization" Jan 23 00:59:53.239387 dockerd[1909]: time="2026-01-23T00:59:53.239320315Z" level=info msg="API listen on /run/docker.sock" Jan 23 00:59:53.239856 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 00:59:54.344185 containerd[1632]: time="2026-01-23T00:59:54.344148460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 00:59:54.942343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 00:59:54.946156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:59:54.987593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount136880176.mount: Deactivated successfully. 
Jan 23 00:59:55.078621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:59:55.087619 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:59:55.128967 kubelet[2139]: E0123 00:59:55.128926 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:59:55.132015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:59:55.132388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:59:55.132980 systemd[1]: kubelet.service: Consumed 130ms CPU time, 110.2M memory peak. Jan 23 00:59:55.267848 chronyd[1583]: Selected source PHC0 Jan 23 00:59:56.088154 containerd[1632]: time="2026-01-23T00:59:56.088015531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:56.090016 containerd[1632]: time="2026-01-23T00:59:56.089989012Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114810" Jan 23 00:59:56.090958 containerd[1632]: time="2026-01-23T00:59:56.090924043Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:56.093937 containerd[1632]: time="2026-01-23T00:59:56.093903695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:56.094760 containerd[1632]: 
time="2026-01-23T00:59:56.094641462Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.750462331s" Jan 23 00:59:56.094760 containerd[1632]: time="2026-01-23T00:59:56.094667036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 00:59:56.095511 containerd[1632]: time="2026-01-23T00:59:56.095498882Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 00:59:57.427409 containerd[1632]: time="2026-01-23T00:59:57.427349446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:57.429579 containerd[1632]: time="2026-01-23T00:59:57.429408359Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016801" Jan 23 00:59:57.430332 containerd[1632]: time="2026-01-23T00:59:57.430307863Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:57.433258 containerd[1632]: time="2026-01-23T00:59:57.433228373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:57.433993 containerd[1632]: time="2026-01-23T00:59:57.433958212Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id 
\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.338393682s" Jan 23 00:59:57.434023 containerd[1632]: time="2026-01-23T00:59:57.433991832Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 00:59:57.434622 containerd[1632]: time="2026-01-23T00:59:57.434595207Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 00:59:58.673246 containerd[1632]: time="2026-01-23T00:59:58.672963707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:58.674280 containerd[1632]: time="2026-01-23T00:59:58.674187608Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158122" Jan 23 00:59:58.674965 containerd[1632]: time="2026-01-23T00:59:58.674905551Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:58.677836 containerd[1632]: time="2026-01-23T00:59:58.677803032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:58.680415 containerd[1632]: time="2026-01-23T00:59:58.679743320Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.245099747s" Jan 23 00:59:58.680415 containerd[1632]: time="2026-01-23T00:59:58.679791829Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 00:59:58.680950 containerd[1632]: time="2026-01-23T00:59:58.680928144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 00:59:59.715269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3477097545.mount: Deactivated successfully. Jan 23 01:00:00.142207 containerd[1632]: time="2026-01-23T01:00:00.142002017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:00.148923 containerd[1632]: time="2026-01-23T01:00:00.148892548Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930122" Jan 23 01:00:00.150440 containerd[1632]: time="2026-01-23T01:00:00.150400404Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:00.159197 containerd[1632]: time="2026-01-23T01:00:00.158678155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:00.159197 containerd[1632]: time="2026-01-23T01:00:00.159147820Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.47818908s" Jan 23 01:00:00.159197 containerd[1632]: time="2026-01-23T01:00:00.159167575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 01:00:00.159672 containerd[1632]: time="2026-01-23T01:00:00.159642206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 01:00:00.831519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3139946264.mount: Deactivated successfully. Jan 23 01:00:01.738267 containerd[1632]: time="2026-01-23T01:00:01.737546936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:01.739721 containerd[1632]: time="2026-01-23T01:00:01.739655524Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942330" Jan 23 01:00:01.747460 containerd[1632]: time="2026-01-23T01:00:01.747384332Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:01.754442 containerd[1632]: time="2026-01-23T01:00:01.754367917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:01.755466 containerd[1632]: time="2026-01-23T01:00:01.754987228Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.595195938s" Jan 23 01:00:01.755466 containerd[1632]: time="2026-01-23T01:00:01.755024082Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 01:00:01.755597 containerd[1632]: time="2026-01-23T01:00:01.755576613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 01:00:02.331695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383653591.mount: Deactivated successfully. Jan 23 01:00:02.345726 containerd[1632]: time="2026-01-23T01:00:02.345662357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:00:02.349553 containerd[1632]: time="2026-01-23T01:00:02.349388898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Jan 23 01:00:02.351498 containerd[1632]: time="2026-01-23T01:00:02.351466445Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:00:02.355175 containerd[1632]: time="2026-01-23T01:00:02.354628243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:00:02.355175 containerd[1632]: time="2026-01-23T01:00:02.355135455Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.535599ms" Jan 23 01:00:02.355285 containerd[1632]: time="2026-01-23T01:00:02.355272281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 01:00:02.355871 containerd[1632]: time="2026-01-23T01:00:02.355846029Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 01:00:02.930773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132669601.mount: Deactivated successfully. Jan 23 01:00:04.727191 containerd[1632]: time="2026-01-23T01:00:04.726423949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:04.728905 containerd[1632]: time="2026-01-23T01:00:04.728888161Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926289" Jan 23 01:00:04.730773 containerd[1632]: time="2026-01-23T01:00:04.730758333Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:04.733419 containerd[1632]: time="2026-01-23T01:00:04.733392842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:04.734271 containerd[1632]: time="2026-01-23T01:00:04.734251805Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.378315874s" Jan 23 01:00:04.734323 containerd[1632]: time="2026-01-23T01:00:04.734276317Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 01:00:05.194357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 01:00:05.195982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:05.324464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:05.328430 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:00:05.367522 kubelet[2331]: E0123 01:00:05.367478 2331 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:00:05.369688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:00:05.369815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:00:05.370302 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.5M memory peak. Jan 23 01:00:08.134679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:08.134804 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.5M memory peak. Jan 23 01:00:08.137077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:08.163123 systemd[1]: Reload requested from client PID 2360 ('systemctl') (unit session-9.scope)... Jan 23 01:00:08.163135 systemd[1]: Reloading... Jan 23 01:00:08.254146 zram_generator::config[2403]: No configuration found. 
Jan 23 01:00:08.425058 systemd[1]: Reloading finished in 261 ms. Jan 23 01:00:08.481465 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:00:08.481623 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:00:08.481863 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:08.481939 systemd[1]: kubelet.service: Consumed 81ms CPU time, 98.4M memory peak. Jan 23 01:00:08.483027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:08.599384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:08.607361 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:00:08.638565 kubelet[2457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:00:08.638852 kubelet[2457]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:00:08.638885 kubelet[2457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 01:00:08.638978 kubelet[2457]: I0123 01:00:08.638960 2457 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:00:09.372166 kubelet[2457]: I0123 01:00:09.371394 2457 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:00:09.372166 kubelet[2457]: I0123 01:00:09.371426 2457 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:00:09.372166 kubelet[2457]: I0123 01:00:09.371996 2457 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:00:09.408478 kubelet[2457]: I0123 01:00:09.408447 2457 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:00:09.408781 kubelet[2457]: E0123 01:00:09.408761 2457 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.7.172:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.7.172:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:00:09.416245 kubelet[2457]: I0123 01:00:09.416226 2457 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:00:09.419127 kubelet[2457]: I0123 01:00:09.419078 2457 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:00:09.419373 kubelet[2457]: I0123 01:00:09.419358 2457 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:00:09.419517 kubelet[2457]: I0123 01:00:09.419405 2457 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-n-6e52943716","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:00:09.419618 kubelet[2457]: I0123 01:00:09.419613 2457 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
01:00:09.419646 kubelet[2457]: I0123 01:00:09.419643 2457 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:00:09.419771 kubelet[2457]: I0123 01:00:09.419765 2457 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:00:09.422578 kubelet[2457]: I0123 01:00:09.422568 2457 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:00:09.422639 kubelet[2457]: I0123 01:00:09.422633 2457 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:00:09.422682 kubelet[2457]: I0123 01:00:09.422679 2457 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:00:09.422716 kubelet[2457]: I0123 01:00:09.422712 2457 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:00:09.428872 kubelet[2457]: E0123 01:00:09.428478 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.7.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-n-6e52943716&limit=500&resourceVersion=0\": dial tcp 10.0.7.172:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:00:09.428872 kubelet[2457]: E0123 01:00:09.428758 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.7.172:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.7.172:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:00:09.429212 kubelet[2457]: I0123 01:00:09.429197 2457 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:00:09.430155 kubelet[2457]: I0123 01:00:09.429554 2457 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:00:09.430561 
kubelet[2457]: W0123 01:00:09.430550 2457 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:00:09.433363 kubelet[2457]: I0123 01:00:09.433352 2457 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:00:09.433459 kubelet[2457]: I0123 01:00:09.433453 2457 server.go:1289] "Started kubelet" Jan 23 01:00:09.437477 kubelet[2457]: I0123 01:00:09.437464 2457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:00:09.441577 kubelet[2457]: I0123 01:00:09.441549 2457 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:00:09.441961 kubelet[2457]: I0123 01:00:09.441949 2457 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:00:09.444658 kubelet[2457]: I0123 01:00:09.444649 2457 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:00:09.444890 kubelet[2457]: E0123 01:00:09.444880 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-6e52943716\" not found" Jan 23 01:00:09.445411 kubelet[2457]: I0123 01:00:09.445402 2457 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:00:09.445518 kubelet[2457]: I0123 01:00:09.445513 2457 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:00:09.445811 kubelet[2457]: I0123 01:00:09.445772 2457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:00:09.446070 kubelet[2457]: I0123 01:00:09.446062 2457 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:00:09.448262 kubelet[2457]: E0123 01:00:09.448242 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.7.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.7.172:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:00:09.448432 kubelet[2457]: E0123 01:00:09.448412 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.7.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-6e52943716?timeout=10s\": dial tcp 10.0.7.172:6443: connect: connection refused" interval="200ms" Jan 23 01:00:09.452101 kubelet[2457]: I0123 01:00:09.451520 2457 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:00:09.452101 kubelet[2457]: E0123 01:00:09.448701 2457 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.7.172:6443/api/v1/namespaces/default/events\": dial tcp 10.0.7.172:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-n-6e52943716.188d365ecdc35be7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-n-6e52943716,UID:ci-4459-2-2-n-6e52943716,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-n-6e52943716,},FirstTimestamp:2026-01-23 01:00:09.433431015 +0000 UTC m=+0.822732216,LastTimestamp:2026-01-23 01:00:09.433431015 +0000 UTC m=+0.822732216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-n-6e52943716,}" Jan 23 01:00:09.453054 kubelet[2457]: I0123 01:00:09.453040 2457 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:00:09.454572 kubelet[2457]: I0123 01:00:09.454557 2457 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:00:09.456237 kubelet[2457]: I0123 01:00:09.456226 2457 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:00:09.465535 kubelet[2457]: I0123 01:00:09.465282 2457 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 01:00:09.466092 kubelet[2457]: I0123 01:00:09.466069 2457 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 01:00:09.466092 kubelet[2457]: I0123 01:00:09.466092 2457 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:00:09.466150 kubelet[2457]: I0123 01:00:09.466107 2457 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:00:09.466150 kubelet[2457]: I0123 01:00:09.466128 2457 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:00:09.466188 kubelet[2457]: E0123 01:00:09.466158 2457 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:00:09.473464 kubelet[2457]: E0123 01:00:09.473231 2457 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:00:09.473464 kubelet[2457]: E0123 01:00:09.473370 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.7.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.7.172:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:00:09.477808 kubelet[2457]: I0123 01:00:09.477793 2457 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:00:09.477808 kubelet[2457]: I0123 01:00:09.477803 2457 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:00:09.477917 kubelet[2457]: I0123 01:00:09.477815 2457 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:00:09.481128 kubelet[2457]: I0123 01:00:09.481097 2457 policy_none.go:49] "None policy: Start" Jan 23 01:00:09.481188 kubelet[2457]: I0123 01:00:09.481139 2457 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:00:09.481188 kubelet[2457]: I0123 01:00:09.481149 2457 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:00:09.486183 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:00:09.495368 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:00:09.497745 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 01:00:09.502762 kubelet[2457]: E0123 01:00:09.502740 2457 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:00:09.502894 kubelet[2457]: I0123 01:00:09.502884 2457 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:00:09.503342 kubelet[2457]: I0123 01:00:09.502905 2457 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:00:09.503342 kubelet[2457]: I0123 01:00:09.503174 2457 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:00:09.505240 kubelet[2457]: E0123 01:00:09.505226 2457 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:00:09.505348 kubelet[2457]: E0123 01:00:09.505325 2457 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-n-6e52943716\" not found" Jan 23 01:00:09.575161 systemd[1]: Created slice kubepods-burstable-pod425e8997713e272c7ba57c9b39853339.slice - libcontainer container kubepods-burstable-pod425e8997713e272c7ba57c9b39853339.slice. Jan 23 01:00:09.581951 kubelet[2457]: E0123 01:00:09.581660 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-6e52943716\" not found" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.586025 systemd[1]: Created slice kubepods-burstable-pod7f52289656aa79db4ff53b2df9f9cdc7.slice - libcontainer container kubepods-burstable-pod7f52289656aa79db4ff53b2df9f9cdc7.slice. 
Jan 23 01:00:09.588131 kubelet[2457]: E0123 01:00:09.588009 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-6e52943716\" not found" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.590374 systemd[1]: Created slice kubepods-burstable-podd5f038dd288aa4b540d863768f6e6f7e.slice - libcontainer container kubepods-burstable-podd5f038dd288aa4b540d863768f6e6f7e.slice. Jan 23 01:00:09.591661 kubelet[2457]: E0123 01:00:09.591638 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-6e52943716\" not found" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.605334 kubelet[2457]: I0123 01:00:09.605022 2457 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.605334 kubelet[2457]: E0123 01:00:09.605302 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.7.172:6443/api/v1/nodes\": dial tcp 10.0.7.172:6443: connect: connection refused" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.646641 kubelet[2457]: I0123 01:00:09.646546 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.647815 kubelet[2457]: I0123 01:00:09.647798 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/425e8997713e272c7ba57c9b39853339-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-n-6e52943716\" (UID: \"425e8997713e272c7ba57c9b39853339\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.647936 kubelet[2457]: I0123 
01:00:09.647923 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/425e8997713e272c7ba57c9b39853339-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-n-6e52943716\" (UID: \"425e8997713e272c7ba57c9b39853339\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.647994 kubelet[2457]: I0123 01:00:09.647985 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/425e8997713e272c7ba57c9b39853339-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-n-6e52943716\" (UID: \"425e8997713e272c7ba57c9b39853339\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.648051 kubelet[2457]: I0123 01:00:09.648044 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.648218 kubelet[2457]: I0123 01:00:09.648209 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.650016 kubelet[2457]: E0123 01:00:09.649989 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.7.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-6e52943716?timeout=10s\": dial tcp 10.0.7.172:6443: connect: connection refused" 
interval="400ms" Jan 23 01:00:09.748765 kubelet[2457]: I0123 01:00:09.748667 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.748765 kubelet[2457]: I0123 01:00:09.748705 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.748765 kubelet[2457]: I0123 01:00:09.748724 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5f038dd288aa4b540d863768f6e6f7e-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-n-6e52943716\" (UID: \"d5f038dd288aa4b540d863768f6e6f7e\") " pod="kube-system/kube-scheduler-ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.807088 kubelet[2457]: I0123 01:00:09.807057 2457 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.807402 kubelet[2457]: E0123 01:00:09.807380 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.7.172:6443/api/v1/nodes\": dial tcp 10.0.7.172:6443: connect: connection refused" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:09.884045 containerd[1632]: time="2026-01-23T01:00:09.883494338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-n-6e52943716,Uid:425e8997713e272c7ba57c9b39853339,Namespace:kube-system,Attempt:0,}" 
Jan 23 01:00:09.888844 containerd[1632]: time="2026-01-23T01:00:09.888821259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-n-6e52943716,Uid:7f52289656aa79db4ff53b2df9f9cdc7,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:09.892542 containerd[1632]: time="2026-01-23T01:00:09.892424050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-n-6e52943716,Uid:d5f038dd288aa4b540d863768f6e6f7e,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:09.934947 containerd[1632]: time="2026-01-23T01:00:09.934878625Z" level=info msg="connecting to shim 4ce1f8a48ed5ffb45ba91499d31d2de8b0f7113f9aad17414bb21f453a3fb5d4" address="unix:///run/containerd/s/652e54625ba14c6a3141f4610a554a682df20a9c5e30032af709bb43d6f46b89" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:09.938476 containerd[1632]: time="2026-01-23T01:00:09.938166582Z" level=info msg="connecting to shim 113c632f74b4adc2f9fbaf52ba40a9167efa38262ef91db0bdd94520a458f171" address="unix:///run/containerd/s/54bf270d0f40c0258c488bf472c6c80091db2f03fdd114661c877a487365c996" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:09.945505 containerd[1632]: time="2026-01-23T01:00:09.945443502Z" level=info msg="connecting to shim 3fd93cedb0f47da64875220e3e2c24c171c8a6c116a748a6fb8da24ebd9f7991" address="unix:///run/containerd/s/b37a963dda18a708dc6429a4718549f3aa30afebe272b939dfd2ac159c48df34" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:09.974268 systemd[1]: Started cri-containerd-4ce1f8a48ed5ffb45ba91499d31d2de8b0f7113f9aad17414bb21f453a3fb5d4.scope - libcontainer container 4ce1f8a48ed5ffb45ba91499d31d2de8b0f7113f9aad17414bb21f453a3fb5d4. Jan 23 01:00:09.977588 systemd[1]: Started cri-containerd-113c632f74b4adc2f9fbaf52ba40a9167efa38262ef91db0bdd94520a458f171.scope - libcontainer container 113c632f74b4adc2f9fbaf52ba40a9167efa38262ef91db0bdd94520a458f171. 
Jan 23 01:00:09.980052 systemd[1]: Started cri-containerd-3fd93cedb0f47da64875220e3e2c24c171c8a6c116a748a6fb8da24ebd9f7991.scope - libcontainer container 3fd93cedb0f47da64875220e3e2c24c171c8a6c116a748a6fb8da24ebd9f7991. Jan 23 01:00:10.031603 containerd[1632]: time="2026-01-23T01:00:10.031564623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-n-6e52943716,Uid:d5f038dd288aa4b540d863768f6e6f7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ce1f8a48ed5ffb45ba91499d31d2de8b0f7113f9aad17414bb21f453a3fb5d4\"" Jan 23 01:00:10.040730 containerd[1632]: time="2026-01-23T01:00:10.040691051Z" level=info msg="CreateContainer within sandbox \"4ce1f8a48ed5ffb45ba91499d31d2de8b0f7113f9aad17414bb21f453a3fb5d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:00:10.050706 kubelet[2457]: E0123 01:00:10.050584 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.7.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-6e52943716?timeout=10s\": dial tcp 10.0.7.172:6443: connect: connection refused" interval="800ms" Jan 23 01:00:10.058712 containerd[1632]: time="2026-01-23T01:00:10.058670273Z" level=info msg="Container 1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:10.065367 containerd[1632]: time="2026-01-23T01:00:10.065237000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-n-6e52943716,Uid:425e8997713e272c7ba57c9b39853339,Namespace:kube-system,Attempt:0,} returns sandbox id \"113c632f74b4adc2f9fbaf52ba40a9167efa38262ef91db0bdd94520a458f171\"" Jan 23 01:00:10.069555 containerd[1632]: time="2026-01-23T01:00:10.069526482Z" level=info msg="CreateContainer within sandbox \"113c632f74b4adc2f9fbaf52ba40a9167efa38262ef91db0bdd94520a458f171\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:00:10.069818 
containerd[1632]: time="2026-01-23T01:00:10.069753928Z" level=info msg="CreateContainer within sandbox \"4ce1f8a48ed5ffb45ba91499d31d2de8b0f7113f9aad17414bb21f453a3fb5d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032\"" Jan 23 01:00:10.070527 containerd[1632]: time="2026-01-23T01:00:10.070504809Z" level=info msg="StartContainer for \"1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032\"" Jan 23 01:00:10.071578 containerd[1632]: time="2026-01-23T01:00:10.071548180Z" level=info msg="connecting to shim 1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032" address="unix:///run/containerd/s/652e54625ba14c6a3141f4610a554a682df20a9c5e30032af709bb43d6f46b89" protocol=ttrpc version=3 Jan 23 01:00:10.076195 containerd[1632]: time="2026-01-23T01:00:10.076133887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-n-6e52943716,Uid:7f52289656aa79db4ff53b2df9f9cdc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fd93cedb0f47da64875220e3e2c24c171c8a6c116a748a6fb8da24ebd9f7991\"" Jan 23 01:00:10.080412 containerd[1632]: time="2026-01-23T01:00:10.080388190Z" level=info msg="CreateContainer within sandbox \"3fd93cedb0f47da64875220e3e2c24c171c8a6c116a748a6fb8da24ebd9f7991\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:00:10.083063 containerd[1632]: time="2026-01-23T01:00:10.083026410Z" level=info msg="Container 047113ea13e3c747e9dcf69a2751309a8eb98d3bc6e83bacd845e897da519d18: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:10.093799 containerd[1632]: time="2026-01-23T01:00:10.093759927Z" level=info msg="Container 61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:10.095322 systemd[1]: Started cri-containerd-1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032.scope - libcontainer 
container 1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032. Jan 23 01:00:10.099671 containerd[1632]: time="2026-01-23T01:00:10.098939178Z" level=info msg="CreateContainer within sandbox \"113c632f74b4adc2f9fbaf52ba40a9167efa38262ef91db0bdd94520a458f171\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"047113ea13e3c747e9dcf69a2751309a8eb98d3bc6e83bacd845e897da519d18\"" Jan 23 01:00:10.099912 containerd[1632]: time="2026-01-23T01:00:10.099890361Z" level=info msg="StartContainer for \"047113ea13e3c747e9dcf69a2751309a8eb98d3bc6e83bacd845e897da519d18\"" Jan 23 01:00:10.101048 containerd[1632]: time="2026-01-23T01:00:10.101019167Z" level=info msg="CreateContainer within sandbox \"3fd93cedb0f47da64875220e3e2c24c171c8a6c116a748a6fb8da24ebd9f7991\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463\"" Jan 23 01:00:10.101449 containerd[1632]: time="2026-01-23T01:00:10.101377507Z" level=info msg="StartContainer for \"61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463\"" Jan 23 01:00:10.101968 containerd[1632]: time="2026-01-23T01:00:10.101937535Z" level=info msg="connecting to shim 047113ea13e3c747e9dcf69a2751309a8eb98d3bc6e83bacd845e897da519d18" address="unix:///run/containerd/s/54bf270d0f40c0258c488bf472c6c80091db2f03fdd114661c877a487365c996" protocol=ttrpc version=3 Jan 23 01:00:10.102535 containerd[1632]: time="2026-01-23T01:00:10.102512353Z" level=info msg="connecting to shim 61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463" address="unix:///run/containerd/s/b37a963dda18a708dc6429a4718549f3aa30afebe272b939dfd2ac159c48df34" protocol=ttrpc version=3 Jan 23 01:00:10.126348 systemd[1]: Started cri-containerd-61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463.scope - libcontainer container 61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463. 
Jan 23 01:00:10.134250 systemd[1]: Started cri-containerd-047113ea13e3c747e9dcf69a2751309a8eb98d3bc6e83bacd845e897da519d18.scope - libcontainer container 047113ea13e3c747e9dcf69a2751309a8eb98d3bc6e83bacd845e897da519d18. Jan 23 01:00:10.205579 containerd[1632]: time="2026-01-23T01:00:10.204237290Z" level=info msg="StartContainer for \"1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032\" returns successfully" Jan 23 01:00:10.211438 kubelet[2457]: I0123 01:00:10.211254 2457 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:10.211522 kubelet[2457]: E0123 01:00:10.211496 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.7.172:6443/api/v1/nodes\": dial tcp 10.0.7.172:6443: connect: connection refused" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:10.218691 containerd[1632]: time="2026-01-23T01:00:10.218652280Z" level=info msg="StartContainer for \"047113ea13e3c747e9dcf69a2751309a8eb98d3bc6e83bacd845e897da519d18\" returns successfully" Jan 23 01:00:10.220914 containerd[1632]: time="2026-01-23T01:00:10.220887395Z" level=info msg="StartContainer for \"61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463\" returns successfully" Jan 23 01:00:10.483578 kubelet[2457]: E0123 01:00:10.483505 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-6e52943716\" not found" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:10.483901 kubelet[2457]: E0123 01:00:10.483732 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-6e52943716\" not found" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:10.485344 kubelet[2457]: E0123 01:00:10.485327 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-6e52943716\" not found" node="ci-4459-2-2-n-6e52943716" 
Jan 23 01:00:11.015158 kubelet[2457]: I0123 01:00:11.014550 2457 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:11.488686 kubelet[2457]: E0123 01:00:11.488663 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-6e52943716\" not found" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:11.488937 kubelet[2457]: E0123 01:00:11.488926 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-6e52943716\" not found" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:11.885272 kubelet[2457]: E0123 01:00:11.885181 2457 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-n-6e52943716\" not found" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:12.096160 kubelet[2457]: I0123 01:00:12.096125 2457 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:12.096160 kubelet[2457]: E0123 01:00:12.096157 2457 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-2-n-6e52943716\": node \"ci-4459-2-2-n-6e52943716\" not found" Jan 23 01:00:12.145778 kubelet[2457]: I0123 01:00:12.145542 2457 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:12.151602 kubelet[2457]: E0123 01:00:12.151481 2457 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-n-6e52943716\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:12.151602 kubelet[2457]: I0123 01:00:12.151521 2457 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:12.154255 kubelet[2457]: E0123 01:00:12.154228 2457 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:12.154255 kubelet[2457]: I0123 01:00:12.154248 2457 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-6e52943716" Jan 23 01:00:12.156736 kubelet[2457]: E0123 01:00:12.156720 2457 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-n-6e52943716\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-n-6e52943716" Jan 23 01:00:12.431650 kubelet[2457]: I0123 01:00:12.430441 2457 apiserver.go:52] "Watching apiserver" Jan 23 01:00:12.445851 kubelet[2457]: I0123 01:00:12.445820 2457 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:00:13.051728 kubelet[2457]: I0123 01:00:13.051697 2457 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:13.560556 systemd[1]: Reload requested from client PID 2735 ('systemctl') (unit session-9.scope)... Jan 23 01:00:13.560602 systemd[1]: Reloading... Jan 23 01:00:13.637912 zram_generator::config[2787]: No configuration found. Jan 23 01:00:13.807363 systemd[1]: Reloading finished in 246 ms. Jan 23 01:00:13.830727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:13.845551 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:00:13.845747 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:13.845791 systemd[1]: kubelet.service: Consumed 1.084s CPU time, 128.4M memory peak. Jan 23 01:00:13.847940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 01:00:13.973439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:13.984425 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:00:14.019986 kubelet[2829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:00:14.019986 kubelet[2829]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:00:14.019986 kubelet[2829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:00:14.020314 kubelet[2829]: I0123 01:00:14.020036 2829 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:00:14.027099 kubelet[2829]: I0123 01:00:14.027059 2829 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:00:14.027099 kubelet[2829]: I0123 01:00:14.027081 2829 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:00:14.027279 kubelet[2829]: I0123 01:00:14.027265 2829 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:00:14.028253 kubelet[2829]: I0123 01:00:14.028241 2829 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 01:00:14.030163 kubelet[2829]: I0123 01:00:14.029939 2829 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:00:14.033061 kubelet[2829]: I0123 
01:00:14.033049 2829 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:00:14.036504 kubelet[2829]: I0123 01:00:14.036490 2829 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:00:14.036780 kubelet[2829]: I0123 01:00:14.036764 2829 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:00:14.037066 kubelet[2829]: I0123 01:00:14.036849 2829 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-n-6e52943716","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":t
rue,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:00:14.037197 kubelet[2829]: I0123 01:00:14.037189 2829 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:00:14.037241 kubelet[2829]: I0123 01:00:14.037236 2829 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:00:14.037335 kubelet[2829]: I0123 01:00:14.037329 2829 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:00:14.037511 kubelet[2829]: I0123 01:00:14.037504 2829 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:00:14.037563 kubelet[2829]: I0123 01:00:14.037558 2829 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:00:14.037612 kubelet[2829]: I0123 01:00:14.037608 2829 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:00:14.037656 kubelet[2829]: I0123 01:00:14.037651 2829 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:00:14.043970 kubelet[2829]: I0123 01:00:14.043271 2829 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:00:14.043970 kubelet[2829]: I0123 01:00:14.043649 2829 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:00:14.047901 kubelet[2829]: I0123 01:00:14.047659 2829 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:00:14.048129 kubelet[2829]: I0123 01:00:14.048074 2829 server.go:1289] "Started kubelet" Jan 23 01:00:14.049902 kubelet[2829]: I0123 01:00:14.049527 2829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:00:14.049902 kubelet[2829]: I0123 01:00:14.049742 2829 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 
01:00:14.049902 kubelet[2829]: I0123 01:00:14.049777 2829 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:00:14.051617 kubelet[2829]: I0123 01:00:14.051025 2829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:00:14.054664 kubelet[2829]: I0123 01:00:14.054646 2829 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:00:14.058925 kubelet[2829]: I0123 01:00:14.051520 2829 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:00:14.059543 kubelet[2829]: I0123 01:00:14.059531 2829 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:00:14.059613 kubelet[2829]: I0123 01:00:14.059605 2829 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:00:14.059685 kubelet[2829]: I0123 01:00:14.059677 2829 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:00:14.061769 kubelet[2829]: I0123 01:00:14.061543 2829 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:00:14.061769 kubelet[2829]: I0123 01:00:14.061624 2829 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:00:14.062757 kubelet[2829]: E0123 01:00:14.062615 2829 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:00:14.062757 kubelet[2829]: I0123 01:00:14.063093 2829 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:00:14.066959 kubelet[2829]: I0123 01:00:14.066936 2829 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 01:00:14.068223 kubelet[2829]: I0123 01:00:14.067955 2829 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 23 01:00:14.068223 kubelet[2829]: I0123 01:00:14.067967 2829 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:00:14.068223 kubelet[2829]: I0123 01:00:14.067984 2829 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:00:14.068223 kubelet[2829]: I0123 01:00:14.067990 2829 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:00:14.068223 kubelet[2829]: E0123 01:00:14.068020 2829 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:00:14.100945 kubelet[2829]: I0123 01:00:14.100871 2829 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:00:14.101091 kubelet[2829]: I0123 01:00:14.101079 2829 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:00:14.101290 kubelet[2829]: I0123 01:00:14.101241 2829 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:00:14.101521 kubelet[2829]: I0123 01:00:14.101499 2829 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:00:14.101617 kubelet[2829]: I0123 01:00:14.101589 2829 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:00:14.101674 kubelet[2829]: I0123 01:00:14.101659 2829 policy_none.go:49] "None policy: Start" Jan 23 01:00:14.101725 kubelet[2829]: I0123 01:00:14.101721 2829 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:00:14.101755 kubelet[2829]: I0123 01:00:14.101751 2829 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:00:14.101854 kubelet[2829]: I0123 01:00:14.101848 2829 state_mem.go:75] "Updated machine memory state" Jan 23 01:00:14.104591 kubelet[2829]: E0123 01:00:14.104574 2829 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:00:14.104699 kubelet[2829]: I0123 
01:00:14.104691 2829 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:00:14.104725 kubelet[2829]: I0123 01:00:14.104701 2829 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:00:14.104844 kubelet[2829]: I0123 01:00:14.104835 2829 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:00:14.107084 kubelet[2829]: E0123 01:00:14.106703 2829 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:00:14.169293 kubelet[2829]: I0123 01:00:14.169263 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.169479 kubelet[2829]: I0123 01:00:14.169447 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.169601 kubelet[2829]: I0123 01:00:14.169322 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.174907 kubelet[2829]: E0123 01:00:14.174863 2829 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-n-6e52943716\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.212259 kubelet[2829]: I0123 01:00:14.211253 2829 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.217054 kubelet[2829]: I0123 01:00:14.216877 2829 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.217157 kubelet[2829]: I0123 01:00:14.217147 2829 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.261019 kubelet[2829]: I0123 01:00:14.260824 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5f038dd288aa4b540d863768f6e6f7e-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-n-6e52943716\" (UID: \"d5f038dd288aa4b540d863768f6e6f7e\") " pod="kube-system/kube-scheduler-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.261019 kubelet[2829]: I0123 01:00:14.260860 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.261019 kubelet[2829]: I0123 01:00:14.260873 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.261019 kubelet[2829]: I0123 01:00:14.260890 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.261019 kubelet[2829]: I0123 01:00:14.260905 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/425e8997713e272c7ba57c9b39853339-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-n-6e52943716\" (UID: \"425e8997713e272c7ba57c9b39853339\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" 
Jan 23 01:00:14.261216 kubelet[2829]: I0123 01:00:14.260917 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/425e8997713e272c7ba57c9b39853339-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-n-6e52943716\" (UID: \"425e8997713e272c7ba57c9b39853339\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.261216 kubelet[2829]: I0123 01:00:14.260950 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/425e8997713e272c7ba57c9b39853339-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-n-6e52943716\" (UID: \"425e8997713e272c7ba57c9b39853339\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.261216 kubelet[2829]: I0123 01:00:14.260963 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.261216 kubelet[2829]: I0123 01:00:14.260975 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f52289656aa79db4ff53b2df9f9cdc7-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-n-6e52943716\" (UID: \"7f52289656aa79db4ff53b2df9f9cdc7\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" Jan 23 01:00:14.558563 sudo[2865]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 01:00:14.558798 sudo[2865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 01:00:14.885234 
sudo[2865]: pam_unix(sudo:session): session closed for user root Jan 23 01:00:15.043295 kubelet[2829]: I0123 01:00:15.043246 2829 apiserver.go:52] "Watching apiserver" Jan 23 01:00:15.060701 kubelet[2829]: I0123 01:00:15.060660 2829 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:00:15.090938 kubelet[2829]: I0123 01:00:15.090907 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:15.091751 kubelet[2829]: I0123 01:00:15.091735 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-6e52943716" Jan 23 01:00:15.100449 kubelet[2829]: E0123 01:00:15.100427 2829 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-n-6e52943716\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-n-6e52943716" Jan 23 01:00:15.100928 kubelet[2829]: E0123 01:00:15.100916 2829 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-n-6e52943716\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" Jan 23 01:00:15.123099 kubelet[2829]: I0123 01:00:15.123016 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-n-6e52943716" podStartSLOduration=1.122999694 podStartE2EDuration="1.122999694s" podCreationTimestamp="2026-01-23 01:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:15.122194635 +0000 UTC m=+1.133635330" watchObservedRunningTime="2026-01-23 01:00:15.122999694 +0000 UTC m=+1.134440381" Jan 23 01:00:15.141713 kubelet[2829]: I0123 01:00:15.140455 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-6e52943716" podStartSLOduration=1.14043658 
podStartE2EDuration="1.14043658s" podCreationTimestamp="2026-01-23 01:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:15.131711708 +0000 UTC m=+1.143152401" watchObservedRunningTime="2026-01-23 01:00:15.14043658 +0000 UTC m=+1.151877272" Jan 23 01:00:16.331801 sudo[1892]: pam_unix(sudo:session): session closed for user root Jan 23 01:00:16.427324 sshd[1891]: Connection closed by 20.161.92.111 port 42340 Jan 23 01:00:16.427767 sshd-session[1888]: pam_unix(sshd:session): session closed for user core Jan 23 01:00:16.431029 systemd-logind[1602]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:00:16.431569 systemd[1]: sshd@8-10.0.7.172:22-20.161.92.111:42340.service: Deactivated successfully. Jan 23 01:00:16.434940 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:00:16.435250 systemd[1]: session-9.scope: Consumed 4.365s CPU time, 272.7M memory peak. Jan 23 01:00:16.436882 systemd-logind[1602]: Removed session 9. Jan 23 01:00:17.034011 update_engine[1612]: I20260123 01:00:17.033504 1612 update_attempter.cc:509] Updating boot flags... Jan 23 01:00:19.153833 kubelet[2829]: I0123 01:00:19.153703 2829 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:00:19.154146 containerd[1632]: time="2026-01-23T01:00:19.153918642Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 01:00:19.154529 kubelet[2829]: I0123 01:00:19.154376 2829 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:00:19.849026 kubelet[2829]: I0123 01:00:19.848977 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" podStartSLOduration=6.848930948 podStartE2EDuration="6.848930948s" podCreationTimestamp="2026-01-23 01:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:15.140636006 +0000 UTC m=+1.152076769" watchObservedRunningTime="2026-01-23 01:00:19.848930948 +0000 UTC m=+5.860371619" Jan 23 01:00:19.859659 systemd[1]: Created slice kubepods-besteffort-pode1e5eaf6_eb8f_40ab_a978_e49dc8617461.slice - libcontainer container kubepods-besteffort-pode1e5eaf6_eb8f_40ab_a978_e49dc8617461.slice. Jan 23 01:00:19.873586 systemd[1]: Created slice kubepods-burstable-pod0282ca70_24a7_41b6_ad85_5835a877cab2.slice - libcontainer container kubepods-burstable-pod0282ca70_24a7_41b6_ad85_5835a877cab2.slice. 
Jan 23 01:00:19.895270 kubelet[2829]: I0123 01:00:19.895241 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1e5eaf6-eb8f-40ab-a978-e49dc8617461-kube-proxy\") pod \"kube-proxy-894rb\" (UID: \"e1e5eaf6-eb8f-40ab-a978-e49dc8617461\") " pod="kube-system/kube-proxy-894rb" Jan 23 01:00:19.895386 kubelet[2829]: I0123 01:00:19.895303 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-xtables-lock\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895386 kubelet[2829]: I0123 01:00:19.895319 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-host-proc-sys-net\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895386 kubelet[2829]: I0123 01:00:19.895332 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0282ca70-24a7-41b6-ad85-5835a877cab2-hubble-tls\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895386 kubelet[2829]: I0123 01:00:19.895345 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-run\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895467 kubelet[2829]: I0123 01:00:19.895387 2829 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0282ca70-24a7-41b6-ad85-5835a877cab2-clustermesh-secrets\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895467 kubelet[2829]: I0123 01:00:19.895402 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-host-proc-sys-kernel\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895467 kubelet[2829]: I0123 01:00:19.895443 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trgnj\" (UniqueName: \"kubernetes.io/projected/0282ca70-24a7-41b6-ad85-5835a877cab2-kube-api-access-trgnj\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895467 kubelet[2829]: I0123 01:00:19.895456 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l2kk\" (UniqueName: \"kubernetes.io/projected/e1e5eaf6-eb8f-40ab-a978-e49dc8617461-kube-api-access-6l2kk\") pod \"kube-proxy-894rb\" (UID: \"e1e5eaf6-eb8f-40ab-a978-e49dc8617461\") " pod="kube-system/kube-proxy-894rb" Jan 23 01:00:19.895537 kubelet[2829]: I0123 01:00:19.895469 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1e5eaf6-eb8f-40ab-a978-e49dc8617461-lib-modules\") pod \"kube-proxy-894rb\" (UID: \"e1e5eaf6-eb8f-40ab-a978-e49dc8617461\") " pod="kube-system/kube-proxy-894rb" Jan 23 01:00:19.895537 kubelet[2829]: I0123 01:00:19.895480 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cni-path\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895537 kubelet[2829]: I0123 01:00:19.895490 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-etc-cni-netd\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895537 kubelet[2829]: I0123 01:00:19.895520 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-lib-modules\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895537 kubelet[2829]: I0123 01:00:19.895530 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-config-path\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895621 kubelet[2829]: I0123 01:00:19.895542 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1e5eaf6-eb8f-40ab-a978-e49dc8617461-xtables-lock\") pod \"kube-proxy-894rb\" (UID: \"e1e5eaf6-eb8f-40ab-a978-e49dc8617461\") " pod="kube-system/kube-proxy-894rb" Jan 23 01:00:19.895621 kubelet[2829]: I0123 01:00:19.895552 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-bpf-maps\") pod \"cilium-fgqtf\" (UID: 
\"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895621 kubelet[2829]: I0123 01:00:19.895607 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-hostproc\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:19.895621 kubelet[2829]: I0123 01:00:19.895620 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-cgroup\") pod \"cilium-fgqtf\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") " pod="kube-system/cilium-fgqtf" Jan 23 01:00:20.106238 systemd[1]: Created slice kubepods-besteffort-pod191b6cea_9683_44ff_8d80_cd9c896e80cd.slice - libcontainer container kubepods-besteffort-pod191b6cea_9683_44ff_8d80_cd9c896e80cd.slice. 
Jan 23 01:00:20.166850 containerd[1632]: time="2026-01-23T01:00:20.166816371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-894rb,Uid:e1e5eaf6-eb8f-40ab-a978-e49dc8617461,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:20.178741 containerd[1632]: time="2026-01-23T01:00:20.178717498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fgqtf,Uid:0282ca70-24a7-41b6-ad85-5835a877cab2,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:20.194604 containerd[1632]: time="2026-01-23T01:00:20.194380866Z" level=info msg="connecting to shim b80e5d33d3b157187a960315c3ebc5ffd655a0388ff2f12c9247596765692818" address="unix:///run/containerd/s/5f5259552ba1e7ae1da0538196e71d45ead7476cea7387e6a3d7a997dfee598f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:20.197452 containerd[1632]: time="2026-01-23T01:00:20.197429386Z" level=info msg="connecting to shim 7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061" address="unix:///run/containerd/s/949f3b37fedb6837b3766d3659ec10722e1956bac57954129c1d3d543a435720" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:20.199314 kubelet[2829]: I0123 01:00:20.199291 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/191b6cea-9683-44ff-8d80-cd9c896e80cd-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mc92p\" (UID: \"191b6cea-9683-44ff-8d80-cd9c896e80cd\") " pod="kube-system/cilium-operator-6c4d7847fc-mc92p" Jan 23 01:00:20.199761 kubelet[2829]: I0123 01:00:20.199589 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qspq\" (UniqueName: \"kubernetes.io/projected/191b6cea-9683-44ff-8d80-cd9c896e80cd-kube-api-access-6qspq\") pod \"cilium-operator-6c4d7847fc-mc92p\" (UID: \"191b6cea-9683-44ff-8d80-cd9c896e80cd\") " pod="kube-system/cilium-operator-6c4d7847fc-mc92p" Jan 23 01:00:20.213254 systemd[1]: Started 
cri-containerd-b80e5d33d3b157187a960315c3ebc5ffd655a0388ff2f12c9247596765692818.scope - libcontainer container b80e5d33d3b157187a960315c3ebc5ffd655a0388ff2f12c9247596765692818. Jan 23 01:00:20.219806 systemd[1]: Started cri-containerd-7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061.scope - libcontainer container 7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061. Jan 23 01:00:20.245424 containerd[1632]: time="2026-01-23T01:00:20.245382156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-894rb,Uid:e1e5eaf6-eb8f-40ab-a978-e49dc8617461,Namespace:kube-system,Attempt:0,} returns sandbox id \"b80e5d33d3b157187a960315c3ebc5ffd655a0388ff2f12c9247596765692818\"" Jan 23 01:00:20.250358 containerd[1632]: time="2026-01-23T01:00:20.250324561Z" level=info msg="CreateContainer within sandbox \"b80e5d33d3b157187a960315c3ebc5ffd655a0388ff2f12c9247596765692818\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:00:20.252129 containerd[1632]: time="2026-01-23T01:00:20.252094149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fgqtf,Uid:0282ca70-24a7-41b6-ad85-5835a877cab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\"" Jan 23 01:00:20.255323 containerd[1632]: time="2026-01-23T01:00:20.255135004Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 01:00:20.261714 containerd[1632]: time="2026-01-23T01:00:20.261695412Z" level=info msg="Container 8e1d7cdd32b8b1eb42538c81aaee89471856214dd083561bea1002e10166383a: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:20.269007 containerd[1632]: time="2026-01-23T01:00:20.268984405Z" level=info msg="CreateContainer within sandbox \"b80e5d33d3b157187a960315c3ebc5ffd655a0388ff2f12c9247596765692818\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"8e1d7cdd32b8b1eb42538c81aaee89471856214dd083561bea1002e10166383a\"" Jan 23 01:00:20.269499 containerd[1632]: time="2026-01-23T01:00:20.269477188Z" level=info msg="StartContainer for \"8e1d7cdd32b8b1eb42538c81aaee89471856214dd083561bea1002e10166383a\"" Jan 23 01:00:20.271038 containerd[1632]: time="2026-01-23T01:00:20.271010628Z" level=info msg="connecting to shim 8e1d7cdd32b8b1eb42538c81aaee89471856214dd083561bea1002e10166383a" address="unix:///run/containerd/s/5f5259552ba1e7ae1da0538196e71d45ead7476cea7387e6a3d7a997dfee598f" protocol=ttrpc version=3 Jan 23 01:00:20.289243 systemd[1]: Started cri-containerd-8e1d7cdd32b8b1eb42538c81aaee89471856214dd083561bea1002e10166383a.scope - libcontainer container 8e1d7cdd32b8b1eb42538c81aaee89471856214dd083561bea1002e10166383a. Jan 23 01:00:20.349527 containerd[1632]: time="2026-01-23T01:00:20.349497560Z" level=info msg="StartContainer for \"8e1d7cdd32b8b1eb42538c81aaee89471856214dd083561bea1002e10166383a\" returns successfully" Jan 23 01:00:20.410060 containerd[1632]: time="2026-01-23T01:00:20.409973535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mc92p,Uid:191b6cea-9683-44ff-8d80-cd9c896e80cd,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:20.429274 containerd[1632]: time="2026-01-23T01:00:20.429221818Z" level=info msg="connecting to shim 9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2" address="unix:///run/containerd/s/ac4f34d4107671940508bfae7f293ed1dbeeaa47379dd8ece9e29ad615df621f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:20.452570 systemd[1]: Started cri-containerd-9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2.scope - libcontainer container 9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2. 
Jan 23 01:00:20.498920 containerd[1632]: time="2026-01-23T01:00:20.498888954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mc92p,Uid:191b6cea-9683-44ff-8d80-cd9c896e80cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\"" Jan 23 01:00:21.111524 kubelet[2829]: I0123 01:00:21.111381 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-894rb" podStartSLOduration=2.111367111 podStartE2EDuration="2.111367111s" podCreationTimestamp="2026-01-23 01:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:21.111296933 +0000 UTC m=+7.122737625" watchObservedRunningTime="2026-01-23 01:00:21.111367111 +0000 UTC m=+7.122807800" Jan 23 01:00:25.232716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870143281.mount: Deactivated successfully. 
Jan 23 01:00:30.569439 containerd[1632]: time="2026-01-23T01:00:30.569070397Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:30.572674 containerd[1632]: time="2026-01-23T01:00:30.572645789Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 01:00:30.574838 containerd[1632]: time="2026-01-23T01:00:30.574767241Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:30.576504 containerd[1632]: time="2026-01-23T01:00:30.576469273Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.321306003s" Jan 23 01:00:30.576608 containerd[1632]: time="2026-01-23T01:00:30.576510635Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 01:00:30.579856 containerd[1632]: time="2026-01-23T01:00:30.579305034Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 01:00:30.580680 containerd[1632]: time="2026-01-23T01:00:30.580639131Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:00:30.596844 containerd[1632]: time="2026-01-23T01:00:30.596249980Z" level=info msg="Container 5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:30.600162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710177049.mount: Deactivated successfully. Jan 23 01:00:30.603829 containerd[1632]: time="2026-01-23T01:00:30.603778248Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\"" Jan 23 01:00:30.604632 containerd[1632]: time="2026-01-23T01:00:30.604523760Z" level=info msg="StartContainer for \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\"" Jan 23 01:00:30.606572 containerd[1632]: time="2026-01-23T01:00:30.606526445Z" level=info msg="connecting to shim 5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54" address="unix:///run/containerd/s/949f3b37fedb6837b3766d3659ec10722e1956bac57954129c1d3d543a435720" protocol=ttrpc version=3 Jan 23 01:00:30.624305 systemd[1]: Started cri-containerd-5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54.scope - libcontainer container 5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54. Jan 23 01:00:30.660130 containerd[1632]: time="2026-01-23T01:00:30.660030827Z" level=info msg="StartContainer for \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\" returns successfully" Jan 23 01:00:30.662062 systemd[1]: cri-containerd-5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54.scope: Deactivated successfully. 
Jan 23 01:00:30.666691 containerd[1632]: time="2026-01-23T01:00:30.666663195Z" level=info msg="received container exit event container_id:\"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\" id:\"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\" pid:3267 exited_at:{seconds:1769130030 nanos:666258542}" Jan 23 01:00:30.686595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54-rootfs.mount: Deactivated successfully. Jan 23 01:00:31.128284 containerd[1632]: time="2026-01-23T01:00:31.127536038Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:00:31.138366 containerd[1632]: time="2026-01-23T01:00:31.138325834Z" level=info msg="Container 5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:31.147296 containerd[1632]: time="2026-01-23T01:00:31.147266270Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\"" Jan 23 01:00:31.147858 containerd[1632]: time="2026-01-23T01:00:31.147836704Z" level=info msg="StartContainer for \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\"" Jan 23 01:00:31.148585 containerd[1632]: time="2026-01-23T01:00:31.148566848Z" level=info msg="connecting to shim 5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d" address="unix:///run/containerd/s/949f3b37fedb6837b3766d3659ec10722e1956bac57954129c1d3d543a435720" protocol=ttrpc version=3 Jan 23 01:00:31.169317 systemd[1]: Started cri-containerd-5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d.scope - libcontainer 
container 5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d. Jan 23 01:00:31.194703 containerd[1632]: time="2026-01-23T01:00:31.194610037Z" level=info msg="StartContainer for \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\" returns successfully" Jan 23 01:00:31.204775 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:00:31.204958 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:00:31.205334 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:00:31.207215 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:00:31.210105 systemd[1]: cri-containerd-5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d.scope: Deactivated successfully. Jan 23 01:00:31.212156 containerd[1632]: time="2026-01-23T01:00:31.212061206Z" level=info msg="received container exit event container_id:\"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\" id:\"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\" pid:3314 exited_at:{seconds:1769130031 nanos:211687494}" Jan 23 01:00:31.226910 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:00:32.021617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634529375.mount: Deactivated successfully. Jan 23 01:00:32.131239 containerd[1632]: time="2026-01-23T01:00:32.131190867Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:00:32.147859 containerd[1632]: time="2026-01-23T01:00:32.146694842Z" level=info msg="Container 0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:32.149620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount390958359.mount: Deactivated successfully. 
Jan 23 01:00:32.155940 containerd[1632]: time="2026-01-23T01:00:32.155876274Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\"" Jan 23 01:00:32.156567 containerd[1632]: time="2026-01-23T01:00:32.156549581Z" level=info msg="StartContainer for \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\"" Jan 23 01:00:32.158209 containerd[1632]: time="2026-01-23T01:00:32.158144830Z" level=info msg="connecting to shim 0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b" address="unix:///run/containerd/s/949f3b37fedb6837b3766d3659ec10722e1956bac57954129c1d3d543a435720" protocol=ttrpc version=3 Jan 23 01:00:32.176307 systemd[1]: Started cri-containerd-0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b.scope - libcontainer container 0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b. Jan 23 01:00:32.244945 systemd[1]: cri-containerd-0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b.scope: Deactivated successfully. 
Jan 23 01:00:32.247583 containerd[1632]: time="2026-01-23T01:00:32.247303483Z" level=info msg="received container exit event container_id:\"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\" id:\"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\" pid:3365 exited_at:{seconds:1769130032 nanos:245716425}" Jan 23 01:00:32.248291 containerd[1632]: time="2026-01-23T01:00:32.248105783Z" level=info msg="StartContainer for \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\" returns successfully" Jan 23 01:00:32.794056 containerd[1632]: time="2026-01-23T01:00:32.793424611Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:32.794980 containerd[1632]: time="2026-01-23T01:00:32.794963115Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 01:00:32.796171 containerd[1632]: time="2026-01-23T01:00:32.796156256Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:32.797215 containerd[1632]: time="2026-01-23T01:00:32.797197024Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.217210583s" Jan 23 01:00:32.797288 containerd[1632]: time="2026-01-23T01:00:32.797277307Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 01:00:32.803153 containerd[1632]: time="2026-01-23T01:00:32.803109079Z" level=info msg="CreateContainer within sandbox \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 01:00:32.813555 containerd[1632]: time="2026-01-23T01:00:32.813525518Z" level=info msg="Container 214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:32.825241 containerd[1632]: time="2026-01-23T01:00:32.825201325Z" level=info msg="CreateContainer within sandbox \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\"" Jan 23 01:00:32.825792 containerd[1632]: time="2026-01-23T01:00:32.825774357Z" level=info msg="StartContainer for \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\"" Jan 23 01:00:32.826653 containerd[1632]: time="2026-01-23T01:00:32.826575162Z" level=info msg="connecting to shim 214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7" address="unix:///run/containerd/s/ac4f34d4107671940508bfae7f293ed1dbeeaa47379dd8ece9e29ad615df621f" protocol=ttrpc version=3 Jan 23 01:00:32.849312 systemd[1]: Started cri-containerd-214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7.scope - libcontainer container 214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7. 
Jan 23 01:00:32.874344 containerd[1632]: time="2026-01-23T01:00:32.874299709Z" level=info msg="StartContainer for \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\" returns successfully" Jan 23 01:00:33.144253 containerd[1632]: time="2026-01-23T01:00:33.143321681Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:00:33.157423 containerd[1632]: time="2026-01-23T01:00:33.156789062Z" level=info msg="Container 245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:33.165326 kubelet[2829]: I0123 01:00:33.165105 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mc92p" podStartSLOduration=0.867222727 podStartE2EDuration="13.165091003s" podCreationTimestamp="2026-01-23 01:00:20 +0000 UTC" firstStartedPulling="2026-01-23 01:00:20.500064839 +0000 UTC m=+6.511505508" lastFinishedPulling="2026-01-23 01:00:32.797933105 +0000 UTC m=+18.809373784" observedRunningTime="2026-01-23 01:00:33.164437784 +0000 UTC m=+19.175878457" watchObservedRunningTime="2026-01-23 01:00:33.165091003 +0000 UTC m=+19.176531696" Jan 23 01:00:33.169604 containerd[1632]: time="2026-01-23T01:00:33.169578488Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\"" Jan 23 01:00:33.171314 containerd[1632]: time="2026-01-23T01:00:33.171298724Z" level=info msg="StartContainer for \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\"" Jan 23 01:00:33.172065 containerd[1632]: time="2026-01-23T01:00:33.172017018Z" level=info msg="connecting to shim 
245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb" address="unix:///run/containerd/s/949f3b37fedb6837b3766d3659ec10722e1956bac57954129c1d3d543a435720" protocol=ttrpc version=3 Jan 23 01:00:33.205251 systemd[1]: Started cri-containerd-245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb.scope - libcontainer container 245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb. Jan 23 01:00:33.243404 systemd[1]: cri-containerd-245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb.scope: Deactivated successfully. Jan 23 01:00:33.248555 containerd[1632]: time="2026-01-23T01:00:33.248419638Z" level=info msg="received container exit event container_id:\"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\" id:\"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\" pid:3453 exited_at:{seconds:1769130033 nanos:243579580}" Jan 23 01:00:33.264928 containerd[1632]: time="2026-01-23T01:00:33.264816754Z" level=info msg="StartContainer for \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\" returns successfully" Jan 23 01:00:34.154385 containerd[1632]: time="2026-01-23T01:00:34.154237742Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 01:00:34.172994 containerd[1632]: time="2026-01-23T01:00:34.170091096Z" level=info msg="Container ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:34.180598 containerd[1632]: time="2026-01-23T01:00:34.180576084Z" level=info msg="CreateContainer within sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\"" Jan 23 01:00:34.180981 containerd[1632]: time="2026-01-23T01:00:34.180966035Z" 
level=info msg="StartContainer for \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\"" Jan 23 01:00:34.182049 containerd[1632]: time="2026-01-23T01:00:34.181758800Z" level=info msg="connecting to shim ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192" address="unix:///run/containerd/s/949f3b37fedb6837b3766d3659ec10722e1956bac57954129c1d3d543a435720" protocol=ttrpc version=3 Jan 23 01:00:34.201228 systemd[1]: Started cri-containerd-ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192.scope - libcontainer container ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192. Jan 23 01:00:34.237056 containerd[1632]: time="2026-01-23T01:00:34.237025308Z" level=info msg="StartContainer for \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\" returns successfully" Jan 23 01:00:34.383770 kubelet[2829]: I0123 01:00:34.383150 2829 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:00:34.421578 systemd[1]: Created slice kubepods-burstable-pod6b8ed446_5aa7_4852_b624_1c63f6a13eb0.slice - libcontainer container kubepods-burstable-pod6b8ed446_5aa7_4852_b624_1c63f6a13eb0.slice. Jan 23 01:00:34.427857 systemd[1]: Created slice kubepods-burstable-pod22982912_0e1e_49a3_bb83_29864b05cb8f.slice - libcontainer container kubepods-burstable-pod22982912_0e1e_49a3_bb83_29864b05cb8f.slice. 
Jan 23 01:00:34.500038 kubelet[2829]: I0123 01:00:34.500006 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b8ed446-5aa7-4852-b624-1c63f6a13eb0-config-volume\") pod \"coredns-674b8bbfcf-bdkgt\" (UID: \"6b8ed446-5aa7-4852-b624-1c63f6a13eb0\") " pod="kube-system/coredns-674b8bbfcf-bdkgt" Jan 23 01:00:34.500038 kubelet[2829]: I0123 01:00:34.500041 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287wt\" (UniqueName: \"kubernetes.io/projected/6b8ed446-5aa7-4852-b624-1c63f6a13eb0-kube-api-access-287wt\") pod \"coredns-674b8bbfcf-bdkgt\" (UID: \"6b8ed446-5aa7-4852-b624-1c63f6a13eb0\") " pod="kube-system/coredns-674b8bbfcf-bdkgt" Jan 23 01:00:34.500202 kubelet[2829]: I0123 01:00:34.500057 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22982912-0e1e-49a3-bb83-29864b05cb8f-config-volume\") pod \"coredns-674b8bbfcf-jrc4c\" (UID: \"22982912-0e1e-49a3-bb83-29864b05cb8f\") " pod="kube-system/coredns-674b8bbfcf-jrc4c" Jan 23 01:00:34.500202 kubelet[2829]: I0123 01:00:34.500071 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-996ml\" (UniqueName: \"kubernetes.io/projected/22982912-0e1e-49a3-bb83-29864b05cb8f-kube-api-access-996ml\") pod \"coredns-674b8bbfcf-jrc4c\" (UID: \"22982912-0e1e-49a3-bb83-29864b05cb8f\") " pod="kube-system/coredns-674b8bbfcf-jrc4c" Jan 23 01:00:34.726800 containerd[1632]: time="2026-01-23T01:00:34.726767839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdkgt,Uid:6b8ed446-5aa7-4852-b624-1c63f6a13eb0,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:34.733438 containerd[1632]: time="2026-01-23T01:00:34.733408861Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-jrc4c,Uid:22982912-0e1e-49a3-bb83-29864b05cb8f,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:37.005903 systemd-networkd[1506]: cilium_host: Link UP Jan 23 01:00:37.007168 systemd-networkd[1506]: cilium_net: Link UP Jan 23 01:00:37.007469 systemd-networkd[1506]: cilium_net: Gained carrier Jan 23 01:00:37.008270 systemd-networkd[1506]: cilium_host: Gained carrier Jan 23 01:00:37.098821 systemd-networkd[1506]: cilium_vxlan: Link UP Jan 23 01:00:37.099335 systemd-networkd[1506]: cilium_vxlan: Gained carrier Jan 23 01:00:37.290142 kernel: NET: Registered PF_ALG protocol family Jan 23 01:00:37.651885 systemd-networkd[1506]: cilium_net: Gained IPv6LL Jan 23 01:00:37.780179 systemd-networkd[1506]: cilium_host: Gained IPv6LL Jan 23 01:00:37.801340 systemd-networkd[1506]: lxc_health: Link UP Jan 23 01:00:37.802145 systemd-networkd[1506]: lxc_health: Gained carrier Jan 23 01:00:38.195999 kubelet[2829]: I0123 01:00:38.195682 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fgqtf" podStartSLOduration=8.872714014 podStartE2EDuration="19.195670124s" podCreationTimestamp="2026-01-23 01:00:19 +0000 UTC" firstStartedPulling="2026-01-23 01:00:20.254806296 +0000 UTC m=+6.266246964" lastFinishedPulling="2026-01-23 01:00:30.577762403 +0000 UTC m=+16.589203074" observedRunningTime="2026-01-23 01:00:35.166055351 +0000 UTC m=+21.177496039" watchObservedRunningTime="2026-01-23 01:00:38.195670124 +0000 UTC m=+24.207110811" Jan 23 01:00:38.273152 kernel: eth0: renamed from tmp78034 Jan 23 01:00:38.274436 systemd-networkd[1506]: lxcb6ec12884ae0: Link UP Jan 23 01:00:38.274659 systemd-networkd[1506]: lxcdd8b70a758d1: Link UP Jan 23 01:00:38.283175 kernel: eth0: renamed from tmp185a7 Jan 23 01:00:38.285467 systemd-networkd[1506]: lxcb6ec12884ae0: Gained carrier Jan 23 01:00:38.286237 systemd-networkd[1506]: lxcdd8b70a758d1: Gained carrier Jan 23 01:00:38.803298 systemd-networkd[1506]: cilium_vxlan: 
Gained IPv6LL Jan 23 01:00:39.187238 systemd-networkd[1506]: lxc_health: Gained IPv6LL Jan 23 01:00:39.699555 systemd-networkd[1506]: lxcdd8b70a758d1: Gained IPv6LL Jan 23 01:00:40.275333 systemd-networkd[1506]: lxcb6ec12884ae0: Gained IPv6LL Jan 23 01:00:41.409446 containerd[1632]: time="2026-01-23T01:00:41.409412592Z" level=info msg="connecting to shim 185a72282c986dc288acf816e8375b7674360976a85bb891e1160bafb98457cd" address="unix:///run/containerd/s/d8d44a8d2b05e250b49b9cfb33249e8a9259f9883615373dad9d4a1d518b2d70" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:41.434130 containerd[1632]: time="2026-01-23T01:00:41.433865253Z" level=info msg="connecting to shim 78034f078bf70fbb313c0e174cbec4d276660923f80f21b241a8ca60a0059d72" address="unix:///run/containerd/s/d469859c4ee437b329511f8593251c57403c5a9950fc6e71838776bf8e74e389" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:41.453798 systemd[1]: Started cri-containerd-185a72282c986dc288acf816e8375b7674360976a85bb891e1160bafb98457cd.scope - libcontainer container 185a72282c986dc288acf816e8375b7674360976a85bb891e1160bafb98457cd. Jan 23 01:00:41.471393 systemd[1]: Started cri-containerd-78034f078bf70fbb313c0e174cbec4d276660923f80f21b241a8ca60a0059d72.scope - libcontainer container 78034f078bf70fbb313c0e174cbec4d276660923f80f21b241a8ca60a0059d72. 
Jan 23 01:00:41.524235 containerd[1632]: time="2026-01-23T01:00:41.524188676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bdkgt,Uid:6b8ed446-5aa7-4852-b624-1c63f6a13eb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"78034f078bf70fbb313c0e174cbec4d276660923f80f21b241a8ca60a0059d72\"" Jan 23 01:00:41.531897 containerd[1632]: time="2026-01-23T01:00:41.531596100Z" level=info msg="CreateContainer within sandbox \"78034f078bf70fbb313c0e174cbec4d276660923f80f21b241a8ca60a0059d72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:00:41.546026 containerd[1632]: time="2026-01-23T01:00:41.546000884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jrc4c,Uid:22982912-0e1e-49a3-bb83-29864b05cb8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"185a72282c986dc288acf816e8375b7674360976a85bb891e1160bafb98457cd\"" Jan 23 01:00:41.551001 containerd[1632]: time="2026-01-23T01:00:41.550973920Z" level=info msg="Container e0da2d127443558a4d319001534ba76aa7d6b4461200f65841cbd233f71ae052: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:41.553134 containerd[1632]: time="2026-01-23T01:00:41.552961939Z" level=info msg="CreateContainer within sandbox \"185a72282c986dc288acf816e8375b7674360976a85bb891e1160bafb98457cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:00:41.566636 containerd[1632]: time="2026-01-23T01:00:41.566614160Z" level=info msg="Container d8f016bceb9fd369f8e34d444ac13f168fc6b470e674f3859a08fa38a4ebc6c1: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:41.568691 containerd[1632]: time="2026-01-23T01:00:41.568672504Z" level=info msg="CreateContainer within sandbox \"78034f078bf70fbb313c0e174cbec4d276660923f80f21b241a8ca60a0059d72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0da2d127443558a4d319001534ba76aa7d6b4461200f65841cbd233f71ae052\"" Jan 23 01:00:41.569367 containerd[1632]: 
time="2026-01-23T01:00:41.569347050Z" level=info msg="StartContainer for \"e0da2d127443558a4d319001534ba76aa7d6b4461200f65841cbd233f71ae052\"" Jan 23 01:00:41.570998 containerd[1632]: time="2026-01-23T01:00:41.570977276Z" level=info msg="connecting to shim e0da2d127443558a4d319001534ba76aa7d6b4461200f65841cbd233f71ae052" address="unix:///run/containerd/s/d469859c4ee437b329511f8593251c57403c5a9950fc6e71838776bf8e74e389" protocol=ttrpc version=3 Jan 23 01:00:41.589843 containerd[1632]: time="2026-01-23T01:00:41.589768342Z" level=info msg="CreateContainer within sandbox \"185a72282c986dc288acf816e8375b7674360976a85bb891e1160bafb98457cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8f016bceb9fd369f8e34d444ac13f168fc6b470e674f3859a08fa38a4ebc6c1\"" Jan 23 01:00:41.590360 systemd[1]: Started cri-containerd-e0da2d127443558a4d319001534ba76aa7d6b4461200f65841cbd233f71ae052.scope - libcontainer container e0da2d127443558a4d319001534ba76aa7d6b4461200f65841cbd233f71ae052. Jan 23 01:00:41.591231 containerd[1632]: time="2026-01-23T01:00:41.590565708Z" level=info msg="StartContainer for \"d8f016bceb9fd369f8e34d444ac13f168fc6b470e674f3859a08fa38a4ebc6c1\"" Jan 23 01:00:41.591447 containerd[1632]: time="2026-01-23T01:00:41.591375909Z" level=info msg="connecting to shim d8f016bceb9fd369f8e34d444ac13f168fc6b470e674f3859a08fa38a4ebc6c1" address="unix:///run/containerd/s/d8d44a8d2b05e250b49b9cfb33249e8a9259f9883615373dad9d4a1d518b2d70" protocol=ttrpc version=3 Jan 23 01:00:41.610243 systemd[1]: Started cri-containerd-d8f016bceb9fd369f8e34d444ac13f168fc6b470e674f3859a08fa38a4ebc6c1.scope - libcontainer container d8f016bceb9fd369f8e34d444ac13f168fc6b470e674f3859a08fa38a4ebc6c1. 
Jan 23 01:00:41.632598 containerd[1632]: time="2026-01-23T01:00:41.632357985Z" level=info msg="StartContainer for \"e0da2d127443558a4d319001534ba76aa7d6b4461200f65841cbd233f71ae052\" returns successfully" Jan 23 01:00:41.641776 containerd[1632]: time="2026-01-23T01:00:41.641748796Z" level=info msg="StartContainer for \"d8f016bceb9fd369f8e34d444ac13f168fc6b470e674f3859a08fa38a4ebc6c1\" returns successfully" Jan 23 01:00:42.187394 kubelet[2829]: I0123 01:00:42.187349 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bdkgt" podStartSLOduration=22.18733553 podStartE2EDuration="22.18733553s" podCreationTimestamp="2026-01-23 01:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:42.17668476 +0000 UTC m=+28.188125450" watchObservedRunningTime="2026-01-23 01:00:42.18733553 +0000 UTC m=+28.198776222" Jan 23 01:00:42.203132 kubelet[2829]: I0123 01:00:42.202283 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jrc4c" podStartSLOduration=22.202268029 podStartE2EDuration="22.202268029s" podCreationTimestamp="2026-01-23 01:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:42.199500609 +0000 UTC m=+28.210941301" watchObservedRunningTime="2026-01-23 01:00:42.202268029 +0000 UTC m=+28.213708698" Jan 23 01:00:42.402197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2918055347.mount: Deactivated successfully. Jan 23 01:02:23.158718 systemd[1]: Started sshd@9-10.0.7.172:22-20.161.92.111:33720.service - OpenSSH per-connection server daemon (20.161.92.111:33720). 
Jan 23 01:02:23.767518 sshd[4171]: Accepted publickey for core from 20.161.92.111 port 33720 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:23.768615 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:23.771895 systemd-logind[1602]: New session 10 of user core. Jan 23 01:02:23.778403 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:02:24.264601 sshd[4174]: Connection closed by 20.161.92.111 port 33720 Jan 23 01:02:24.265251 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:24.268636 systemd[1]: sshd@9-10.0.7.172:22-20.161.92.111:33720.service: Deactivated successfully. Jan 23 01:02:24.270403 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:02:24.271947 systemd-logind[1602]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:02:24.272753 systemd-logind[1602]: Removed session 10. Jan 23 01:02:29.373973 systemd[1]: Started sshd@10-10.0.7.172:22-20.161.92.111:33728.service - OpenSSH per-connection server daemon (20.161.92.111:33728). Jan 23 01:02:29.984410 sshd[4188]: Accepted publickey for core from 20.161.92.111 port 33728 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:29.985655 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:29.989715 systemd-logind[1602]: New session 11 of user core. Jan 23 01:02:29.994253 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:02:30.461192 sshd[4191]: Connection closed by 20.161.92.111 port 33728 Jan 23 01:02:30.460626 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:30.463338 systemd[1]: sshd@10-10.0.7.172:22-20.161.92.111:33728.service: Deactivated successfully. Jan 23 01:02:30.465046 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:02:30.466241 systemd-logind[1602]: Session 11 logged out. 
Waiting for processes to exit. Jan 23 01:02:30.467199 systemd-logind[1602]: Removed session 11. Jan 23 01:02:35.566216 systemd[1]: Started sshd@11-10.0.7.172:22-20.161.92.111:54958.service - OpenSSH per-connection server daemon (20.161.92.111:54958). Jan 23 01:02:36.168001 sshd[4203]: Accepted publickey for core from 20.161.92.111 port 54958 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:36.168383 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:36.171725 systemd-logind[1602]: New session 12 of user core. Jan 23 01:02:36.180249 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:02:36.634257 sshd[4206]: Connection closed by 20.161.92.111 port 54958 Jan 23 01:02:36.634179 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:36.638381 systemd[1]: sshd@11-10.0.7.172:22-20.161.92.111:54958.service: Deactivated successfully. Jan 23 01:02:36.639981 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:02:36.640822 systemd-logind[1602]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:02:36.641957 systemd-logind[1602]: Removed session 12. Jan 23 01:02:36.740359 systemd[1]: Started sshd@12-10.0.7.172:22-20.161.92.111:54972.service - OpenSSH per-connection server daemon (20.161.92.111:54972). Jan 23 01:02:37.342830 sshd[4219]: Accepted publickey for core from 20.161.92.111 port 54972 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:37.343851 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:37.347408 systemd-logind[1602]: New session 13 of user core. Jan 23 01:02:37.353396 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 01:02:37.846428 sshd[4222]: Connection closed by 20.161.92.111 port 54972 Jan 23 01:02:37.846900 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:37.851148 systemd[1]: sshd@12-10.0.7.172:22-20.161.92.111:54972.service: Deactivated successfully. Jan 23 01:02:37.852876 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:02:37.853709 systemd-logind[1602]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:02:37.854801 systemd-logind[1602]: Removed session 13. Jan 23 01:02:37.950744 systemd[1]: Started sshd@13-10.0.7.172:22-20.161.92.111:54984.service - OpenSSH per-connection server daemon (20.161.92.111:54984). Jan 23 01:02:38.558168 sshd[4232]: Accepted publickey for core from 20.161.92.111 port 54984 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:38.558894 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:38.562778 systemd-logind[1602]: New session 14 of user core. Jan 23 01:02:38.571274 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 01:02:39.035353 sshd[4235]: Connection closed by 20.161.92.111 port 54984 Jan 23 01:02:39.035229 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:39.039243 systemd[1]: sshd@13-10.0.7.172:22-20.161.92.111:54984.service: Deactivated successfully. Jan 23 01:02:39.040976 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:02:39.041834 systemd-logind[1602]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:02:39.043455 systemd-logind[1602]: Removed session 14. Jan 23 01:02:44.143249 systemd[1]: Started sshd@14-10.0.7.172:22-20.161.92.111:56010.service - OpenSSH per-connection server daemon (20.161.92.111:56010). 
Jan 23 01:02:44.764154 sshd[4250]: Accepted publickey for core from 20.161.92.111 port 56010 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:44.765214 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:44.768823 systemd-logind[1602]: New session 15 of user core. Jan 23 01:02:44.779234 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:02:45.251158 sshd[4253]: Connection closed by 20.161.92.111 port 56010 Jan 23 01:02:45.252205 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:45.255216 systemd[1]: sshd@14-10.0.7.172:22-20.161.92.111:56010.service: Deactivated successfully. Jan 23 01:02:45.256847 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:02:45.257541 systemd-logind[1602]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:02:45.258671 systemd-logind[1602]: Removed session 15. Jan 23 01:02:45.357042 systemd[1]: Started sshd@15-10.0.7.172:22-20.161.92.111:56014.service - OpenSSH per-connection server daemon (20.161.92.111:56014). Jan 23 01:02:45.967252 sshd[4265]: Accepted publickey for core from 20.161.92.111 port 56014 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:45.968678 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:45.972319 systemd-logind[1602]: New session 16 of user core. Jan 23 01:02:45.977311 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 01:02:46.479347 sshd[4268]: Connection closed by 20.161.92.111 port 56014 Jan 23 01:02:46.479265 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:46.483635 systemd[1]: sshd@15-10.0.7.172:22-20.161.92.111:56014.service: Deactivated successfully. Jan 23 01:02:46.485519 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:02:46.486307 systemd-logind[1602]: Session 16 logged out. 
Waiting for processes to exit. Jan 23 01:02:46.487542 systemd-logind[1602]: Removed session 16. Jan 23 01:02:46.588865 systemd[1]: Started sshd@16-10.0.7.172:22-20.161.92.111:56024.service - OpenSSH per-connection server daemon (20.161.92.111:56024). Jan 23 01:02:47.193242 sshd[4278]: Accepted publickey for core from 20.161.92.111 port 56024 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:47.194372 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:47.198832 systemd-logind[1602]: New session 17 of user core. Jan 23 01:02:47.206348 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 01:02:48.199867 sshd[4281]: Connection closed by 20.161.92.111 port 56024 Jan 23 01:02:48.200186 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:48.203924 systemd[1]: sshd@16-10.0.7.172:22-20.161.92.111:56024.service: Deactivated successfully. Jan 23 01:02:48.205681 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:02:48.206449 systemd-logind[1602]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:02:48.207865 systemd-logind[1602]: Removed session 17. Jan 23 01:02:48.312864 systemd[1]: Started sshd@17-10.0.7.172:22-20.161.92.111:56040.service - OpenSSH per-connection server daemon (20.161.92.111:56040). Jan 23 01:02:48.920942 sshd[4298]: Accepted publickey for core from 20.161.92.111 port 56040 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:48.921323 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:48.925172 systemd-logind[1602]: New session 18 of user core. Jan 23 01:02:48.934273 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 01:02:49.482217 sshd[4301]: Connection closed by 20.161.92.111 port 56040 Jan 23 01:02:49.483516 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:49.486594 systemd[1]: sshd@17-10.0.7.172:22-20.161.92.111:56040.service: Deactivated successfully. Jan 23 01:02:49.488548 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:02:49.489489 systemd-logind[1602]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:02:49.490605 systemd-logind[1602]: Removed session 18. Jan 23 01:02:49.588313 systemd[1]: Started sshd@18-10.0.7.172:22-20.161.92.111:56046.service - OpenSSH per-connection server daemon (20.161.92.111:56046). Jan 23 01:02:50.192826 sshd[4311]: Accepted publickey for core from 20.161.92.111 port 56046 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:50.193835 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:50.197956 systemd-logind[1602]: New session 19 of user core. Jan 23 01:02:50.207385 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:02:50.663212 sshd[4314]: Connection closed by 20.161.92.111 port 56046 Jan 23 01:02:50.664149 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:50.666923 systemd[1]: sshd@18-10.0.7.172:22-20.161.92.111:56046.service: Deactivated successfully. Jan 23 01:02:50.668882 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:02:50.669771 systemd-logind[1602]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:02:50.671139 systemd-logind[1602]: Removed session 19. Jan 23 01:02:55.772695 systemd[1]: Started sshd@19-10.0.7.172:22-20.161.92.111:37570.service - OpenSSH per-connection server daemon (20.161.92.111:37570). 
Jan 23 01:02:56.374425 sshd[4329]: Accepted publickey for core from 20.161.92.111 port 37570 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:02:56.375487 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:56.380106 systemd-logind[1602]: New session 20 of user core. Jan 23 01:02:56.388673 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:02:56.848141 sshd[4332]: Connection closed by 20.161.92.111 port 37570 Jan 23 01:02:56.848081 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:56.852368 systemd[1]: sshd@19-10.0.7.172:22-20.161.92.111:37570.service: Deactivated successfully. Jan 23 01:02:56.854026 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:02:56.855072 systemd-logind[1602]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:02:56.856714 systemd-logind[1602]: Removed session 20. Jan 23 01:03:01.953838 systemd[1]: Started sshd@20-10.0.7.172:22-20.161.92.111:37572.service - OpenSSH per-connection server daemon (20.161.92.111:37572). Jan 23 01:03:02.556890 sshd[4343]: Accepted publickey for core from 20.161.92.111 port 37572 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:03:02.557872 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:02.561964 systemd-logind[1602]: New session 21 of user core. Jan 23 01:03:02.570256 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 01:03:03.031350 sshd[4346]: Connection closed by 20.161.92.111 port 37572 Jan 23 01:03:03.031870 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:03.035075 systemd[1]: sshd@20-10.0.7.172:22-20.161.92.111:37572.service: Deactivated successfully. Jan 23 01:03:03.037434 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:03:03.037467 systemd-logind[1602]: Session 21 logged out. 
Waiting for processes to exit. Jan 23 01:03:03.038938 systemd-logind[1602]: Removed session 21. Jan 23 01:03:03.138349 systemd[1]: Started sshd@21-10.0.7.172:22-20.161.92.111:58990.service - OpenSSH per-connection server daemon (20.161.92.111:58990). Jan 23 01:03:03.740584 sshd[4358]: Accepted publickey for core from 20.161.92.111 port 58990 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w Jan 23 01:03:03.741701 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:03.745256 systemd-logind[1602]: New session 22 of user core. Jan 23 01:03:03.752415 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:03:05.339577 containerd[1632]: time="2026-01-23T01:03:05.339469332Z" level=info msg="StopContainer for \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\" with timeout 30 (s)" Jan 23 01:03:05.341163 containerd[1632]: time="2026-01-23T01:03:05.340658387Z" level=info msg="Stop container \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\" with signal terminated" Jan 23 01:03:05.354657 systemd[1]: cri-containerd-214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7.scope: Deactivated successfully. 
Jan 23 01:03:05.356722 containerd[1632]: time="2026-01-23T01:03:05.356689440Z" level=info msg="received container exit event container_id:\"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\" id:\"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\" pid:3416 exited_at:{seconds:1769130185 nanos:356442287}" Jan 23 01:03:05.358684 containerd[1632]: time="2026-01-23T01:03:05.358664006Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:03:05.365754 containerd[1632]: time="2026-01-23T01:03:05.365726410Z" level=info msg="StopContainer for \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\" with timeout 2 (s)" Jan 23 01:03:05.365982 containerd[1632]: time="2026-01-23T01:03:05.365967778Z" level=info msg="Stop container \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\" with signal terminated" Jan 23 01:03:05.373122 systemd-networkd[1506]: lxc_health: Link DOWN Jan 23 01:03:05.373129 systemd-networkd[1506]: lxc_health: Lost carrier Jan 23 01:03:05.394617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7-rootfs.mount: Deactivated successfully. Jan 23 01:03:05.397456 systemd[1]: cri-containerd-ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192.scope: Deactivated successfully. Jan 23 01:03:05.398174 systemd[1]: cri-containerd-ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192.scope: Consumed 5.516s CPU time, 123.5M memory peak, 112K read from disk, 13.3M written to disk. 
Jan 23 01:03:05.401144 containerd[1632]: time="2026-01-23T01:03:05.401106725Z" level=info msg="received container exit event container_id:\"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\" id:\"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\" pid:3492 exited_at:{seconds:1769130185 nanos:397619196}" Jan 23 01:03:05.418591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192-rootfs.mount: Deactivated successfully. Jan 23 01:03:05.433156 containerd[1632]: time="2026-01-23T01:03:05.433047396Z" level=info msg="StopContainer for \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\" returns successfully" Jan 23 01:03:05.433861 containerd[1632]: time="2026-01-23T01:03:05.433821601Z" level=info msg="StopPodSandbox for \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\"" Jan 23 01:03:05.435193 containerd[1632]: time="2026-01-23T01:03:05.435170353Z" level=info msg="Container to stop \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:03:05.435540 containerd[1632]: time="2026-01-23T01:03:05.435516348Z" level=info msg="StopContainer for \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\" returns successfully" Jan 23 01:03:05.436247 containerd[1632]: time="2026-01-23T01:03:05.436229907Z" level=info msg="StopPodSandbox for \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\"" Jan 23 01:03:05.436385 containerd[1632]: time="2026-01-23T01:03:05.436269493Z" level=info msg="Container to stop \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:03:05.436385 containerd[1632]: time="2026-01-23T01:03:05.436287475Z" level=info msg="Container to stop \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\" 
must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:03:05.436385 containerd[1632]: time="2026-01-23T01:03:05.436295547Z" level=info msg="Container to stop \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:03:05.436385 containerd[1632]: time="2026-01-23T01:03:05.436302522Z" level=info msg="Container to stop \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:03:05.436385 containerd[1632]: time="2026-01-23T01:03:05.436312235Z" level=info msg="Container to stop \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:03:05.441185 systemd[1]: cri-containerd-9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2.scope: Deactivated successfully. Jan 23 01:03:05.442411 systemd[1]: cri-containerd-7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061.scope: Deactivated successfully. 
Jan 23 01:03:05.443638 containerd[1632]: time="2026-01-23T01:03:05.443619108Z" level=info msg="received sandbox exit event container_id:\"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" id:\"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" exit_status:137 exited_at:{seconds:1769130185 nanos:443248755}" monitor_name=podsandbox Jan 23 01:03:05.449475 containerd[1632]: time="2026-01-23T01:03:05.449405272Z" level=info msg="received sandbox exit event container_id:\"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" id:\"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" exit_status:137 exited_at:{seconds:1769130185 nanos:449251277}" monitor_name=podsandbox Jan 23 01:03:05.467140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061-rootfs.mount: Deactivated successfully. Jan 23 01:03:05.472638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2-rootfs.mount: Deactivated successfully. 
Jan 23 01:03:05.479928 containerd[1632]: time="2026-01-23T01:03:05.479771862Z" level=info msg="shim disconnected" id=7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061 namespace=k8s.io Jan 23 01:03:05.479928 containerd[1632]: time="2026-01-23T01:03:05.479796072Z" level=warning msg="cleaning up after shim disconnected" id=7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061 namespace=k8s.io Jan 23 01:03:05.479928 containerd[1632]: time="2026-01-23T01:03:05.479803072Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:03:05.480094 containerd[1632]: time="2026-01-23T01:03:05.480032757Z" level=info msg="shim disconnected" id=9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2 namespace=k8s.io Jan 23 01:03:05.480094 containerd[1632]: time="2026-01-23T01:03:05.480054282Z" level=warning msg="cleaning up after shim disconnected" id=9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2 namespace=k8s.io Jan 23 01:03:05.480094 containerd[1632]: time="2026-01-23T01:03:05.480061210Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:03:05.491448 containerd[1632]: time="2026-01-23T01:03:05.491417331Z" level=info msg="received sandbox container exit event sandbox_id:\"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" exit_status:137 exited_at:{seconds:1769130185 nanos:449251277}" monitor_name=criService Jan 23 01:03:05.493160 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2-shm.mount: Deactivated successfully. 
Jan 23 01:03:05.493706 containerd[1632]: time="2026-01-23T01:03:05.493678954Z" level=info msg="received sandbox container exit event sandbox_id:\"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" exit_status:137 exited_at:{seconds:1769130185 nanos:443248755}" monitor_name=criService
Jan 23 01:03:05.494532 containerd[1632]: time="2026-01-23T01:03:05.494507408Z" level=info msg="TearDown network for sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" successfully"
Jan 23 01:03:05.494589 containerd[1632]: time="2026-01-23T01:03:05.494581796Z" level=info msg="StopPodSandbox for \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" returns successfully"
Jan 23 01:03:05.494831 containerd[1632]: time="2026-01-23T01:03:05.494811219Z" level=info msg="TearDown network for sandbox \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" successfully"
Jan 23 01:03:05.495006 containerd[1632]: time="2026-01-23T01:03:05.494995152Z" level=info msg="StopPodSandbox for \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" returns successfully"
Jan 23 01:03:05.614440 kubelet[2829]: I0123 01:03:05.614336 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-xtables-lock\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615337 kubelet[2829]: I0123 01:03:05.614968 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-bpf-maps\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615337 kubelet[2829]: I0123 01:03:05.614988 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-cgroup\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615337 kubelet[2829]: I0123 01:03:05.615009 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trgnj\" (UniqueName: \"kubernetes.io/projected/0282ca70-24a7-41b6-ad85-5835a877cab2-kube-api-access-trgnj\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615337 kubelet[2829]: I0123 01:03:05.615027 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-host-proc-sys-net\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615337 kubelet[2829]: I0123 01:03:05.615047 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-etc-cni-netd\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615337 kubelet[2829]: I0123 01:03:05.615063 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-host-proc-sys-kernel\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615523 kubelet[2829]: I0123 01:03:05.615076 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0282ca70-24a7-41b6-ad85-5835a877cab2-hubble-tls\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615523 kubelet[2829]: I0123 01:03:05.615091 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-lib-modules\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615523 kubelet[2829]: I0123 01:03:05.615106 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/191b6cea-9683-44ff-8d80-cd9c896e80cd-cilium-config-path\") pod \"191b6cea-9683-44ff-8d80-cd9c896e80cd\" (UID: \"191b6cea-9683-44ff-8d80-cd9c896e80cd\") "
Jan 23 01:03:05.615523 kubelet[2829]: I0123 01:03:05.615131 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cni-path\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615523 kubelet[2829]: I0123 01:03:05.615144 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-config-path\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615523 kubelet[2829]: I0123 01:03:05.615158 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-hostproc\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615652 kubelet[2829]: I0123 01:03:05.615172 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qspq\" (UniqueName: \"kubernetes.io/projected/191b6cea-9683-44ff-8d80-cd9c896e80cd-kube-api-access-6qspq\") pod \"191b6cea-9683-44ff-8d80-cd9c896e80cd\" (UID: \"191b6cea-9683-44ff-8d80-cd9c896e80cd\") "
Jan 23 01:03:05.615652 kubelet[2829]: I0123 01:03:05.615191 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0282ca70-24a7-41b6-ad85-5835a877cab2-clustermesh-secrets\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615652 kubelet[2829]: I0123 01:03:05.615211 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-run\") pod \"0282ca70-24a7-41b6-ad85-5835a877cab2\" (UID: \"0282ca70-24a7-41b6-ad85-5835a877cab2\") "
Jan 23 01:03:05.615652 kubelet[2829]: I0123 01:03:05.614443 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.615652 kubelet[2829]: I0123 01:03:05.615253 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.615753 kubelet[2829]: I0123 01:03:05.615285 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.615753 kubelet[2829]: I0123 01:03:05.615297 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.618949 kubelet[2829]: I0123 01:03:05.617327 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cni-path" (OuterVolumeSpecName: "cni-path") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.618949 kubelet[2829]: I0123 01:03:05.617998 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-hostproc" (OuterVolumeSpecName: "hostproc") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.618949 kubelet[2829]: I0123 01:03:05.618715 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/191b6cea-9683-44ff-8d80-cd9c896e80cd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "191b6cea-9683-44ff-8d80-cd9c896e80cd" (UID: "191b6cea-9683-44ff-8d80-cd9c896e80cd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 01:03:05.618949 kubelet[2829]: I0123 01:03:05.618751 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.618949 kubelet[2829]: I0123 01:03:05.618764 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.619163 kubelet[2829]: I0123 01:03:05.618776 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.620894 kubelet[2829]: I0123 01:03:05.620874 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:03:05.621033 kubelet[2829]: I0123 01:03:05.621022 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0282ca70-24a7-41b6-ad85-5835a877cab2-kube-api-access-trgnj" (OuterVolumeSpecName: "kube-api-access-trgnj") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "kube-api-access-trgnj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:03:05.621864 kubelet[2829]: I0123 01:03:05.621848 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0282ca70-24a7-41b6-ad85-5835a877cab2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:03:05.622159 kubelet[2829]: I0123 01:03:05.622131 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 01:03:05.622347 kubelet[2829]: I0123 01:03:05.622313 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191b6cea-9683-44ff-8d80-cd9c896e80cd-kube-api-access-6qspq" (OuterVolumeSpecName: "kube-api-access-6qspq") pod "191b6cea-9683-44ff-8d80-cd9c896e80cd" (UID: "191b6cea-9683-44ff-8d80-cd9c896e80cd"). InnerVolumeSpecName "kube-api-access-6qspq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:03:05.622773 kubelet[2829]: I0123 01:03:05.622754 2829 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0282ca70-24a7-41b6-ad85-5835a877cab2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0282ca70-24a7-41b6-ad85-5835a877cab2" (UID: "0282ca70-24a7-41b6-ad85-5835a877cab2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 01:03:05.715903 kubelet[2829]: I0123 01:03:05.715721 2829 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-xtables-lock\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.715903 kubelet[2829]: I0123 01:03:05.715760 2829 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-bpf-maps\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.715903 kubelet[2829]: I0123 01:03:05.715769 2829 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-cgroup\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.715903 kubelet[2829]: I0123 01:03:05.715778 2829 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-trgnj\" (UniqueName: \"kubernetes.io/projected/0282ca70-24a7-41b6-ad85-5835a877cab2-kube-api-access-trgnj\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.715903 kubelet[2829]: I0123 01:03:05.715788 2829 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-host-proc-sys-net\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.715903 kubelet[2829]: I0123 01:03:05.715796 2829 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-etc-cni-netd\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.715903 kubelet[2829]: I0123 01:03:05.715804 2829 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-host-proc-sys-kernel\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.715903 kubelet[2829]: I0123 01:03:05.715811 2829 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0282ca70-24a7-41b6-ad85-5835a877cab2-hubble-tls\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.716188 kubelet[2829]: I0123 01:03:05.715820 2829 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-lib-modules\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.716188 kubelet[2829]: I0123 01:03:05.715828 2829 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/191b6cea-9683-44ff-8d80-cd9c896e80cd-cilium-config-path\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.716188 kubelet[2829]: I0123 01:03:05.715836 2829 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cni-path\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.716188 kubelet[2829]: I0123 01:03:05.715844 2829 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-config-path\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.716188 kubelet[2829]: I0123 01:03:05.715852 2829 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-hostproc\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.716188 kubelet[2829]: I0123 01:03:05.715860 2829 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6qspq\" (UniqueName: \"kubernetes.io/projected/191b6cea-9683-44ff-8d80-cd9c896e80cd-kube-api-access-6qspq\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.716188 kubelet[2829]: I0123 01:03:05.715867 2829 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0282ca70-24a7-41b6-ad85-5835a877cab2-clustermesh-secrets\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:05.716188 kubelet[2829]: I0123 01:03:05.715878 2829 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0282ca70-24a7-41b6-ad85-5835a877cab2-cilium-run\") on node \"ci-4459-2-2-n-6e52943716\" DevicePath \"\""
Jan 23 01:03:06.075615 systemd[1]: Removed slice kubepods-burstable-pod0282ca70_24a7_41b6_ad85_5835a877cab2.slice - libcontainer container kubepods-burstable-pod0282ca70_24a7_41b6_ad85_5835a877cab2.slice.
Jan 23 01:03:06.075920 systemd[1]: kubepods-burstable-pod0282ca70_24a7_41b6_ad85_5835a877cab2.slice: Consumed 5.593s CPU time, 123.9M memory peak, 112K read from disk, 13.3M written to disk.
Jan 23 01:03:06.077835 systemd[1]: Removed slice kubepods-besteffort-pod191b6cea_9683_44ff_8d80_cd9c896e80cd.slice - libcontainer container kubepods-besteffort-pod191b6cea_9683_44ff_8d80_cd9c896e80cd.slice.
Jan 23 01:03:06.395076 systemd[1]: var-lib-kubelet-pods-191b6cea\x2d9683\x2d44ff\x2d8d80\x2dcd9c896e80cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6qspq.mount: Deactivated successfully.
Jan 23 01:03:06.395594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061-shm.mount: Deactivated successfully.
Jan 23 01:03:06.395661 systemd[1]: var-lib-kubelet-pods-0282ca70\x2d24a7\x2d41b6\x2dad85\x2d5835a877cab2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtrgnj.mount: Deactivated successfully.
Jan 23 01:03:06.395726 systemd[1]: var-lib-kubelet-pods-0282ca70\x2d24a7\x2d41b6\x2dad85\x2d5835a877cab2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 23 01:03:06.395781 systemd[1]: var-lib-kubelet-pods-0282ca70\x2d24a7\x2d41b6\x2dad85\x2d5835a877cab2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 23 01:03:06.408208 kubelet[2829]: I0123 01:03:06.407531 2829 scope.go:117] "RemoveContainer" containerID="214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7"
Jan 23 01:03:06.410831 containerd[1632]: time="2026-01-23T01:03:06.410682588Z" level=info msg="RemoveContainer for \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\""
Jan 23 01:03:06.417097 containerd[1632]: time="2026-01-23T01:03:06.417070184Z" level=info msg="RemoveContainer for \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\" returns successfully"
Jan 23 01:03:06.417616 kubelet[2829]: I0123 01:03:06.417600 2829 scope.go:117] "RemoveContainer" containerID="214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7"
Jan 23 01:03:06.418371 containerd[1632]: time="2026-01-23T01:03:06.418323393Z" level=error msg="ContainerStatus for \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\": not found"
Jan 23 01:03:06.418493 kubelet[2829]: E0123 01:03:06.418473 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\": not found" containerID="214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7"
Jan 23 01:03:06.418542 kubelet[2829]: I0123 01:03:06.418500 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7"} err="failed to get container status \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"214c64b0c7f678d9dc407d1f4e3d0d9a3958a5e4a5f0583db5e2d46866a719c7\": not found"
Jan 23 01:03:06.418567 kubelet[2829]: I0123 01:03:06.418546 2829 scope.go:117] "RemoveContainer" containerID="ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192"
Jan 23 01:03:06.423419 containerd[1632]: time="2026-01-23T01:03:06.423185145Z" level=info msg="RemoveContainer for \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\""
Jan 23 01:03:06.429474 containerd[1632]: time="2026-01-23T01:03:06.429111960Z" level=info msg="RemoveContainer for \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\" returns successfully"
Jan 23 01:03:06.429740 kubelet[2829]: I0123 01:03:06.429627 2829 scope.go:117] "RemoveContainer" containerID="245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb"
Jan 23 01:03:06.431282 containerd[1632]: time="2026-01-23T01:03:06.431264353Z" level=info msg="RemoveContainer for \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\""
Jan 23 01:03:06.435342 containerd[1632]: time="2026-01-23T01:03:06.435321262Z" level=info msg="RemoveContainer for \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\" returns successfully"
Jan 23 01:03:06.435747 kubelet[2829]: I0123 01:03:06.435718 2829 scope.go:117] "RemoveContainer" containerID="0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b"
Jan 23 01:03:06.444135 containerd[1632]: time="2026-01-23T01:03:06.443872347Z" level=info msg="RemoveContainer for \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\""
Jan 23 01:03:06.448331 containerd[1632]: time="2026-01-23T01:03:06.448308290Z" level=info msg="RemoveContainer for \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\" returns successfully"
Jan 23 01:03:06.448588 kubelet[2829]: I0123 01:03:06.448451 2829 scope.go:117] "RemoveContainer" containerID="5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d"
Jan 23 01:03:06.449830 containerd[1632]: time="2026-01-23T01:03:06.449814513Z" level=info msg="RemoveContainer for \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\""
Jan 23 01:03:06.454692 containerd[1632]: time="2026-01-23T01:03:06.454509538Z" level=info msg="RemoveContainer for \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\" returns successfully"
Jan 23 01:03:06.454750 kubelet[2829]: I0123 01:03:06.454625 2829 scope.go:117] "RemoveContainer" containerID="5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54"
Jan 23 01:03:06.455770 containerd[1632]: time="2026-01-23T01:03:06.455701899Z" level=info msg="RemoveContainer for \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\""
Jan 23 01:03:06.458606 containerd[1632]: time="2026-01-23T01:03:06.458588511Z" level=info msg="RemoveContainer for \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\" returns successfully"
Jan 23 01:03:06.458824 kubelet[2829]: I0123 01:03:06.458766 2829 scope.go:117] "RemoveContainer" containerID="ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192"
Jan 23 01:03:06.458974 containerd[1632]: time="2026-01-23T01:03:06.458952780Z" level=error msg="ContainerStatus for \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\": not found"
Jan 23 01:03:06.459175 kubelet[2829]: E0123 01:03:06.459161 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\": not found" containerID="ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192"
Jan 23 01:03:06.459260 kubelet[2829]: I0123 01:03:06.459242 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192"} err="failed to get container status \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca1077544ec4422dbf5105603f33adcb5fc18c2b53736781e7c45278bab0e192\": not found"
Jan 23 01:03:06.459302 kubelet[2829]: I0123 01:03:06.459297 2829 scope.go:117] "RemoveContainer" containerID="245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb"
Jan 23 01:03:06.459532 containerd[1632]: time="2026-01-23T01:03:06.459490098Z" level=error msg="ContainerStatus for \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\": not found"
Jan 23 01:03:06.459712 kubelet[2829]: E0123 01:03:06.459652 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\": not found" containerID="245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb"
Jan 23 01:03:06.459712 kubelet[2829]: I0123 01:03:06.459669 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb"} err="failed to get container status \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\": rpc error: code = NotFound desc = an error occurred when try to find container \"245e5386ef1ab0f49b8b9c6624a510267a3c9be89feedd0d50ffb57308002feb\": not found"
Jan 23 01:03:06.459712 kubelet[2829]: I0123 01:03:06.459687 2829 scope.go:117] "RemoveContainer" containerID="0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b"
Jan 23 01:03:06.459908 containerd[1632]: time="2026-01-23T01:03:06.459890379Z" level=error msg="ContainerStatus for \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\": not found"
Jan 23 01:03:06.460035 kubelet[2829]: E0123 01:03:06.460018 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\": not found" containerID="0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b"
Jan 23 01:03:06.460072 kubelet[2829]: I0123 01:03:06.460043 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b"} err="failed to get container status \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0656b347387ff10bfe9155fe6eae0b6d77a98d90a88fb0c5de9a21f09abebe4b\": not found"
Jan 23 01:03:06.460072 kubelet[2829]: I0123 01:03:06.460067 2829 scope.go:117] "RemoveContainer" containerID="5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d"
Jan 23 01:03:06.460225 containerd[1632]: time="2026-01-23T01:03:06.460203193Z" level=error msg="ContainerStatus for \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\": not found"
Jan 23 01:03:06.460375 kubelet[2829]: E0123 01:03:06.460308 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\": not found" containerID="5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d"
Jan 23 01:03:06.460375 kubelet[2829]: I0123 01:03:06.460324 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d"} err="failed to get container status \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5800f5cb4b66067526bfd7cb9fb6ef6d754fea50662aae0361fcb8f8b013792d\": not found"
Jan 23 01:03:06.460375 kubelet[2829]: I0123 01:03:06.460337 2829 scope.go:117] "RemoveContainer" containerID="5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54"
Jan 23 01:03:06.460620 containerd[1632]: time="2026-01-23T01:03:06.460603926Z" level=error msg="ContainerStatus for \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\": not found"
Jan 23 01:03:06.460727 kubelet[2829]: E0123 01:03:06.460716 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\": not found" containerID="5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54"
Jan 23 01:03:06.460788 kubelet[2829]: I0123 01:03:06.460777 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54"} err="failed to get container status \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\": rpc error: code = NotFound desc = an error occurred when try to find container \"5067b91d0a1748154aac5f93069c948a58220d61088bf5052a99ab7273552c54\": not found"
Jan 23 01:03:07.398195 sshd[4361]: Connection closed by 20.161.92.111 port 58990
Jan 23 01:03:07.398913 sshd-session[4358]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:07.403310 systemd[1]: sshd@21-10.0.7.172:22-20.161.92.111:58990.service: Deactivated successfully.
Jan 23 01:03:07.405296 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 01:03:07.406105 systemd-logind[1602]: Session 22 logged out. Waiting for processes to exit.
Jan 23 01:03:07.407363 systemd-logind[1602]: Removed session 22.
Jan 23 01:03:07.507511 systemd[1]: Started sshd@22-10.0.7.172:22-20.161.92.111:58998.service - OpenSSH per-connection server daemon (20.161.92.111:58998).
Jan 23 01:03:08.071246 kubelet[2829]: I0123 01:03:08.070575 2829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0282ca70-24a7-41b6-ad85-5835a877cab2" path="/var/lib/kubelet/pods/0282ca70-24a7-41b6-ad85-5835a877cab2/volumes"
Jan 23 01:03:08.071246 kubelet[2829]: I0123 01:03:08.071027 2829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="191b6cea-9683-44ff-8d80-cd9c896e80cd" path="/var/lib/kubelet/pods/191b6cea-9683-44ff-8d80-cd9c896e80cd/volumes"
Jan 23 01:03:08.107949 sshd[4510]: Accepted publickey for core from 20.161.92.111 port 58998 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w
Jan 23 01:03:08.109388 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:03:08.113241 systemd-logind[1602]: New session 23 of user core.
Jan 23 01:03:08.117251 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 01:03:08.888093 systemd[1]: Created slice kubepods-burstable-pod4b7f4a26_1cec_4fbd_8b38_d3bf599a97a2.slice - libcontainer container kubepods-burstable-pod4b7f4a26_1cec_4fbd_8b38_d3bf599a97a2.slice.
Jan 23 01:03:08.935579 kubelet[2829]: I0123 01:03:08.935219 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-etc-cni-netd\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935579 kubelet[2829]: I0123 01:03:08.935263 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-lib-modules\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935579 kubelet[2829]: I0123 01:03:08.935283 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-clustermesh-secrets\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935579 kubelet[2829]: I0123 01:03:08.935300 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-cilium-ipsec-secrets\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935579 kubelet[2829]: I0123 01:03:08.935316 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-hostproc\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935579 kubelet[2829]: I0123 01:03:08.935330 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-host-proc-sys-net\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935866 kubelet[2829]: I0123 01:03:08.935368 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-hubble-tls\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935866 kubelet[2829]: I0123 01:03:08.935392 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-xtables-lock\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935866 kubelet[2829]: I0123 01:03:08.935409 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-cilium-run\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935866 kubelet[2829]: I0123 01:03:08.935435 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-cilium-cgroup\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935866 kubelet[2829]: I0123 01:03:08.935452 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-cni-path\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935866 kubelet[2829]: I0123 01:03:08.935470 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-cilium-config-path\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935992 kubelet[2829]: I0123 01:03:08.935487 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-host-proc-sys-kernel\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935992 kubelet[2829]: I0123 01:03:08.935503 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp475\" (UniqueName: \"kubernetes.io/projected/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-kube-api-access-cp475\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.935992 kubelet[2829]: I0123 01:03:08.935530 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2-bpf-maps\") pod \"cilium-kwjh4\" (UID: \"4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2\") " pod="kube-system/cilium-kwjh4"
Jan 23 01:03:08.992147 sshd[4513]: Connection closed by 20.161.92.111 port 58998
Jan 23 01:03:08.992996 sshd-session[4510]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:08.997159 systemd-logind[1602]: Session 23 logged out. Waiting for processes to exit.
Jan 23 01:03:08.997640 systemd[1]: sshd@22-10.0.7.172:22-20.161.92.111:58998.service: Deactivated successfully.
Jan 23 01:03:08.999976 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 01:03:09.001943 systemd-logind[1602]: Removed session 23.
Jan 23 01:03:09.099251 systemd[1]: Started sshd@23-10.0.7.172:22-20.161.92.111:59010.service - OpenSSH per-connection server daemon (20.161.92.111:59010).
Jan 23 01:03:09.145370 kubelet[2829]: E0123 01:03:09.145240 2829 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 01:03:09.192765 containerd[1632]: time="2026-01-23T01:03:09.192732232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwjh4,Uid:4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2,Namespace:kube-system,Attempt:0,}"
Jan 23 01:03:09.218647 containerd[1632]: time="2026-01-23T01:03:09.218594736Z" level=info msg="connecting to shim 29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3" address="unix:///run/containerd/s/f29464b98066d54ab42c815761547708b66ea23ef6c8302704c7a1173fca4096" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:03:09.245384 systemd[1]: Started cri-containerd-29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3.scope - libcontainer container 29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3.
Jan 23 01:03:09.269804 containerd[1632]: time="2026-01-23T01:03:09.269764321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwjh4,Uid:4b7f4a26-1cec-4fbd-8b38-d3bf599a97a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\""
Jan 23 01:03:09.278867 containerd[1632]: time="2026-01-23T01:03:09.278424795Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 01:03:09.285944 containerd[1632]: time="2026-01-23T01:03:09.285888767Z" level=info msg="Container b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:09.292403 containerd[1632]: time="2026-01-23T01:03:09.292364350Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb\""
Jan 23 01:03:09.293023 containerd[1632]: time="2026-01-23T01:03:09.292782422Z" level=info msg="StartContainer for \"b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb\""
Jan 23 01:03:09.293904 containerd[1632]: time="2026-01-23T01:03:09.293844788Z" level=info msg="connecting to shim b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb" address="unix:///run/containerd/s/f29464b98066d54ab42c815761547708b66ea23ef6c8302704c7a1173fca4096" protocol=ttrpc version=3
Jan 23 01:03:09.312052 systemd[1]: Started cri-containerd-b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb.scope - libcontainer container b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb.
Jan 23 01:03:09.342756 containerd[1632]: time="2026-01-23T01:03:09.342719670Z" level=info msg="StartContainer for \"b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb\" returns successfully"
Jan 23 01:03:09.349286 systemd[1]: cri-containerd-b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb.scope: Deactivated successfully.
Jan 23 01:03:09.351744 containerd[1632]: time="2026-01-23T01:03:09.351702541Z" level=info msg="received container exit event container_id:\"b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb\" id:\"b14458a44560ab32d9128678e23d3f4414272e60818eb215772b57631538cccb\" pid:4589 exited_at:{seconds:1769130189 nanos:351309477}"
Jan 23 01:03:09.429511 containerd[1632]: time="2026-01-23T01:03:09.429425654Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 01:03:09.438503 containerd[1632]: time="2026-01-23T01:03:09.438469353Z" level=info msg="Container 4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:09.443555 containerd[1632]: time="2026-01-23T01:03:09.443523187Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925\""
Jan 23 01:03:09.444803 containerd[1632]: time="2026-01-23T01:03:09.444061391Z" level=info msg="StartContainer for \"4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925\""
Jan 23 01:03:09.444803 containerd[1632]: time="2026-01-23T01:03:09.444730529Z" level=info msg="connecting to shim 4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925" address="unix:///run/containerd/s/f29464b98066d54ab42c815761547708b66ea23ef6c8302704c7a1173fca4096" protocol=ttrpc version=3
Jan 23 01:03:09.462256 systemd[1]: Started cri-containerd-4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925.scope - libcontainer container 4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925.
Jan 23 01:03:09.487775 containerd[1632]: time="2026-01-23T01:03:09.487744281Z" level=info msg="StartContainer for \"4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925\" returns successfully"
Jan 23 01:03:09.491823 systemd[1]: cri-containerd-4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925.scope: Deactivated successfully.
Jan 23 01:03:09.493072 containerd[1632]: time="2026-01-23T01:03:09.493047523Z" level=info msg="received container exit event container_id:\"4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925\" id:\"4df1d2fcbbee02c6f577231a740a6fbb5eb4cf95f87a9851a1b4ace5ce65b925\" pid:4635 exited_at:{seconds:1769130189 nanos:492828828}"
Jan 23 01:03:09.705262 sshd[4528]: Accepted publickey for core from 20.161.92.111 port 59010 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w
Jan 23 01:03:09.706766 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:03:09.711812 systemd-logind[1602]: New session 24 of user core.
Jan 23 01:03:09.719254 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 01:03:10.126867 sshd[4667]: Connection closed by 20.161.92.111 port 59010
Jan 23 01:03:10.127474 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:10.131722 systemd[1]: sshd@23-10.0.7.172:22-20.161.92.111:59010.service: Deactivated successfully.
Jan 23 01:03:10.133759 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 01:03:10.134580 systemd-logind[1602]: Session 24 logged out. Waiting for processes to exit.
Jan 23 01:03:10.135846 systemd-logind[1602]: Removed session 24.
Jan 23 01:03:10.235998 systemd[1]: Started sshd@24-10.0.7.172:22-20.161.92.111:59026.service - OpenSSH per-connection server daemon (20.161.92.111:59026).
Jan 23 01:03:10.431498 containerd[1632]: time="2026-01-23T01:03:10.431320863Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 01:03:10.447337 containerd[1632]: time="2026-01-23T01:03:10.447284514Z" level=info msg="Container 9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:10.460979 containerd[1632]: time="2026-01-23T01:03:10.460922960Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0\""
Jan 23 01:03:10.462379 containerd[1632]: time="2026-01-23T01:03:10.462357529Z" level=info msg="StartContainer for \"9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0\""
Jan 23 01:03:10.463918 containerd[1632]: time="2026-01-23T01:03:10.463855581Z" level=info msg="connecting to shim 9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0" address="unix:///run/containerd/s/f29464b98066d54ab42c815761547708b66ea23ef6c8302704c7a1173fca4096" protocol=ttrpc version=3
Jan 23 01:03:10.488313 systemd[1]: Started cri-containerd-9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0.scope - libcontainer container 9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0.
Jan 23 01:03:10.548180 containerd[1632]: time="2026-01-23T01:03:10.548034440Z" level=info msg="StartContainer for \"9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0\" returns successfully"
Jan 23 01:03:10.549880 systemd[1]: cri-containerd-9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0.scope: Deactivated successfully.
Jan 23 01:03:10.553620 containerd[1632]: time="2026-01-23T01:03:10.553589255Z" level=info msg="received container exit event container_id:\"9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0\" id:\"9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0\" pid:4690 exited_at:{seconds:1769130190 nanos:553239384}"
Jan 23 01:03:10.577286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9811b32b7acec52749badb432f283eb967e1272aa3b8bf76e492d9ac1de147e0-rootfs.mount: Deactivated successfully.
Jan 23 01:03:10.846206 sshd[4674]: Accepted publickey for core from 20.161.92.111 port 59026 ssh2: RSA SHA256:tQIJN5HlXk0+c/kUIMdsIlPUXB6L6udcPrUheN99J8w
Jan 23 01:03:10.847270 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:03:10.851898 systemd-logind[1602]: New session 25 of user core.
Jan 23 01:03:10.860270 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 01:03:11.434968 containerd[1632]: time="2026-01-23T01:03:11.434936896Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 01:03:11.447400 containerd[1632]: time="2026-01-23T01:03:11.447364186Z" level=info msg="Container dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:11.456995 containerd[1632]: time="2026-01-23T01:03:11.456961042Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219\""
Jan 23 01:03:11.457938 containerd[1632]: time="2026-01-23T01:03:11.457675003Z" level=info msg="StartContainer for \"dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219\""
Jan 23 01:03:11.458981 containerd[1632]: time="2026-01-23T01:03:11.458957630Z" level=info msg="connecting to shim dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219" address="unix:///run/containerd/s/f29464b98066d54ab42c815761547708b66ea23ef6c8302704c7a1173fca4096" protocol=ttrpc version=3
Jan 23 01:03:11.478276 systemd[1]: Started cri-containerd-dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219.scope - libcontainer container dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219.
Jan 23 01:03:11.498837 systemd[1]: cri-containerd-dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219.scope: Deactivated successfully.
Jan 23 01:03:11.500975 containerd[1632]: time="2026-01-23T01:03:11.500876059Z" level=info msg="received container exit event container_id:\"dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219\" id:\"dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219\" pid:4736 exited_at:{seconds:1769130191 nanos:499607413}"
Jan 23 01:03:11.507687 containerd[1632]: time="2026-01-23T01:03:11.507620249Z" level=info msg="StartContainer for \"dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219\" returns successfully"
Jan 23 01:03:11.518670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc93ddfa353199d0007b09ecaaae701c7527e1bc91e2e86220c194bd568af219-rootfs.mount: Deactivated successfully.
Jan 23 01:03:12.439276 containerd[1632]: time="2026-01-23T01:03:12.439243056Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 01:03:12.454926 containerd[1632]: time="2026-01-23T01:03:12.454362253Z" level=info msg="Container 927fd7d849755392f5b728e32c3a1299e5cb76f6fb144923becdebd6f261522c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:12.464948 containerd[1632]: time="2026-01-23T01:03:12.464915460Z" level=info msg="CreateContainer within sandbox \"29cf3730a2d7da6a6e0525e3c2f7302d5dc44d5224f937565871ee8498fb92a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"927fd7d849755392f5b728e32c3a1299e5cb76f6fb144923becdebd6f261522c\""
Jan 23 01:03:12.465313 containerd[1632]: time="2026-01-23T01:03:12.465296371Z" level=info msg="StartContainer for \"927fd7d849755392f5b728e32c3a1299e5cb76f6fb144923becdebd6f261522c\""
Jan 23 01:03:12.466094 containerd[1632]: time="2026-01-23T01:03:12.466069078Z" level=info msg="connecting to shim 927fd7d849755392f5b728e32c3a1299e5cb76f6fb144923becdebd6f261522c" address="unix:///run/containerd/s/f29464b98066d54ab42c815761547708b66ea23ef6c8302704c7a1173fca4096" protocol=ttrpc version=3
Jan 23 01:03:12.485248 systemd[1]: Started cri-containerd-927fd7d849755392f5b728e32c3a1299e5cb76f6fb144923becdebd6f261522c.scope - libcontainer container 927fd7d849755392f5b728e32c3a1299e5cb76f6fb144923becdebd6f261522c.
Jan 23 01:03:12.534515 containerd[1632]: time="2026-01-23T01:03:12.534473998Z" level=info msg="StartContainer for \"927fd7d849755392f5b728e32c3a1299e5cb76f6fb144923becdebd6f261522c\" returns successfully"
Jan 23 01:03:12.851149 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256))
Jan 23 01:03:13.451290 kubelet[2829]: I0123 01:03:13.451224 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kwjh4" podStartSLOduration=5.451198692 podStartE2EDuration="5.451198692s" podCreationTimestamp="2026-01-23 01:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:03:13.449848934 +0000 UTC m=+179.461289625" watchObservedRunningTime="2026-01-23 01:03:13.451198692 +0000 UTC m=+179.462639383"
Jan 23 01:03:14.065619 containerd[1632]: time="2026-01-23T01:03:14.065508437Z" level=info msg="StopPodSandbox for \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\""
Jan 23 01:03:14.066252 containerd[1632]: time="2026-01-23T01:03:14.066014978Z" level=info msg="TearDown network for sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" successfully"
Jan 23 01:03:14.066252 containerd[1632]: time="2026-01-23T01:03:14.066042626Z" level=info msg="StopPodSandbox for \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" returns successfully"
Jan 23 01:03:14.066363 containerd[1632]: time="2026-01-23T01:03:14.066325465Z" level=info msg="RemovePodSandbox for \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\""
Jan 23 01:03:14.066363 containerd[1632]: time="2026-01-23T01:03:14.066347281Z" level=info msg="Forcibly stopping sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\""
Jan 23 01:03:14.066424 containerd[1632]: time="2026-01-23T01:03:14.066416081Z" level=info msg="TearDown network for sandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" successfully"
Jan 23 01:03:14.067480 containerd[1632]: time="2026-01-23T01:03:14.067464195Z" level=info msg="Ensure that sandbox 7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061 in task-service has been cleanup successfully"
Jan 23 01:03:14.071069 containerd[1632]: time="2026-01-23T01:03:14.071048632Z" level=info msg="RemovePodSandbox \"7f505de429fd72a48791596c46ef60571b70714492a35e511cb11654bec1f061\" returns successfully"
Jan 23 01:03:14.071371 containerd[1632]: time="2026-01-23T01:03:14.071357230Z" level=info msg="StopPodSandbox for \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\""
Jan 23 01:03:14.071564 containerd[1632]: time="2026-01-23T01:03:14.071551264Z" level=info msg="TearDown network for sandbox \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" successfully"
Jan 23 01:03:14.071660 containerd[1632]: time="2026-01-23T01:03:14.071628590Z" level=info msg="StopPodSandbox for \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" returns successfully"
Jan 23 01:03:14.071891 containerd[1632]: time="2026-01-23T01:03:14.071878873Z" level=info msg="RemovePodSandbox for \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\""
Jan 23 01:03:14.071953 containerd[1632]: time="2026-01-23T01:03:14.071944695Z" level=info msg="Forcibly stopping sandbox \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\""
Jan 23 01:03:14.072038 containerd[1632]: time="2026-01-23T01:03:14.072030066Z" level=info msg="TearDown network for sandbox \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" successfully"
Jan 23 01:03:14.073056 containerd[1632]: time="2026-01-23T01:03:14.073040341Z" level=info msg="Ensure that sandbox 9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2 in task-service has been cleanup successfully"
Jan 23 01:03:14.076957 containerd[1632]: time="2026-01-23T01:03:14.076921925Z" level=info msg="RemovePodSandbox \"9b385cd294a6a963bf60793fa9f50fea583319747d28be85a0162d4375c093a2\" returns successfully"
Jan 23 01:03:15.489480 systemd-networkd[1506]: lxc_health: Link UP
Jan 23 01:03:15.490358 systemd-networkd[1506]: lxc_health: Gained carrier
Jan 23 01:03:17.267256 systemd-networkd[1506]: lxc_health: Gained IPv6LL
Jan 23 01:03:21.779398 sshd[4718]: Connection closed by 20.161.92.111 port 59026
Jan 23 01:03:21.779730 sshd-session[4674]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:21.783396 systemd[1]: sshd@24-10.0.7.172:22-20.161.92.111:59026.service: Deactivated successfully.
Jan 23 01:03:21.785129 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 01:03:21.785861 systemd-logind[1602]: Session 25 logged out. Waiting for processes to exit.
Jan 23 01:03:21.787356 systemd-logind[1602]: Removed session 25.
Jan 23 01:03:47.511716 kubelet[2829]: E0123 01:03:47.511489 2829 controller.go:195] "Failed to update lease" err="Put \"https://10.0.7.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-6e52943716?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 01:03:47.733518 kubelet[2829]: E0123 01:03:47.733454 2829 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.7.172:39702->10.0.7.229:2379: read: connection timed out"
Jan 23 01:03:48.588440 systemd[1]: cri-containerd-61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463.scope: Deactivated successfully.
Jan 23 01:03:48.588730 systemd[1]: cri-containerd-61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463.scope: Consumed 2.776s CPU time, 56.7M memory peak.
Jan 23 01:03:48.591959 containerd[1632]: time="2026-01-23T01:03:48.591922890Z" level=info msg="received container exit event container_id:\"61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463\" id:\"61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463\" pid:2674 exit_status:1 exited_at:{seconds:1769130228 nanos:590696918}"
Jan 23 01:03:48.617210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463-rootfs.mount: Deactivated successfully.
Jan 23 01:03:49.502551 kubelet[2829]: I0123 01:03:49.502520 2829 scope.go:117] "RemoveContainer" containerID="61b2495c0a208264200afff0891f4810a86ea3d8470308e9a0327533df761463"
Jan 23 01:03:49.504312 containerd[1632]: time="2026-01-23T01:03:49.504254565Z" level=info msg="CreateContainer within sandbox \"3fd93cedb0f47da64875220e3e2c24c171c8a6c116a748a6fb8da24ebd9f7991\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 01:03:49.513133 containerd[1632]: time="2026-01-23T01:03:49.510945740Z" level=info msg="Container 1842a14aeb8f6bd4ad61a0f4d6d9c6947f4a2f0c9a5a474d4f457fe03deb98be: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:49.518971 containerd[1632]: time="2026-01-23T01:03:49.518931481Z" level=info msg="CreateContainer within sandbox \"3fd93cedb0f47da64875220e3e2c24c171c8a6c116a748a6fb8da24ebd9f7991\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1842a14aeb8f6bd4ad61a0f4d6d9c6947f4a2f0c9a5a474d4f457fe03deb98be\""
Jan 23 01:03:49.519598 containerd[1632]: time="2026-01-23T01:03:49.519506683Z" level=info msg="StartContainer for \"1842a14aeb8f6bd4ad61a0f4d6d9c6947f4a2f0c9a5a474d4f457fe03deb98be\""
Jan 23 01:03:49.520630 containerd[1632]: time="2026-01-23T01:03:49.520613168Z" level=info msg="connecting to shim 1842a14aeb8f6bd4ad61a0f4d6d9c6947f4a2f0c9a5a474d4f457fe03deb98be" address="unix:///run/containerd/s/b37a963dda18a708dc6429a4718549f3aa30afebe272b939dfd2ac159c48df34" protocol=ttrpc version=3
Jan 23 01:03:49.542318 systemd[1]: Started cri-containerd-1842a14aeb8f6bd4ad61a0f4d6d9c6947f4a2f0c9a5a474d4f457fe03deb98be.scope - libcontainer container 1842a14aeb8f6bd4ad61a0f4d6d9c6947f4a2f0c9a5a474d4f457fe03deb98be.
Jan 23 01:03:49.588631 containerd[1632]: time="2026-01-23T01:03:49.588600364Z" level=info msg="StartContainer for \"1842a14aeb8f6bd4ad61a0f4d6d9c6947f4a2f0c9a5a474d4f457fe03deb98be\" returns successfully"
Jan 23 01:03:52.021826 kubelet[2829]: E0123 01:03:52.021349 2829 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.7.172:39530->10.0.7.229:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-2-n-6e52943716.188d3690324d7c9d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-2-n-6e52943716,UID:425e8997713e272c7ba57c9b39853339,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-n-6e52943716,},FirstTimestamp:2026-01-23 01:03:41.573602461 +0000 UTC m=+207.585043133,LastTimestamp:2026-01-23 01:03:41.573602461 +0000 UTC m=+207.585043133,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-n-6e52943716,}"
Jan 23 01:03:53.237146 kubelet[2829]: I0123 01:03:53.236965 2829 status_manager.go:895] "Failed to get status for pod" podUID="425e8997713e272c7ba57c9b39853339" pod="kube-system/kube-apiserver-ci-4459-2-2-n-6e52943716" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.7.172:39648->10.0.7.229:2379: read: connection timed out"
Jan 23 01:03:53.623880 systemd[1]: cri-containerd-1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032.scope: Deactivated successfully.
Jan 23 01:03:53.624203 systemd[1]: cri-containerd-1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032.scope: Consumed 2.223s CPU time, 21.5M memory peak.
Jan 23 01:03:53.625937 containerd[1632]: time="2026-01-23T01:03:53.625875993Z" level=info msg="received container exit event container_id:\"1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032\" id:\"1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032\" pid:2644 exit_status:1 exited_at:{seconds:1769130233 nanos:625620492}"
Jan 23 01:03:53.644953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032-rootfs.mount: Deactivated successfully.
Jan 23 01:03:54.519460 kubelet[2829]: I0123 01:03:54.519272 2829 scope.go:117] "RemoveContainer" containerID="1fb94f9409489dd9c0504933bea9b80e4de3bbb87403d3a5981a598095bc2032"
Jan 23 01:03:54.520602 containerd[1632]: time="2026-01-23T01:03:54.520578182Z" level=info msg="CreateContainer within sandbox \"4ce1f8a48ed5ffb45ba91499d31d2de8b0f7113f9aad17414bb21f453a3fb5d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 01:03:54.532134 containerd[1632]: time="2026-01-23T01:03:54.531798094Z" level=info msg="Container a788eec2010160db0f056506464c871852a09cdfd58e0cf6012b984e5c0bb33d: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:54.540022 containerd[1632]: time="2026-01-23T01:03:54.539997692Z" level=info msg="CreateContainer within sandbox \"4ce1f8a48ed5ffb45ba91499d31d2de8b0f7113f9aad17414bb21f453a3fb5d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a788eec2010160db0f056506464c871852a09cdfd58e0cf6012b984e5c0bb33d\""
Jan 23 01:03:54.540541 containerd[1632]: time="2026-01-23T01:03:54.540519776Z" level=info msg="StartContainer for \"a788eec2010160db0f056506464c871852a09cdfd58e0cf6012b984e5c0bb33d\""
Jan 23 01:03:54.541351 containerd[1632]: time="2026-01-23T01:03:54.541332702Z" level=info msg="connecting to shim a788eec2010160db0f056506464c871852a09cdfd58e0cf6012b984e5c0bb33d" address="unix:///run/containerd/s/652e54625ba14c6a3141f4610a554a682df20a9c5e30032af709bb43d6f46b89" protocol=ttrpc version=3
Jan 23 01:03:54.559253 systemd[1]: Started cri-containerd-a788eec2010160db0f056506464c871852a09cdfd58e0cf6012b984e5c0bb33d.scope - libcontainer container a788eec2010160db0f056506464c871852a09cdfd58e0cf6012b984e5c0bb33d.
Jan 23 01:03:54.604338 containerd[1632]: time="2026-01-23T01:03:54.604307319Z" level=info msg="StartContainer for \"a788eec2010160db0f056506464c871852a09cdfd58e0cf6012b984e5c0bb33d\" returns successfully"
Jan 23 01:03:57.735006 kubelet[2829]: E0123 01:03:57.734970 2829 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4459-2-2-n-6e52943716)"