Dec 16 13:08:35.981797 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:08:35.981851 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:08:35.981923 kernel: BIOS-provided physical RAM map:
Dec 16 13:08:35.981945 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:08:35.981959 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 16 13:08:35.981972 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 16 13:08:35.981988 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 16 13:08:35.982002 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 16 13:08:35.982015 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 16 13:08:35.982033 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 16 13:08:35.982047 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e73efff] usable
Dec 16 13:08:35.982061 kernel: BIOS-e820: [mem 0x000000007e73f000-0x000000007e7fffff] reserved
Dec 16 13:08:35.982074 kernel: BIOS-e820: [mem 0x000000007e800000-0x000000007ea70fff] usable
Dec 16 13:08:35.982087 kernel: BIOS-e820: [mem 0x000000007ea71000-0x000000007eb84fff] reserved
Dec 16 13:08:35.982104 kernel: BIOS-e820: [mem 0x000000007eb85000-0x000000007f6ecfff] usable
Dec 16 13:08:35.982122 kernel: BIOS-e820: [mem 0x000000007f6ed000-0x000000007f96cfff] reserved
Dec 16 13:08:35.982137 kernel: BIOS-e820: [mem 0x000000007f96d000-0x000000007f97efff] ACPI data
Dec 16 13:08:35.982151 kernel: BIOS-e820: [mem 0x000000007f97f000-0x000000007f9fefff] ACPI NVS
Dec 16 13:08:35.982165 kernel: BIOS-e820: [mem 0x000000007f9ff000-0x000000007fe4efff] usable
Dec 16 13:08:35.982179 kernel: BIOS-e820: [mem 0x000000007fe4f000-0x000000007fe52fff] reserved
Dec 16 13:08:35.982192 kernel: BIOS-e820: [mem 0x000000007fe53000-0x000000007fe54fff] ACPI NVS
Dec 16 13:08:35.982206 kernel: BIOS-e820: [mem 0x000000007fe55000-0x000000007febbfff] usable
Dec 16 13:08:35.982220 kernel: BIOS-e820: [mem 0x000000007febc000-0x000000007ff3ffff] reserved
Dec 16 13:08:35.982234 kernel: BIOS-e820: [mem 0x000000007ff40000-0x000000007fffffff] ACPI NVS
Dec 16 13:08:35.982248 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 13:08:35.982265 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:08:35.982279 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 16 13:08:35.982293 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000047fffffff] usable
Dec 16 13:08:35.982307 kernel: NX (Execute Disable) protection: active
Dec 16 13:08:35.982321 kernel: APIC: Static calls initialized
Dec 16 13:08:35.982335 kernel: e820: update [mem 0x7dd4e018-0x7dd57a57] usable ==> usable
Dec 16 13:08:35.982350 kernel: e820: update [mem 0x7dd26018-0x7dd4d457] usable ==> usable
Dec 16 13:08:35.982365 kernel: extended physical RAM map:
Dec 16 13:08:35.982379 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:08:35.982393 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 16 13:08:35.982407 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 16 13:08:35.982425 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 16 13:08:35.982439 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 16 13:08:35.982453 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 16 13:08:35.982468 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 16 13:08:35.982488 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007dd26017] usable
Dec 16 13:08:35.982503 kernel: reserve setup_data: [mem 0x000000007dd26018-0x000000007dd4d457] usable
Dec 16 13:08:35.982518 kernel: reserve setup_data: [mem 0x000000007dd4d458-0x000000007dd4e017] usable
Dec 16 13:08:35.982537 kernel: reserve setup_data: [mem 0x000000007dd4e018-0x000000007dd57a57] usable
Dec 16 13:08:35.982552 kernel: reserve setup_data: [mem 0x000000007dd57a58-0x000000007e73efff] usable
Dec 16 13:08:35.982567 kernel: reserve setup_data: [mem 0x000000007e73f000-0x000000007e7fffff] reserved
Dec 16 13:08:35.982581 kernel: reserve setup_data: [mem 0x000000007e800000-0x000000007ea70fff] usable
Dec 16 13:08:35.982596 kernel: reserve setup_data: [mem 0x000000007ea71000-0x000000007eb84fff] reserved
Dec 16 13:08:35.982611 kernel: reserve setup_data: [mem 0x000000007eb85000-0x000000007f6ecfff] usable
Dec 16 13:08:35.982625 kernel: reserve setup_data: [mem 0x000000007f6ed000-0x000000007f96cfff] reserved
Dec 16 13:08:35.982640 kernel: reserve setup_data: [mem 0x000000007f96d000-0x000000007f97efff] ACPI data
Dec 16 13:08:35.982655 kernel: reserve setup_data: [mem 0x000000007f97f000-0x000000007f9fefff] ACPI NVS
Dec 16 13:08:35.982673 kernel: reserve setup_data: [mem 0x000000007f9ff000-0x000000007fe4efff] usable
Dec 16 13:08:35.982688 kernel: reserve setup_data: [mem 0x000000007fe4f000-0x000000007fe52fff] reserved
Dec 16 13:08:35.982703 kernel: reserve setup_data: [mem 0x000000007fe53000-0x000000007fe54fff] ACPI NVS
Dec 16 13:08:35.982718 kernel: reserve setup_data: [mem 0x000000007fe55000-0x000000007febbfff] usable
Dec 16 13:08:35.982733 kernel: reserve setup_data: [mem 0x000000007febc000-0x000000007ff3ffff] reserved
Dec 16 13:08:35.982748 kernel: reserve setup_data: [mem 0x000000007ff40000-0x000000007fffffff] ACPI NVS
Dec 16 13:08:35.982763 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 13:08:35.982778 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:08:35.982792 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 16 13:08:35.982807 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000047fffffff] usable
Dec 16 13:08:35.982822 kernel: efi: EFI v2.7 by EDK II
Dec 16 13:08:35.982840 kernel: efi: SMBIOS=0x7f772000 ACPI=0x7f97e000 ACPI 2.0=0x7f97e014 MEMATTR=0x7e282018 RNG=0x7f972018
Dec 16 13:08:35.982855 kernel: random: crng init done
Dec 16 13:08:35.982884 kernel: efi: Remove mem152: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Dec 16 13:08:35.982899 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Dec 16 13:08:35.982914 kernel: secureboot: Secure boot disabled
Dec 16 13:08:35.982928 kernel: SMBIOS 2.8 present.
Dec 16 13:08:35.982961 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 16 13:08:35.982976 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:08:35.982990 kernel: Hypervisor detected: KVM
Dec 16 13:08:35.983005 kernel: last_pfn = 0x7febc max_arch_pfn = 0x10000000000
Dec 16 13:08:35.983020 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:08:35.983034 kernel: kvm-clock: using sched offset of 6970504987 cycles
Dec 16 13:08:35.983054 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:08:35.983083 kernel: tsc: Detected 2294.608 MHz processor
Dec 16 13:08:35.983100 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:08:35.983115 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:08:35.983131 kernel: last_pfn = 0x480000 max_arch_pfn = 0x10000000000
Dec 16 13:08:35.983146 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:08:35.983162 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:08:35.983176 kernel: last_pfn = 0x7febc max_arch_pfn = 0x10000000000
Dec 16 13:08:35.983190 kernel: Using GB pages for direct mapping
Dec 16 13:08:35.983208 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:08:35.983223 kernel: ACPI: RSDP 0x000000007F97E014 000024 (v02 BOCHS )
Dec 16 13:08:35.983237 kernel: ACPI: XSDT 0x000000007F97D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Dec 16 13:08:35.983252 kernel: ACPI: FACP 0x000000007F977000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:35.983266 kernel: ACPI: DSDT 0x000000007F978000 004441 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:35.983280 kernel: ACPI: FACS 0x000000007F9DD000 000040
Dec 16 13:08:35.983294 kernel: ACPI: APIC 0x000000007F976000 0000B0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:35.983309 kernel: ACPI: MCFG 0x000000007F975000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:35.983323 kernel: ACPI: WAET 0x000000007F974000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:08:35.983341 kernel: ACPI: BGRT 0x000000007F973000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 16 13:08:35.983355 kernel: ACPI: Reserving FACP table memory at [mem 0x7f977000-0x7f9770f3]
Dec 16 13:08:35.983369 kernel: ACPI: Reserving DSDT table memory at [mem 0x7f978000-0x7f97c440]
Dec 16 13:08:35.983383 kernel: ACPI: Reserving FACS table memory at [mem 0x7f9dd000-0x7f9dd03f]
Dec 16 13:08:35.983398 kernel: ACPI: Reserving APIC table memory at [mem 0x7f976000-0x7f9760af]
Dec 16 13:08:35.983411 kernel: ACPI: Reserving MCFG table memory at [mem 0x7f975000-0x7f97503b]
Dec 16 13:08:35.983426 kernel: ACPI: Reserving WAET table memory at [mem 0x7f974000-0x7f974027]
Dec 16 13:08:35.983440 kernel: ACPI: Reserving BGRT table memory at [mem 0x7f973000-0x7f973037]
Dec 16 13:08:35.983454 kernel: No NUMA configuration found
Dec 16 13:08:35.983473 kernel: Faking a node at [mem 0x0000000000000000-0x000000047fffffff]
Dec 16 13:08:35.983487 kernel: NODE_DATA(0) allocated [mem 0x47fff8dc0-0x47fffffff]
Dec 16 13:08:35.983501 kernel: Zone ranges:
Dec 16 13:08:35.983516 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:08:35.983530 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:08:35.983544 kernel: Normal [mem 0x0000000100000000-0x000000047fffffff]
Dec 16 13:08:35.983558 kernel: Device empty
Dec 16 13:08:35.983573 kernel: Movable zone start for each node
Dec 16 13:08:35.983587 kernel: Early memory node ranges
Dec 16 13:08:35.983601 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:08:35.983618 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 16 13:08:35.983632 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 16 13:08:35.983647 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Dec 16 13:08:35.983661 kernel: node 0: [mem 0x0000000000900000-0x000000007e73efff]
Dec 16 13:08:35.983675 kernel: node 0: [mem 0x000000007e800000-0x000000007ea70fff]
Dec 16 13:08:35.983690 kernel: node 0: [mem 0x000000007eb85000-0x000000007f6ecfff]
Dec 16 13:08:35.983717 kernel: node 0: [mem 0x000000007f9ff000-0x000000007fe4efff]
Dec 16 13:08:35.983735 kernel: node 0: [mem 0x000000007fe55000-0x000000007febbfff]
Dec 16 13:08:35.983750 kernel: node 0: [mem 0x0000000100000000-0x000000047fffffff]
Dec 16 13:08:35.983766 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000047fffffff]
Dec 16 13:08:35.983782 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:08:35.983797 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:08:35.983816 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 16 13:08:35.983832 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:08:35.983847 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Dec 16 13:08:35.983873 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Dec 16 13:08:35.983889 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges
Dec 16 13:08:35.983908 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 16 13:08:35.983923 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 16 13:08:35.983940 kernel: On node 0, zone Normal: 324 pages in unavailable ranges
Dec 16 13:08:35.983955 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:08:35.983971 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:08:35.983987 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:08:35.984003 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:08:35.984019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:08:35.984035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:08:35.984053 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:08:35.984069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:08:35.984085 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:08:35.984100 kernel: TSC deadline timer available
Dec 16 13:08:35.984116 kernel: CPU topo: Max. logical packages: 8
Dec 16 13:08:35.984132 kernel: CPU topo: Max. logical dies: 8
Dec 16 13:08:35.984147 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:08:35.984162 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:08:35.984178 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:08:35.984196 kernel: CPU topo: Num. threads per package: 1
Dec 16 13:08:35.984212 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 16 13:08:35.984227 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:08:35.984243 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 13:08:35.984258 kernel: kvm-guest: setup PV sched yield
Dec 16 13:08:35.984274 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 16 13:08:35.984289 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:08:35.984305 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:08:35.984321 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 16 13:08:35.984339 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Dec 16 13:08:35.984355 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Dec 16 13:08:35.984370 kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7
Dec 16 13:08:35.984386 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:08:35.984401 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:08:35.984419 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:08:35.984435 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 16 13:08:35.984451 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 13:08:35.984470 kernel: Fallback order for Node 0: 0
Dec 16 13:08:35.984486 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4192374
Dec 16 13:08:35.984501 kernel: Policy zone: Normal
Dec 16 13:08:35.984517 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:08:35.984533 kernel: software IO TLB: area num 8.
Dec 16 13:08:35.984549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 16 13:08:35.984564 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:08:35.984581 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:08:35.984596 kernel: Dynamic Preempt: voluntary
Dec 16 13:08:35.984615 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:08:35.984633 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:08:35.984649 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=8.
Dec 16 13:08:35.984665 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:08:35.984681 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:08:35.984697 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:08:35.984712 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:08:35.984728 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 16 13:08:35.984744 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 16 13:08:35.984763 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 16 13:08:35.984779 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 16 13:08:35.984795 kernel: NR_IRQS: 33024, nr_irqs: 488, preallocated irqs: 16
Dec 16 13:08:35.984810 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:08:35.984826 kernel: Console: colour dummy device 80x25
Dec 16 13:08:35.984842 kernel: printk: legacy console [tty0] enabled
Dec 16 13:08:35.984857 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:08:35.984883 kernel: ACPI: Core revision 20240827
Dec 16 13:08:35.984899 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:08:35.984918 kernel: x2apic enabled
Dec 16 13:08:35.984934 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:08:35.984949 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 13:08:35.984965 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 13:08:35.984981 kernel: kvm-guest: setup PV IPIs
Dec 16 13:08:35.984997 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Dec 16 13:08:35.985013 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Dec 16 13:08:35.985028 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:08:35.985044 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 13:08:35.985063 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 13:08:35.985078 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:08:35.985093 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Dec 16 13:08:35.985108 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 16 13:08:35.985124 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 16 13:08:35.985139 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:08:35.985154 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:08:35.985169 kernel: TAA: Mitigation: Clear CPU buffers
Dec 16 13:08:35.985184 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 16 13:08:35.985200 kernel: active return thunk: its_return_thunk
Dec 16 13:08:35.985215 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:08:35.985234 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:08:35.985249 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:08:35.985264 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:08:35.985280 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:08:35.985295 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:08:35.985310 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:08:35.985325 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 13:08:35.985341 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:08:35.985356 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 16 13:08:35.985371 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 16 13:08:35.985386 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 16 13:08:35.985404 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Dec 16 13:08:35.985420 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Dec 16 13:08:35.985435 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:08:35.985450 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:08:35.985466 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:08:35.985481 kernel: landlock: Up and running.
Dec 16 13:08:35.985496 kernel: SELinux: Initializing.
Dec 16 13:08:35.985511 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:08:35.985527 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:08:35.985542 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Dec 16 13:08:35.985558 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver.
Dec 16 13:08:35.985577 kernel: ... version: 2
Dec 16 13:08:35.985593 kernel: ... bit width: 48
Dec 16 13:08:35.985608 kernel: ... generic registers: 8
Dec 16 13:08:35.985624 kernel: ... value mask: 0000ffffffffffff
Dec 16 13:08:35.985639 kernel: ... max period: 00007fffffffffff
Dec 16 13:08:35.985655 kernel: ... fixed-purpose events: 3
Dec 16 13:08:35.985670 kernel: ... event mask: 00000007000000ff
Dec 16 13:08:35.985686 kernel: signal: max sigframe size: 3632
Dec 16 13:08:35.985701 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:08:35.985717 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:08:35.985736 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:08:35.985752 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:08:35.985767 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:08:35.985783 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Dec 16 13:08:35.985798 kernel: smp: Brought up 1 node, 8 CPUs
Dec 16 13:08:35.985814 kernel: smpboot: Total of 8 processors activated (36713.72 BogoMIPS)
Dec 16 13:08:35.985830 kernel: Memory: 16308140K/16769496K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 453240K reserved, 0K cma-reserved)
Dec 16 13:08:35.985846 kernel: devtmpfs: initialized
Dec 16 13:08:35.985872 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:08:35.985891 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 16 13:08:35.985907 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 16 13:08:35.985923 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Dec 16 13:08:35.985938 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7f97f000-0x7f9fefff] (524288 bytes)
Dec 16 13:08:35.985954 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fe53000-0x7fe54fff] (8192 bytes)
Dec 16 13:08:35.985969 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff40000-0x7fffffff] (786432 bytes)
Dec 16 13:08:35.985985 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:08:35.986001 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec 16 13:08:35.986019 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:08:35.986035 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:08:35.986050 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:08:35.986066 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:08:35.986082 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:08:35.986097 kernel: audit: type=2000 audit(1765890511.889:1): state=initialized audit_enabled=0 res=1
Dec 16 13:08:35.986113 kernel: cpuidle: using governor menu
Dec 16 13:08:35.986128 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:08:35.986144 kernel: dca service started, version 1.12.1
Dec 16 13:08:35.986163 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Dec 16 13:08:35.986179 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:08:35.986195 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:08:35.986211 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:08:35.986226 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:08:35.986242 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:08:35.986258 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:08:35.986273 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:08:35.986289 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:08:35.986307 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:08:35.986323 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:08:35.986338 kernel: ACPI: Interpreter enabled
Dec 16 13:08:35.986354 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 16 13:08:35.986370 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:08:35.986386 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:08:35.986401 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:08:35.986417 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 13:08:35.986433 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:08:35.986682 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:08:35.986839 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 13:08:35.987017 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 13:08:35.987038 kernel: PCI host bridge to bus 0000:00
Dec 16 13:08:35.987178 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:08:35.987301 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:08:35.987426 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:08:35.987545 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Dec 16 13:08:35.987665 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 16 13:08:35.987782 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window]
Dec 16 13:08:35.987915 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:08:35.988073 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:08:35.988230 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:08:35.988375 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Dec 16 13:08:35.988513 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref]
Dec 16 13:08:35.988647 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff]
Dec 16 13:08:35.988781 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 13:08:35.988931 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:08:35.989078 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.989215 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff]
Dec 16 13:08:35.989354 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 16 13:08:35.989492 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff]
Dec 16 13:08:35.989626 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff]
Dec 16 13:08:35.989759 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Dec 16 13:08:35.989930 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.990070 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff]
Dec 16 13:08:35.990209 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 16 13:08:35.990341 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff]
Dec 16 13:08:35.990473 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref]
Dec 16 13:08:35.990615 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.990750 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff]
Dec 16 13:08:35.990898 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 16 13:08:35.991052 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff]
Dec 16 13:08:35.991191 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref]
Dec 16 13:08:35.991337 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.991473 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff]
Dec 16 13:08:35.991598 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 16 13:08:35.991721 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff]
Dec 16 13:08:35.991845 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref]
Dec 16 13:08:35.991992 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.992122 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff]
Dec 16 13:08:35.992247 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 16 13:08:35.992371 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff]
Dec 16 13:08:35.992499 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref]
Dec 16 13:08:35.992632 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.992759 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff]
Dec 16 13:08:35.992900 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 16 13:08:35.993027 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff]
Dec 16 13:08:35.993156 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref]
Dec 16 13:08:35.993289 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.993421 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff]
Dec 16 13:08:35.993546 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 16 13:08:35.993670 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff]
Dec 16 13:08:35.993795 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref]
Dec 16 13:08:35.993959 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.994094 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff]
Dec 16 13:08:35.994218 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 16 13:08:35.994341 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff]
Dec 16 13:08:35.994466 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref]
Dec 16 13:08:35.994599 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.994726 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff]
Dec 16 13:08:35.994856 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Dec 16 13:08:35.995015 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff]
Dec 16 13:08:35.995140 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref]
Dec 16 13:08:35.995280 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.995432 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff]
Dec 16 13:08:35.995546 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Dec 16 13:08:35.995655 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff]
Dec 16 13:08:35.995769 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref]
Dec 16 13:08:35.995901 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.996015 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff]
Dec 16 13:08:35.996126 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Dec 16 13:08:35.996236 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff]
Dec 16 13:08:35.996345 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref]
Dec 16 13:08:35.996463 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.996579 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff]
Dec 16 13:08:35.996692 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Dec 16 13:08:35.996804 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff]
Dec 16 13:08:35.996925 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref]
Dec 16 13:08:35.997044 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.997158 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff]
Dec 16 13:08:35.997269 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Dec 16 13:08:35.997381 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff]
Dec 16 13:08:35.997493 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref]
Dec 16 13:08:35.997612 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.997725 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff]
Dec 16 13:08:35.997835 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Dec 16 13:08:35.997963 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff]
Dec 16 13:08:35.998074 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref]
Dec 16 13:08:35.998192 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.998302 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff]
Dec 16 13:08:35.998414 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Dec 16 13:08:35.998526 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff]
Dec 16 13:08:35.998637 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref]
Dec 16 13:08:35.998761 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.998885 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff]
Dec 16 13:08:35.999009 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Dec 16 13:08:35.999119 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff]
Dec 16 13:08:35.999230 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref]
Dec 16 13:08:35.999349 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:35.999462 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff]
Dec 16 13:08:35.999572 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Dec 16 13:08:35.999676 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff]
Dec 16 13:08:35.999781 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref]
Dec 16 13:08:35.999902 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.000008 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff]
Dec 16 13:08:36.000111 kernel: pci 0000:00:04.1: PCI bridge to [bus 13]
Dec 16 13:08:36.000216 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff]
Dec 16 13:08:36.000327 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref]
Dec 16 13:08:36.000444 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.000549 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff]
Dec 16 13:08:36.000653 kernel: pci 0000:00:04.2: PCI bridge to [bus 14]
Dec 16 13:08:36.000757 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff]
Dec 16 13:08:36.000860 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref]
Dec 16 13:08:36.000983 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.001092 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff]
Dec 16 13:08:36.001197 kernel: pci 0000:00:04.3: PCI bridge to [bus 15]
Dec 16 13:08:36.001300 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff]
Dec 16 13:08:36.001404 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref]
Dec 16 13:08:36.001516 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.001621 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff]
Dec 16 13:08:36.001726 kernel: pci 0000:00:04.4: PCI bridge to [bus 16]
Dec 16 13:08:36.001830 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff]
Dec 16 13:08:36.001952 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref]
Dec 16 13:08:36.002066 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.002175 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff]
Dec 16 13:08:36.002280 kernel: pci 0000:00:04.5: PCI bridge to [bus 17]
Dec 16 13:08:36.002384 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff]
Dec 16 13:08:36.002491 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref]
Dec 16 13:08:36.002603 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.002712 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff]
Dec 16 13:08:36.002818 kernel: pci 0000:00:04.6: PCI bridge to [bus 18]
Dec 16 13:08:36.002952 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff]
Dec 16 13:08:36.003059 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref]
Dec 16 13:08:36.003178 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.003284 kernel: pci 0000:00:04.7: BAR 0 [mem 0x84386000-0x84386fff]
Dec 16 13:08:36.003384 kernel: pci 0000:00:04.7: PCI bridge to [bus 19]
Dec 16 13:08:36.003486 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff]
Dec 16 13:08:36.003585 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref]
Dec 16 13:08:36.003691 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.003791 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff]
Dec 16 13:08:36.003904 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a]
Dec 16 13:08:36.004004 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff]
Dec 16 13:08:36.004103 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref]
Dec 16 13:08:36.004208 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.004307 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff]
Dec 16 13:08:36.004406 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b]
Dec 16 13:08:36.004504 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff]
Dec 16 13:08:36.004606 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref]
Dec 16 13:08:36.004711 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.004810 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff]
Dec 16 13:08:36.004919 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c]
Dec 16 13:08:36.005017 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff]
Dec 16 13:08:36.005117 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref]
Dec 16 13:08:36.005231 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.005335 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff]
Dec 16 13:08:36.005433 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d]
Dec 16 13:08:36.005531 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff]
Dec 16 13:08:36.005629 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref]
Dec 16 13:08:36.005735 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 13:08:36.005836 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff]
Dec 16 13:08:36.005948 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e]
Dec 16 13:08:36.006050 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff]
Dec 16 13:08:36.006148 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref]
Dec 16 13:08:36.006254 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:08:36.006355 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 13:08:36.006462 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 13:08:36.006561 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f]
Dec 16 13:08:36.006660 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff]
Dec 16 13:08:36.006768 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 13:08:36.006878 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f]
Dec 16 13:08:36.007002 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Dec 16 13:08:36.007108 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit]
Dec 16 13:08:36.007211 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 16 13:08:36.007313 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff]
Dec 16 13:08:36.007418 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff]
Dec 16 13:08:36.007523 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Dec 16 13:08:36.007620 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 16 13:08:36.007730 kernel: pci_bus 0000:02: extended config space not accessible
Dec 16 13:08:36.007745 kernel: acpiphp: Slot [1] registered
Dec 16 13:08:36.007757 kernel: acpiphp: Slot [0] registered
Dec 16 13:08:36.007767 kernel: acpiphp: Slot [2] registered
Dec 16 13:08:36.007778 kernel: acpiphp: Slot [3] registered
Dec 16 13:08:36.007788 kernel: acpiphp: Slot [4] registered
Dec 16 13:08:36.007802 kernel: acpiphp: Slot [5] registered
Dec 16 13:08:36.007813 kernel: acpiphp: Slot [6] registered
Dec 16 13:08:36.007823 kernel: acpiphp: Slot [7] registered
Dec 16 13:08:36.007834 kernel: acpiphp: Slot [8] registered
Dec 16 13:08:36.007844 kernel: acpiphp: Slot [9] registered
Dec 16 13:08:36.007855 kernel: acpiphp: Slot [10] registered
Dec 16 13:08:36.007875 kernel: acpiphp: Slot [11] registered
Dec 16 13:08:36.007886 kernel: acpiphp: Slot [12] registered
Dec 16 13:08:36.007896 kernel: acpiphp: Slot [13] registered
Dec 16 13:08:36.007907 kernel: acpiphp: Slot [14] registered
Dec 16 13:08:36.007919 kernel: acpiphp: Slot [15] registered
Dec 16 13:08:36.007930 kernel: acpiphp: Slot [16] registered
Dec 16 13:08:36.007940 kernel: acpiphp: Slot [17] registered
Dec 16 13:08:36.007951 kernel: acpiphp: Slot [18] registered
Dec 16 13:08:36.007961 kernel: acpiphp: Slot [19] registered
Dec 16 13:08:36.007971 kernel: acpiphp: Slot [20] registered
Dec 16 13:08:36.007981 kernel: acpiphp: Slot [21] registered
Dec 16 13:08:36.007992 kernel: acpiphp: Slot [22] registered
Dec 16 13:08:36.008002 kernel: acpiphp: Slot [23] registered
Dec 16 13:08:36.008015 kernel: acpiphp: Slot [24] registered
Dec 16 13:08:36.008025 kernel: acpiphp: Slot [25] registered
Dec 16 13:08:36.008035 kernel: acpiphp: Slot [26] registered
Dec 16 13:08:36.008045 kernel: acpiphp: Slot [27] registered
Dec 16 13:08:36.008056 kernel: acpiphp: Slot [28] registered
Dec 16 13:08:36.008066 kernel: acpiphp: Slot [29] registered
Dec 16 13:08:36.008076 kernel: acpiphp: Slot [30] registered
Dec 16 13:08:36.008087 kernel: acpiphp: Slot [31] registered
Dec 16 13:08:36.008197 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 16 13:08:36.008302 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f]
Dec 16 13:08:36.008401 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 16 13:08:36.008415 kernel: acpiphp: Slot [0-2] registered
Dec 16 13:08:36.008524 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:08:36.008626 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff]
Dec 16 13:08:36.008725 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref]
Dec 16 13:08:36.008823 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Dec 16 13:08:36.008935 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 16 13:08:36.008949 kernel: acpiphp: Slot [0-3] registered
Dec 16 13:08:36.009054 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Dec 16 13:08:36.009155 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff]
Dec 16 13:08:36.009253 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref]
Dec 16 13:08:36.009350 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 16 13:08:36.009364 kernel: acpiphp: Slot [0-4] registered
Dec 16 13:08:36.009468 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 13:08:36.009570 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref]
Dec 16 13:08:36.009668 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 16 13:08:36.009682 kernel: acpiphp: Slot [0-5] registered
Dec 16 13:08:36.009786 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 13:08:36.009895 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff]
Dec 16 13:08:36.009997 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref]
Dec 16 13:08:36.010094 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 16 13:08:36.010111 kernel: acpiphp: Slot [0-6] registered
Dec 16 13:08:36.010205 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 16 13:08:36.010220 kernel: acpiphp: Slot [0-7] registered
Dec 16 13:08:36.010314 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 16 13:08:36.010328 kernel: acpiphp: Slot [0-8] registered
Dec 16 13:08:36.010421 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 16 13:08:36.010435 kernel: acpiphp: Slot [0-9] registered
Dec 16 13:08:36.010533 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Dec 16 13:08:36.010547 kernel: acpiphp: Slot [0-10] registered
Dec 16 13:08:36.010640 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Dec 16 13:08:36.010654 kernel: acpiphp: Slot [0-11] registered
Dec 16 13:08:36.010747 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Dec 16 13:08:36.010760 kernel: acpiphp: Slot [0-12] registered
Dec 16 13:08:36.010852 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Dec 16 13:08:36.010876 kernel: acpiphp: Slot [0-13] registered
Dec 16 13:08:36.010987 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Dec 16 13:08:36.011002 kernel: acpiphp: Slot [0-14] registered
Dec 16 13:08:36.011094 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Dec 16 13:08:36.011108 kernel: acpiphp: Slot [0-15] registered
Dec 16 13:08:36.011201 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Dec 16 13:08:36.011215 kernel: acpiphp: Slot [0-16] registered
Dec 16 13:08:36.011306 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Dec 16 13:08:36.011319 kernel: acpiphp: Slot [0-17] registered
Dec 16 13:08:36.011412 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Dec 16 13:08:36.011425 kernel: acpiphp: Slot [0-18] registered
Dec 16 13:08:36.011514 kernel: pci 0000:00:04.1: PCI bridge to [bus 13]
Dec 16 13:08:36.011527 kernel: acpiphp: Slot [0-19] registered
Dec 16 13:08:36.011614 kernel: pci 0000:00:04.2: PCI bridge to [bus 14]
Dec 16 13:08:36.011627 kernel: acpiphp: Slot [0-20] registered
Dec 16 13:08:36.011716 kernel: pci 0000:00:04.3: PCI bridge to [bus 15]
Dec 16 13:08:36.011729 kernel: acpiphp: Slot [0-21] registered
Dec 16 13:08:36.011821 kernel: pci 0000:00:04.4: PCI bridge to [bus 16]
Dec 16 13:08:36.011834 kernel: acpiphp: Slot [0-22] registered
Dec 16 13:08:36.011939 kernel: pci 0000:00:04.5: PCI bridge to [bus 17]
Dec 16 13:08:36.011953 kernel: acpiphp: Slot [0-23] registered
Dec 16 13:08:36.012040 kernel: pci 0000:00:04.6: PCI bridge to [bus 18]
Dec 16 13:08:36.012054 kernel: acpiphp: Slot [0-24] registered
Dec 16 13:08:36.012142 kernel: pci 0000:00:04.7: PCI bridge to [bus 19]
Dec 16 13:08:36.012155 kernel: acpiphp: Slot [0-25] registered
Dec 16 13:08:36.012247 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a]
Dec 16 13:08:36.012260 kernel: acpiphp: Slot [0-26] registered
Dec 16 13:08:36.012350 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b]
Dec 16 13:08:36.012363 kernel: acpiphp: Slot [0-27] registered
Dec 16 13:08:36.012450 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c]
Dec 16 13:08:36.012463 kernel: acpiphp: Slot [0-28] registered
Dec 16 13:08:36.012552 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d]
Dec 16 13:08:36.012565 kernel: acpiphp: Slot [0-29] registered
Dec 16 13:08:36.012657 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e]
Dec 16 13:08:36.012670 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:08:36.012680 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:08:36.012691 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:08:36.012701 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:08:36.012712 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 13:08:36.012722 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 13:08:36.012732 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 13:08:36.012744 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 13:08:36.012754 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 13:08:36.012764 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 13:08:36.012774 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 13:08:36.012784 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 13:08:36.012794 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 13:08:36.012804 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 13:08:36.012814 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 13:08:36.012824 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 13:08:36.012836 kernel: iommu: Default domain type: Translated
Dec 16 13:08:36.012846 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:08:36.012856 kernel: efivars: Registered efivars operations
Dec 16 13:08:36.012880 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:08:36.012890 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:08:36.012900 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 16 13:08:36.012910 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Dec 16 13:08:36.012920 kernel: e820: reserve RAM buffer [mem 0x7dd26018-0x7fffffff]
Dec 16 13:08:36.012930 kernel: e820: reserve RAM buffer [mem 0x7dd4e018-0x7fffffff]
Dec 16 13:08:36.012942 kernel: e820: reserve RAM buffer [mem 0x7e73f000-0x7fffffff]
Dec 16 13:08:36.012952 kernel: e820: reserve RAM buffer [mem 0x7ea71000-0x7fffffff]
Dec 16 13:08:36.012962 kernel: e820: reserve RAM buffer [mem 0x7f6ed000-0x7fffffff]
Dec 16 13:08:36.012972 kernel: e820: reserve RAM buffer [mem 0x7fe4f000-0x7fffffff]
Dec 16 13:08:36.012982 kernel: e820: reserve RAM buffer [mem 0x7febc000-0x7fffffff]
Dec 16 13:08:36.013075 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 13:08:36.013166 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 13:08:36.013255 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:08:36.013268 kernel: vgaarb: loaded
Dec 16 13:08:36.013281 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:08:36.013291 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:08:36.013301 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:08:36.013312 kernel: pnp: PnP ACPI init
Dec 16 13:08:36.013413 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 16 13:08:36.013427 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 13:08:36.013438 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:08:36.013448 kernel: NET: Registered PF_INET protocol family
Dec 16 13:08:36.013460 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:08:36.013471 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Dec 16 13:08:36.013481 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:08:36.013492 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:08:36.013502 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 16 13:08:36.013512 kernel: TCP: Hash tables configured (established 131072 bind 65536)
Dec 16 13:08:36.013522 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 16 13:08:36.013532 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 16 13:08:36.013542 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:08:36.013555 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:08:36.013647 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Dec 16 13:08:36.013738 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 16 13:08:36.013830 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 16 13:08:36.013932 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 16 13:08:36.014024 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 16 13:08:36.014115 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 13:08:36.014205 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 13:08:36.014300 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 13:08:36.014389 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Dec 16 13:08:36.014481 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000
Dec 16 13:08:36.014572 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000
Dec 16 13:08:36.014661 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000
Dec 16 13:08:36.014751 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Dec 16 13:08:36.014840 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Dec 16 13:08:36.014953 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Dec 16 13:08:36.015051 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Dec 16 13:08:36.015142 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Dec 16 13:08:36.015231 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000
Dec 16 13:08:36.015313 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000
Dec 16 13:08:36.015395 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000
Dec 16 13:08:36.015479 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Dec 16 13:08:36.015562 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Dec 16 13:08:36.015647 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Dec 16 13:08:36.015729 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Dec 16 13:08:36.015813 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Dec 16 13:08:36.015902 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000
Dec 16 13:08:36.015984 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000
Dec 16 13:08:36.016067 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Dec 16 13:08:36.016149 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Dec 16 13:08:36.016231 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned
Dec 16 13:08:36.016316 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned
Dec 16 13:08:36.016399 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned
Dec 16 13:08:36.016480 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned
Dec 16 13:08:36.016563 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned
Dec 16 13:08:36.016647 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned
Dec 16 13:08:36.016730 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned
Dec 16 13:08:36.016814 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned
Dec 16 13:08:36.016904 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned
Dec 16 13:08:36.016991 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned
Dec 16 13:08:36.017074 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned
Dec 16 13:08:36.017157 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned
Dec 16 13:08:36.017243 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned
Dec 16 13:08:36.017326 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space
Dec 16 13:08:36.017409 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign
Dec 16 13:08:36.017493 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space
Dec 16 13:08:36.017576 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign
Dec 16 13:08:36.017663 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space
Dec 16 13:08:36.017745 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign
Dec 16 13:08:36.017828 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space
Dec 16 13:08:36.017918 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign
Dec 16 13:08:36.018001 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space
Dec 16 13:08:36.018082 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign
Dec 16 13:08:36.018163 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space
Dec 16 13:08:36.018245 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign
Dec 16 13:08:36.018329 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space
Dec 16 13:08:36.018411 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign
assign; no space Dec 16 13:08:36.018575 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.018657 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.018740 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.018821 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.018912 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.019011 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.019095 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.019177 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.019259 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.019339 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.019418 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.019497 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.019578 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.019656 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.019735 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.019813 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Dec 16 13:08:36.019900 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Dec 16 13:08:36.019979 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Dec 16 13:08:36.020058 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Dec 16 13:08:36.020139 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Dec 16 13:08:36.020222 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Dec 16 13:08:36.020300 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff]: assigned Dec 16 13:08:36.020379 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Dec 16 13:08:36.020457 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Dec 16 13:08:36.020537 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Dec 16 13:08:36.020616 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Dec 16 13:08:36.020696 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Dec 16 13:08:36.020775 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Dec 16 13:08:36.020856 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.020953 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.021034 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.021113 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.021192 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.021270 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.021349 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.021428 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Dec 16 
13:08:36.021507 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.021589 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.021668 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.021747 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.021827 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.021914 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.021995 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.022073 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.022151 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.022234 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.022313 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.022393 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.022472 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.022551 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.022629 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.022707 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.022786 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.022874 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.022967 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.023048 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.023128 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Dec 16 13:08:36.023207 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Dec 16 13:08:36.023291 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 13:08:36.023370 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Dec 16 13:08:36.023450 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Dec 16 13:08:36.023534 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 16 13:08:36.023612 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 13:08:36.023688 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Dec 16 13:08:36.023763 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Dec 16 13:08:36.023838 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 16 13:08:36.023926 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned Dec 16 13:08:36.024003 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 13:08:36.024079 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Dec 16 13:08:36.024155 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Dec 16 13:08:36.024233 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 13:08:36.024310 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Dec 16 13:08:36.024386 kernel: pci 0000:00:02.2: bridge window [mem 
0x381000000000-0x3817ffffffff 64bit pref] Dec 16 13:08:36.024462 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 13:08:36.024537 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Dec 16 13:08:36.024613 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Dec 16 13:08:36.024688 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 13:08:36.024763 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Dec 16 13:08:36.024839 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Dec 16 13:08:36.024921 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 13:08:36.025000 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Dec 16 13:08:36.025076 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Dec 16 13:08:36.025154 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 13:08:36.025230 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Dec 16 13:08:36.025305 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Dec 16 13:08:36.025381 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 13:08:36.025459 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Dec 16 13:08:36.025535 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Dec 16 13:08:36.025610 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Dec 16 13:08:36.025686 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Dec 16 13:08:36.025762 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Dec 16 13:08:36.025838 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Dec 16 13:08:36.025920 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Dec 16 13:08:36.025995 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Dec 16 13:08:36.026071 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Dec 16 13:08:36.026146 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Dec 16 13:08:36.026225 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Dec 16 13:08:36.026301 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Dec 16 13:08:36.026376 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Dec 16 13:08:36.026452 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Dec 16 13:08:36.026527 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Dec 16 13:08:36.026603 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Dec 16 13:08:36.026679 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Dec 16 13:08:36.026756 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Dec 16 13:08:36.026831 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Dec 16 13:08:36.026915 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Dec 16 13:08:36.027005 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Dec 16 13:08:36.027081 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Dec 16 13:08:36.027156 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Dec 16 13:08:36.027232 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Dec 16 13:08:36.027307 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Dec 16 13:08:36.027380 kernel: pci 0000:00:03.7: bridge window [mem 
0x387800000000-0x387fffffffff 64bit pref] Dec 16 13:08:36.027453 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Dec 16 13:08:36.027526 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Dec 16 13:08:36.027603 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Dec 16 13:08:36.027676 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Dec 16 13:08:36.027748 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Dec 16 13:08:36.027821 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Dec 16 13:08:36.027901 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Dec 16 13:08:36.027974 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Dec 16 13:08:36.028048 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Dec 16 13:08:36.028123 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Dec 16 13:08:36.028195 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Dec 16 13:08:36.028268 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Dec 16 13:08:36.028341 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Dec 16 13:08:36.028413 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Dec 16 13:08:36.028487 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Dec 16 13:08:36.028560 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Dec 16 13:08:36.028635 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Dec 16 13:08:36.028707 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Dec 16 13:08:36.028780 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Dec 16 13:08:36.028853 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Dec 16 13:08:36.028943 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Dec 16 13:08:36.029017 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff] Dec 16 13:08:36.029092 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Dec 16 13:08:36.029166 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Dec 16 13:08:36.029242 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Dec 16 13:08:36.029315 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Dec 16 13:08:36.029388 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Dec 16 13:08:36.029461 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Dec 16 13:08:36.029537 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Dec 16 13:08:36.029609 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Dec 16 13:08:36.029681 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Dec 16 13:08:36.029754 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Dec 16 13:08:36.029830 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Dec 16 13:08:36.029913 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Dec 16 13:08:36.029987 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Dec 16 13:08:36.030060 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Dec 16 13:08:36.030134 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Dec 16 13:08:36.030211 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Dec 16 13:08:36.030285 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Dec 16 13:08:36.030363 kernel: pci 0000:00:05.1: bridge window [mem 
0x38c800000000-0x38cfffffffff 64bit pref] Dec 16 13:08:36.030439 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Dec 16 13:08:36.030527 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Dec 16 13:08:36.030601 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Dec 16 13:08:36.030674 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Dec 16 13:08:36.030753 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Dec 16 13:08:36.030828 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Dec 16 13:08:36.030915 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Dec 16 13:08:36.031000 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Dec 16 13:08:36.031074 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Dec 16 13:08:36.031148 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Dec 16 13:08:36.031221 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Dec 16 13:08:36.031294 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Dec 16 13:08:36.031368 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 13:08:36.031436 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 13:08:36.031499 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 13:08:36.031562 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Dec 16 13:08:36.031624 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Dec 16 13:08:36.031686 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Dec 16 13:08:36.031761 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Dec 16 13:08:36.031828 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Dec 16 13:08:36.031905 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 16 13:08:36.031978 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Dec 16 13:08:36.032047 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Dec 16 13:08:36.032115 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Dec 16 13:08:36.032189 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Dec 16 13:08:36.032257 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref] Dec 16 13:08:36.032335 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Dec 16 13:08:36.032405 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Dec 16 13:08:36.032477 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Dec 16 13:08:36.032545 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Dec 16 13:08:36.032618 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Dec 16 13:08:36.032685 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Dec 16 13:08:36.032757 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Dec 16 13:08:36.032827 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Dec 16 13:08:36.032906 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Dec 16 13:08:36.032975 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Dec 16 13:08:36.033047 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Dec 16 13:08:36.033114 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] 
Dec 16 13:08:36.033185 kernel: pci_bus 0000:0a: resource 1 [mem 0x83000000-0x831fffff] Dec 16 13:08:36.033252 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Dec 16 13:08:36.033327 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Dec 16 13:08:36.033394 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit pref] Dec 16 13:08:36.033466 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Dec 16 13:08:36.033534 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Dec 16 13:08:36.033610 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Dec 16 13:08:36.033678 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Dec 16 13:08:36.033750 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Dec 16 13:08:36.033817 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Dec 16 13:08:36.033895 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Dec 16 13:08:36.033963 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Dec 16 13:08:36.034035 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Dec 16 13:08:36.034103 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Dec 16 13:08:36.034178 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Dec 16 13:08:36.034244 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Dec 16 13:08:36.034315 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Dec 16 13:08:36.034382 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Dec 16 13:08:36.034447 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Dec 16 13:08:36.034523 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Dec 16 13:08:36.034589 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Dec 16 13:08:36.034654 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Dec 16 13:08:36.034724 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Dec 16 13:08:36.034791 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Dec 16 13:08:36.034857 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Dec 16 13:08:36.034936 kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Dec 16 13:08:36.035326 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Dec 16 13:08:36.035392 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Dec 16 13:08:36.035468 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Dec 16 13:08:36.035532 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] Dec 16 13:08:36.035595 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Dec 16 13:08:36.035666 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Dec 16 13:08:36.035730 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Dec 16 13:08:36.035798 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Dec 16 13:08:36.035879 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Dec 16 13:08:36.035944 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Dec 16 13:08:36.036008 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Dec 16 13:08:36.036077 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Dec 16 13:08:36.036141 kernel: 
pci_bus 0000:19: resource 1 [mem 0x81200000-0x813fffff] Dec 16 13:08:36.036205 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Dec 16 13:08:36.036278 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Dec 16 13:08:36.036342 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Dec 16 13:08:36.036405 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Dec 16 13:08:36.036473 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Dec 16 13:08:36.036537 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Dec 16 13:08:36.036601 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Dec 16 13:08:36.036671 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Dec 16 13:08:36.036735 kernel: pci_bus 0000:1c: resource 1 [mem 0x80c00000-0x80dfffff] Dec 16 13:08:36.036797 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Dec 16 13:08:36.036873 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Dec 16 13:08:36.036939 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Dec 16 13:08:36.037002 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref] Dec 16 13:08:36.037076 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Dec 16 13:08:36.037144 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Dec 16 13:08:36.037207 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Dec 16 13:08:36.037219 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 13:08:36.037227 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:08:36.037235 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 13:08:36.037243 kernel: software IO TLB: mapped [mem 0x0000000077e7e000-0x000000007be7e000] (64MB) Dec 16 13:08:36.037251 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 13:08:36.037259 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Dec 16 13:08:36.037268 kernel: Initialise system trusted keyrings Dec 16 13:08:36.037277 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Dec 16 13:08:36.037285 kernel: Key type asymmetric registered Dec 16 13:08:36.037292 kernel: Asymmetric key parser 'x509' registered Dec 16 13:08:36.037300 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:08:36.037307 kernel: io scheduler mq-deadline registered Dec 16 13:08:36.037315 kernel: io scheduler kyber registered Dec 16 13:08:36.037323 kernel: io scheduler bfq registered Dec 16 13:08:36.037397 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 13:08:36.037472 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 13:08:36.037545 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 13:08:36.037616 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 13:08:36.037686 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 13:08:36.037757 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 13:08:36.037828 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 13:08:36.037947 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 13:08:36.038019 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 13:08:36.038109 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 13:08:36.038182 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 
13:08:36.038251 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 13:08:36.038321 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 13:08:36.038392 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 13:08:36.038462 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 13:08:36.038531 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 13:08:36.038541 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 13:08:36.038609 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 16 13:08:36.038679 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 16 13:08:36.038751 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Dec 16 13:08:36.038821 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Dec 16 13:08:36.038899 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Dec 16 13:08:36.038983 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Dec 16 13:08:36.039054 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Dec 16 13:08:36.039126 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Dec 16 13:08:36.039195 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Dec 16 13:08:36.039263 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Dec 16 13:08:36.039333 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Dec 16 13:08:36.039403 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Dec 16 13:08:36.039473 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Dec 16 13:08:36.039544 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Dec 16 13:08:36.039613 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Dec 16 13:08:36.039682 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39 Dec 16 13:08:36.039692 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 13:08:36.039760 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Dec 16 13:08:36.039829 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Dec 16 13:08:36.039906 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Dec 16 13:08:36.039974 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Dec 16 13:08:36.040047 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Dec 16 13:08:36.040115 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Dec 16 13:08:36.040184 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Dec 16 13:08:36.040253 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Dec 16 13:08:36.040323 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Dec 16 13:08:36.040391 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Dec 16 13:08:36.040461 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Dec 16 13:08:36.040530 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Dec 16 13:08:36.040599 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Dec 16 13:08:36.040670 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Dec 16 13:08:36.040740 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Dec 16 13:08:36.040808 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Dec 16 13:08:36.040818 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 16 13:08:36.040893 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Dec 16 13:08:36.040962 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 48 Dec 16 13:08:36.041030 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Dec 16 13:08:36.041099 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Dec 16 13:08:36.041171 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 
Dec 16 13:08:36.041239 kernel: pcieport 0000:00:05.2: AER: enabled with IRQ 50 Dec 16 13:08:36.041309 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51 Dec 16 13:08:36.041376 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Dec 16 13:08:36.041445 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Dec 16 13:08:36.041513 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Dec 16 13:08:36.041523 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:08:36.041531 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:08:36.041541 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:08:36.041549 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 13:08:36.041557 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 13:08:36.041564 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 13:08:36.041640 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 13:08:36.041651 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 13:08:36.041714 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 13:08:36.041778 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T13:08:35 UTC (1765890515) Dec 16 13:08:36.041843 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 13:08:36.041852 kernel: intel_pstate: CPU model not supported Dec 16 13:08:36.041860 kernel: efifb: probing for efifb Dec 16 13:08:36.041874 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Dec 16 13:08:36.041882 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 16 13:08:36.041890 kernel: efifb: scrolling: redraw Dec 16 13:08:36.041897 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 13:08:36.041905 kernel: Console: switching to colour frame buffer device 160x50 Dec 16 13:08:36.041913 kernel: fb0: EFI VGA frame buffer device Dec 16 13:08:36.041922 kernel: pstore: Using crash dump compression: deflate Dec 16 13:08:36.041930 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:08:36.041938 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:08:36.041945 kernel: Segment Routing with IPv6 Dec 16 13:08:36.041953 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:08:36.041961 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:08:36.041968 kernel: Key type dns_resolver registered Dec 16 13:08:36.041976 kernel: IPI shorthand broadcast: enabled Dec 16 13:08:36.041984 kernel: sched_clock: Marking stable (4585002355, 163557027)->(4994522931, -245963549) Dec 16 13:08:36.041994 kernel: registered taskstats version 1 Dec 16 13:08:36.042002 kernel: Loading compiled-in X.509 certificates Dec 16 13:08:36.042010 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 13:08:36.042018 kernel: Demotion targets for Node 0: null Dec 16 13:08:36.042025 kernel: Key type .fscrypt registered Dec 16 13:08:36.042033 kernel: Key type fscrypt-provisioning registered Dec 16 13:08:36.042040 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 13:08:36.042048 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:08:36.042055 kernel: ima: No architecture policies found Dec 16 13:08:36.042064 kernel: clk: Disabling unused clocks Dec 16 13:08:36.042072 kernel: Warning: unable to open an initial console. 
Dec 16 13:08:36.042081 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 13:08:36.042088 kernel: Write protecting the kernel read-only data: 40960k Dec 16 13:08:36.042096 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 13:08:36.042104 kernel: Run /init as init process Dec 16 13:08:36.042111 kernel: with arguments: Dec 16 13:08:36.042119 kernel: /init Dec 16 13:08:36.042127 kernel: with environment: Dec 16 13:08:36.042134 kernel: HOME=/ Dec 16 13:08:36.042143 kernel: TERM=linux Dec 16 13:08:36.042153 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:08:36.042165 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:08:36.042174 systemd[1]: Detected virtualization kvm. Dec 16 13:08:36.042182 systemd[1]: Detected architecture x86-64. Dec 16 13:08:36.042190 systemd[1]: Running in initrd. Dec 16 13:08:36.042198 systemd[1]: No hostname configured, using default hostname. Dec 16 13:08:36.042208 systemd[1]: Hostname set to . Dec 16 13:08:36.042216 systemd[1]: Initializing machine ID from VM UUID. Dec 16 13:08:36.042234 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:08:36.042244 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:08:36.042253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:08:36.042262 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:08:36.042270 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:08:36.042278 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:08:36.042287 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:08:36.042298 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:08:36.042307 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:08:36.042315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:08:36.042323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:08:36.042332 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:08:36.042340 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:08:36.042348 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:08:36.042356 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:08:36.042366 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:08:36.042374 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:08:36.042382 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:08:36.042390 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:08:36.042399 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 16 13:08:36.042407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:08:36.042415 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:08:36.042423 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:08:36.042431 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:08:36.042441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:08:36.042450 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:08:36.042458 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:08:36.042466 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:08:36.042474 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:08:36.042483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:08:36.042491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:08:36.042499 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:08:36.042510 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:08:36.042518 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:08:36.042528 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:08:36.042536 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:08:36.042545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:08:36.042558 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:08:36.042590 systemd-journald[274]: Collecting audit messages is disabled. Dec 16 13:08:36.042615 kernel: Bridge firewalling registered Dec 16 13:08:36.042623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:08:36.042631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:08:36.042640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:08:36.042649 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:08:36.042657 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:08:36.042668 systemd-journald[274]: Journal started Dec 16 13:08:36.042690 systemd-journald[274]: Runtime Journal (/run/log/journal/436e699a48af45a9b1f2c99e57087b55) is 8M, max 319.5M, 311.5M free. Dec 16 13:08:35.942036 systemd-modules-load[275]: Inserted module 'overlay' Dec 16 13:08:36.046352 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:08:35.978608 systemd-modules-load[275]: Inserted module 'br_netfilter' Dec 16 13:08:36.049422 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:08:36.055138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:08:36.057974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:08:36.060734 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 16 13:08:36.094604 systemd-tmpfiles[317]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:08:36.100397 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:08:36.103521 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:08:36.114562 dracut-cmdline[319]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:08:36.168968 systemd-resolved[327]: Positive Trust Anchors: Dec 16 13:08:36.168985 systemd-resolved[327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:08:36.169034 systemd-resolved[327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:08:36.172379 systemd-resolved[327]: Defaulting to hostname 'linux'. Dec 16 13:08:36.173728 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:08:36.177653 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:08:36.311985 kernel: SCSI subsystem initialized Dec 16 13:08:36.331921 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:08:36.346922 kernel: iscsi: registered transport (tcp) Dec 16 13:08:36.398324 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:08:36.398464 kernel: QLogic iSCSI HBA Driver Dec 16 13:08:36.435626 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:08:36.472076 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:08:36.477541 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:08:36.567330 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:08:36.571793 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:08:36.678927 kernel: raid6: avx512x4 gen() 14210 MB/s Dec 16 13:08:36.696955 kernel: raid6: avx512x2 gen() 23220 MB/s Dec 16 13:08:36.714985 kernel: raid6: avx512x1 gen() 28901 MB/s Dec 16 13:08:36.732909 kernel: raid6: avx2x4 gen() 26277 MB/s Dec 16 13:08:36.749904 kernel: raid6: avx2x2 gen() 29383 MB/s Dec 16 13:08:36.767362 kernel: raid6: avx2x1 gen() 19160 MB/s Dec 16 13:08:36.767434 kernel: raid6: using algorithm avx2x2 gen() 29383 MB/s Dec 16 13:08:36.786988 kernel: raid6: .... 
xor() 21766 MB/s, rmw enabled Dec 16 13:08:36.787069 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:08:36.821942 kernel: xor: automatically using best checksumming function avx Dec 16 13:08:37.008939 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:08:37.025751 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:08:37.030857 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:08:37.094696 systemd-udevd[532]: Using default interface naming scheme 'v255'. Dec 16 13:08:37.103804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:08:37.106542 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:08:37.168009 dracut-pre-trigger[539]: rd.md=0: removing MD RAID activation Dec 16 13:08:37.230689 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:08:37.234646 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:08:37.375344 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:08:37.380160 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:08:37.462900 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues Dec 16 13:08:37.475435 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:08:37.502921 kernel: ACPI: bus type USB registered Dec 16 13:08:37.503016 kernel: usbcore: registered new interface driver usbfs Dec 16 13:08:37.505097 kernel: usbcore: registered new interface driver hub Dec 16 13:08:37.511882 kernel: usbcore: registered new device driver usb Dec 16 13:08:37.511918 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:08:37.521465 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Dec 16 13:08:37.525884 kernel: AES CTR mode by8 optimization enabled Dec 16 13:08:37.531477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:08:37.531639 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:08:37.532388 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:08:37.545201 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:08:37.545233 kernel: GPT:17805311 != 104857599 Dec 16 13:08:37.545252 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 13:08:37.545268 kernel: GPT:17805311 != 104857599 Dec 16 13:08:37.545280 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:08:37.545309 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:08:37.534781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:08:37.546363 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:08:37.552065 kernel: libata version 3.00 loaded. 
Dec 16 13:08:37.554890 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller Dec 16 13:08:37.555101 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1 Dec 16 13:08:37.557074 kernel: uhci_hcd 0000:02:01.0: detected 2 ports Dec 16 13:08:37.558302 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000 Dec 16 13:08:37.564557 kernel: hub 1-0:1.0: USB hub found Dec 16 13:08:37.564775 kernel: hub 1-0:1.0: 2 ports detected Dec 16 13:08:37.568880 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 13:08:37.569024 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 13:08:37.573881 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 13:08:37.574011 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 13:08:37.574097 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 13:08:37.575922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:08:37.597978 kernel: scsi host0: ahci Dec 16 13:08:37.598184 kernel: scsi host1: ahci Dec 16 13:08:37.598309 kernel: scsi host2: ahci Dec 16 13:08:37.598421 kernel: scsi host3: ahci Dec 16 13:08:37.598532 kernel: scsi host4: ahci Dec 16 13:08:37.598648 kernel: scsi host5: ahci Dec 16 13:08:37.598771 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 67 lpm-pol 1 Dec 16 13:08:37.598786 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 67 lpm-pol 1 Dec 16 13:08:37.598799 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 67 lpm-pol 1 Dec 16 13:08:37.598812 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 67 lpm-pol 1 Dec 16 13:08:37.598825 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 67 lpm-pol 1 Dec 16 13:08:37.598838 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 67 lpm-pol 1 Dec 16 13:08:37.620824 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 13:08:37.629425 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 13:08:37.636413 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 13:08:37.636984 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 16 13:08:37.645901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 13:08:37.647377 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:08:37.682498 disk-uuid[750]: Primary Header is updated. Dec 16 13:08:37.682498 disk-uuid[750]: Secondary Entries is updated. Dec 16 13:08:37.682498 disk-uuid[750]: Secondary Header is updated. 
Dec 16 13:08:37.691930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:08:37.791183 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd Dec 16 13:08:37.899903 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 13:08:37.900057 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 13:08:37.903043 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 13:08:37.906984 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 13:08:37.910909 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 13:08:37.915962 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 13:08:37.928795 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:08:37.932310 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:08:37.933943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:08:37.935530 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:08:37.939216 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:08:37.982014 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:08:37.996928 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 13:08:38.013513 kernel: usbcore: registered new interface driver usbhid Dec 16 13:08:38.013609 kernel: usbhid: USB HID core driver Dec 16 13:08:38.038741 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 16 13:08:38.041848 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0 Dec 16 13:08:38.716989 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:08:38.718950 disk-uuid[751]: The operation has completed successfully. Dec 16 13:08:38.810699 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:08:38.811008 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:08:38.868094 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:08:38.904918 sh[777]: Success Dec 16 13:08:38.961303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:08:38.961425 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:08:38.964680 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:08:38.995933 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 13:08:39.110801 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:08:39.116381 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:08:39.144446 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 13:08:39.181970 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (790) Dec 16 13:08:39.189790 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:08:39.189909 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:08:39.218282 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:08:39.218381 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:08:39.225625 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 16 13:08:39.228045 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:08:39.229678 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:08:39.231815 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:08:39.236060 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:08:39.303943 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (823) Dec 16 13:08:39.312375 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:08:39.312466 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:08:39.327493 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:08:39.327576 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:08:39.343937 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:08:39.345351 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:08:39.348381 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:08:39.473815 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:08:39.479548 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:08:39.544818 systemd-networkd[971]: lo: Link UP Dec 16 13:08:39.544830 systemd-networkd[971]: lo: Gained carrier Dec 16 13:08:39.546249 systemd-networkd[971]: Enumeration completed Dec 16 13:08:39.546610 systemd-networkd[971]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:08:39.546616 systemd-networkd[971]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:08:39.547458 systemd-networkd[971]: eth0: Link UP Dec 16 13:08:39.547592 systemd-networkd[971]: eth0: Gained carrier Dec 16 13:08:39.547604 systemd-networkd[971]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:08:39.548388 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:08:39.560124 ignition[888]: Ignition 2.22.0 Dec 16 13:08:39.551146 systemd[1]: Reached target network.target - Network. Dec 16 13:08:39.560134 ignition[888]: Stage: fetch-offline Dec 16 13:08:39.563590 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:08:39.560177 ignition[888]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:08:39.566176 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 13:08:39.560188 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 13:08:39.560307 ignition[888]: parsed url from cmdline: "" Dec 16 13:08:39.560313 ignition[888]: no config URL provided Dec 16 13:08:39.560320 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:08:39.560333 ignition[888]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:08:39.560341 ignition[888]: failed to fetch config: resource requires networking Dec 16 13:08:39.560547 ignition[888]: Ignition finished successfully Dec 16 13:08:39.585968 systemd-networkd[971]: eth0: DHCPv4 address 10.0.21.22/25, gateway 10.0.21.1 acquired from 10.0.21.1 Dec 16 13:08:39.622208 ignition[985]: Ignition 2.22.0 Dec 16 13:08:39.622233 ignition[985]: Stage: fetch Dec 16 13:08:39.622483 ignition[985]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:08:39.622501 ignition[985]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 13:08:39.622640 ignition[985]: parsed url from cmdline: "" Dec 16 13:08:39.622647 ignition[985]: no config URL provided Dec 16 13:08:39.622656 ignition[985]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:08:39.622669 ignition[985]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:08:39.622833 ignition[985]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 16 13:08:39.623185 ignition[985]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 16 13:08:39.623340 ignition[985]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 16 13:08:40.120203 ignition[985]: GET result: OK Dec 16 13:08:40.120402 ignition[985]: parsing config with SHA512: 99c0083844be6f99da2c7d01665967994d4748bd1c53aa39aec19f5390b98993cd2a0e1974cefd9600c8989490e1cdc3f3969152bf30dcf877800c9d6ae0eb1c Dec 16 13:08:40.135950 unknown[985]: fetched base config from "system" Dec 16 13:08:40.135976 unknown[985]: fetched base config from "system" Dec 16 13:08:40.136906 ignition[985]: fetch: fetch complete Dec 16 13:08:40.135990 unknown[985]: fetched user config from "openstack" Dec 16 13:08:40.136920 ignition[985]: fetch: fetch passed Dec 16 13:08:40.137019 ignition[985]: Ignition finished successfully Dec 16 13:08:40.143169 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:08:40.147614 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:08:40.209975 ignition[998]: Ignition 2.22.0 Dec 16 13:08:40.209992 ignition[998]: Stage: kargs Dec 16 13:08:40.210204 ignition[998]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:08:40.210219 ignition[998]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 13:08:40.211426 ignition[998]: kargs: kargs passed Dec 16 13:08:40.211491 ignition[998]: Ignition finished successfully Dec 16 13:08:40.215210 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:08:40.217706 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:08:40.287170 ignition[1008]: Ignition 2.22.0 Dec 16 13:08:40.287205 ignition[1008]: Stage: disks Dec 16 13:08:40.287558 ignition[1008]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:08:40.287580 ignition[1008]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 13:08:40.289745 ignition[1008]: disks: disks passed Dec 16 13:08:40.289837 ignition[1008]: Ignition finished successfully Dec 16 13:08:40.293650 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 16 13:08:40.295365 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:08:40.296684 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:08:40.298439 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:08:40.300322 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:08:40.302143 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:08:40.306278 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:08:40.376968 systemd-fsck[1022]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Dec 16 13:08:40.380800 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:08:40.385036 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:08:40.661956 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:08:40.664439 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:08:40.666751 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:08:40.672502 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:08:40.676407 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:08:40.678415 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 13:08:40.680321 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 16 13:08:40.681674 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:08:40.681742 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:08:40.709723 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:08:40.714230 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:08:40.736958 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1030) Dec 16 13:08:40.747339 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:08:40.747450 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:08:40.765917 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:08:40.766032 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:08:40.772664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:08:40.806939 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:40.833965 initrd-setup-root[1059]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:08:40.845973 initrd-setup-root[1066]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:08:40.855651 initrd-setup-root[1073]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:08:40.863028 initrd-setup-root[1080]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:08:41.025502 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:08:41.028789 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:08:41.032098 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:08:41.048799 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 16 13:08:41.050247 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:08:41.073482 ignition[1148]: INFO : Ignition 2.22.0 Dec 16 13:08:41.073482 ignition[1148]: INFO : Stage: mount Dec 16 13:08:41.076031 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:08:41.076031 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 13:08:41.076031 ignition[1148]: INFO : mount: mount passed Dec 16 13:08:41.076031 ignition[1148]: INFO : Ignition finished successfully Dec 16 13:08:41.076884 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:08:41.078506 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 13:08:41.329280 systemd-networkd[971]: eth0: Gained IPv6LL Dec 16 13:08:41.863078 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:43.890944 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:47.900973 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:47.905738 coreos-metadata[1032]: Dec 16 13:08:47.905 WARN failed to locate config-drive, using the metadata service API instead Dec 16 13:08:47.944599 coreos-metadata[1032]: Dec 16 13:08:47.944 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 13:08:48.204931 coreos-metadata[1032]: Dec 16 13:08:48.204 INFO Fetch successful Dec 16 13:08:48.205974 coreos-metadata[1032]: Dec 16 13:08:48.205 INFO wrote hostname ci-4459-2-2-3-ab2e4a938e to /sysroot/etc/hostname Dec 16 13:08:48.209287 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 16 13:08:48.210091 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 16 13:08:48.211912 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:08:48.256812 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:08:48.310926 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1172) Dec 16 13:08:48.319404 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:08:48.319491 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:08:48.335656 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:08:48.335753 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:08:48.340990 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:08:48.387255 ignition[1190]: INFO : Ignition 2.22.0 Dec 16 13:08:48.387255 ignition[1190]: INFO : Stage: files Dec 16 13:08:48.390034 ignition[1190]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:08:48.390034 ignition[1190]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 13:08:48.390034 ignition[1190]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:08:48.393365 ignition[1190]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:08:48.393365 ignition[1190]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:08:48.397883 ignition[1190]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:08:48.399069 ignition[1190]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:08:48.400217 ignition[1190]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:08:48.399458 unknown[1190]: wrote ssh authorized keys file for user: core Dec 16 13:08:48.403624 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:08:48.406081 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 16 13:08:48.465566 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:08:48.593251 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:08:48.593251 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 13:08:48.593251 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 16 13:08:49.002212 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 16 13:08:49.168005 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 13:08:49.168005 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:08:49.170542 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 13:08:49.170542 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:08:49.170542 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:08:49.170542 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:08:49.170542 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:08:49.170542 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:08:49.170542 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 
16 13:08:49.177448 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:08:49.177448 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:08:49.177448 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:08:49.177448 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:08:49.177448 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:08:49.177448 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 16 13:08:49.392656 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 16 13:08:49.965470 ignition[1190]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:08:49.965470 ignition[1190]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 16 13:08:49.971369 ignition[1190]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:08:49.973793 ignition[1190]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:08:49.973793 ignition[1190]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 16 13:08:49.973793 ignition[1190]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:08:49.973793 ignition[1190]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:08:49.981801 ignition[1190]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:08:49.981801 ignition[1190]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:08:49.981801 ignition[1190]: INFO : files: files passed Dec 16 13:08:49.981801 ignition[1190]: INFO : Ignition finished successfully Dec 16 13:08:49.977961 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:08:49.983225 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:08:49.986190 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:08:50.003956 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:08:50.004195 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 16 13:08:50.012773 initrd-setup-root-after-ignition[1225]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:08:50.012773 initrd-setup-root-after-ignition[1225]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:08:50.016188 initrd-setup-root-after-ignition[1229]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:08:50.016107 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:08:50.017382 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:08:50.020774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:08:50.129260 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:08:50.129450 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 13:08:50.132156 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 13:08:50.134038 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 13:08:50.136378 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 13:08:50.137649 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 13:08:50.188081 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:08:50.190070 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 13:08:50.228124 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:08:50.229522 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:08:50.231231 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 13:08:50.232569 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 13:08:50.232801 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:08:50.234848 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 13:08:50.236426 systemd[1]: Stopped target basic.target - Basic System. Dec 16 13:08:50.237945 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 13:08:50.239063 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:08:50.239813 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 13:08:50.240528 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:08:50.241295 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 13:08:50.242067 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:08:50.242842 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 13:08:50.243639 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 13:08:50.244440 systemd[1]: Stopped target swap.target - Swaps. Dec 16 13:08:50.245217 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 13:08:50.245341 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:08:50.246360 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:08:50.247218 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:08:50.247923 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 16 13:08:50.248056 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:08:50.248649 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 13:08:50.248745 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 13:08:50.249807 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 13:08:50.249917 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:08:50.250601 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 13:08:50.250682 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 13:08:50.252291 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 13:08:50.252813 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 13:08:50.252939 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:08:50.254360 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 13:08:50.255085 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 13:08:50.255182 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:08:50.255860 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 13:08:50.255956 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:08:50.259463 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 13:08:50.259543 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 13:08:50.295656 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 13:08:50.315683 ignition[1249]: INFO : Ignition 2.22.0 Dec 16 13:08:50.315683 ignition[1249]: INFO : Stage: umount Dec 16 13:08:50.317782 ignition[1249]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:08:50.317782 ignition[1249]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 13:08:50.319394 ignition[1249]: INFO : umount: umount passed Dec 16 13:08:50.319394 ignition[1249]: INFO : Ignition finished successfully Dec 16 13:08:50.321475 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 13:08:50.321638 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 13:08:50.324057 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 13:08:50.324141 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 13:08:50.324965 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 13:08:50.325045 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 13:08:50.326077 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 13:08:50.326148 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 13:08:50.327328 systemd[1]: Stopped target network.target - Network. Dec 16 13:08:50.328443 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 13:08:50.328525 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:08:50.329617 systemd[1]: Stopped target paths.target - Path Units. Dec 16 13:08:50.330626 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 13:08:50.335002 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:08:50.335518 systemd[1]: Stopped target slices.target - Slice Units. 
Dec 16 13:08:50.336541 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 13:08:50.337541 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 13:08:50.337586 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:08:50.338467 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 13:08:50.338501 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:08:50.339183 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 13:08:50.339235 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 13:08:50.340503 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 13:08:50.340540 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 13:08:50.341396 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 13:08:50.342029 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 13:08:50.343429 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 13:08:50.343513 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 13:08:50.344640 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 13:08:50.344715 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 13:08:50.348794 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 13:08:50.349153 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 13:08:50.354825 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 13:08:50.356113 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 13:08:50.356302 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:08:50.361047 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:08:50.361805 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 13:08:50.362107 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 13:08:50.366066 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 13:08:50.367144 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 13:08:50.368789 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 13:08:50.368932 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:08:50.372076 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 13:08:50.373216 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 13:08:50.373328 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:08:50.375017 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:08:50.375109 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:08:50.376523 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 13:08:50.376604 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 13:08:50.377920 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:08:50.379568 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:08:50.407821 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Dec 16 13:08:50.408104 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:08:50.409852 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 13:08:50.409925 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 13:08:50.411041 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 13:08:50.411087 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:08:50.412425 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 13:08:50.412501 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:08:50.414598 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 13:08:50.414658 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 13:08:50.416813 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 13:08:50.416894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:08:50.420326 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 13:08:50.421178 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 13:08:50.421259 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:08:50.422803 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 13:08:50.423005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:08:50.424271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:08:50.424332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:08:50.426640 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 13:08:50.426781 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 13:08:50.445317 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 13:08:50.445476 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 13:08:50.447215 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 13:08:50.449429 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 13:08:50.474876 systemd[1]: Switching root. Dec 16 13:08:50.566501 systemd-journald[274]: Journal stopped Dec 16 13:08:51.659820 systemd-journald[274]: Received SIGTERM from PID 1 (systemd). Dec 16 13:08:51.659947 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:08:51.659974 kernel: SELinux: policy capability open_perms=1 Dec 16 13:08:51.659995 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:08:51.660011 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:08:51.660031 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:08:51.660056 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:08:51.660072 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:08:51.660092 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:08:51.660111 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:08:51.660133 kernel: audit: type=1403 audit(1765890530.722:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 13:08:51.660154 systemd[1]: Successfully loaded SELinux policy in 80.014ms. Dec 16 13:08:51.660189 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.526ms. 
Dec 16 13:08:51.660207 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:08:51.660229 systemd[1]: Detected virtualization kvm. Dec 16 13:08:51.660247 systemd[1]: Detected architecture x86-64. Dec 16 13:08:51.660267 systemd[1]: Detected first boot. Dec 16 13:08:51.660287 systemd[1]: Hostname set to <ci-4459-2-2-3-ab2e4a938e>. Dec 16 13:08:51.660308 systemd[1]: Initializing machine ID from VM UUID. Dec 16 13:08:51.660325 zram_generator::config[1298]: No configuration found. Dec 16 13:08:51.660345 kernel: Guest personality initialized and is inactive Dec 16 13:08:51.660361 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 13:08:51.660377 kernel: Initialized host personality Dec 16 13:08:51.660398 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:08:51.660415 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:08:51.660435 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 13:08:51.660451 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:08:51.660467 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:08:51.660484 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:08:51.660501 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:08:51.660518 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 13:08:51.660534 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:08:51.660551 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:08:51.660571 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:08:51.660588 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:08:51.660605 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:08:51.660623 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:08:51.660640 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:08:51.660657 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:08:51.660673 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 13:08:51.660691 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:08:51.660711 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:08:51.660728 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:08:51.660745 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:08:51.660761 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:08:51.660778 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:08:51.660795 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Dec 16 13:08:51.660812 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:08:51.660831 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:08:51.660848 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:08:51.660877 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:08:51.660894 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:08:51.660911 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:08:51.660927 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:08:51.660944 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:08:51.660960 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:08:51.660977 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:08:51.660998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:08:51.661015 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:08:51.661032 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:08:51.661049 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:08:51.661065 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:08:51.661082 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:08:51.661098 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:08:51.661116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:08:51.661132 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:08:51.661152 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:08:51.661169 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:08:51.661189 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 13:08:51.661207 systemd[1]: Reached target machines.target - Containers. Dec 16 13:08:51.661223 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:08:51.661241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:08:51.661257 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:08:51.661274 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:08:51.661291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:08:51.661313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:08:51.661330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:08:51.661347 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:08:51.661365 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:08:51.661383 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Dec 16 13:08:51.661399 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:08:51.661416 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:08:51.661433 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:08:51.661452 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:08:51.661470 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:08:51.661487 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:08:51.661502 kernel: loop: module loaded Dec 16 13:08:51.661518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:08:51.661536 kernel: fuse: init (API version 7.41) Dec 16 13:08:51.661552 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:08:51.661569 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:08:51.661585 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:08:51.661601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:08:51.661618 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 13:08:51.661635 systemd[1]: Stopped verity-setup.service. Dec 16 13:08:51.661654 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:08:51.661672 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:08:51.661688 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:08:51.661705 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:08:51.661723 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:08:51.661741 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:08:51.661783 systemd-journald[1368]: Collecting audit messages is disabled. Dec 16 13:08:51.661826 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:08:51.661844 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:08:51.661860 kernel: ACPI: bus type drm_connector registered Dec 16 13:08:51.661888 systemd-journald[1368]: Journal started Dec 16 13:08:51.661927 systemd-journald[1368]: Runtime Journal (/run/log/journal/436e699a48af45a9b1f2c99e57087b55) is 8M, max 319.5M, 311.5M free. Dec 16 13:08:51.435785 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:08:51.457182 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 13:08:51.457646 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:08:51.663894 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:08:51.665069 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:08:51.665788 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:08:51.665987 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:08:51.666618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:08:51.666797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 16 13:08:51.667456 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:08:51.667619 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:08:51.668211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:08:51.668355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:08:51.668946 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:08:51.669080 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:08:51.669655 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:08:51.669787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:08:51.670408 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:08:51.671031 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:08:51.671644 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:08:51.672256 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:08:51.682013 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:08:51.684443 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:08:51.685804 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:08:51.686265 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:08:51.686299 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:08:51.687597 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:08:51.688966 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 13:08:51.689490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:08:51.698657 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:08:51.700200 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:08:51.700726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:08:51.701621 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:08:51.702147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:08:51.703024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:08:51.705031 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:08:51.706372 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:08:51.708337 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 13:08:51.708898 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:08:51.712493 systemd-journald[1368]: Time spent on flushing to /var/log/journal/436e699a48af45a9b1f2c99e57087b55 is 31.443ms for 1709 entries. Dec 16 13:08:51.712493 systemd-journald[1368]: System Journal (/var/log/journal/436e699a48af45a9b1f2c99e57087b55) is 8M, max 584.8M, 576.8M free. 
Dec 16 13:08:51.762319 systemd-journald[1368]: Received client request to flush runtime journal. Dec 16 13:08:51.762392 kernel: loop0: detected capacity change from 0 to 110984 Dec 16 13:08:51.714251 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:08:51.715243 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:08:51.716750 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:08:51.733209 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:08:51.759008 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:08:51.759918 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:08:51.763453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:08:51.764408 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:08:51.767895 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:08:51.778182 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:08:51.796491 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Dec 16 13:08:51.796507 systemd-tmpfiles[1439]: ACLs are not supported, ignoring. Dec 16 13:08:51.801295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:08:51.810892 kernel: loop1: detected capacity change from 0 to 224512 Dec 16 13:08:51.926904 kernel: loop2: detected capacity change from 0 to 128560 Dec 16 13:08:52.066909 kernel: loop3: detected capacity change from 0 to 1640 Dec 16 13:08:52.102932 kernel: loop4: detected capacity change from 0 to 110984 Dec 16 13:08:52.125949 kernel: loop5: detected capacity change from 0 to 224512 Dec 16 13:08:52.162983 kernel: loop6: detected capacity change from 0 to 128560 Dec 16 13:08:52.181367 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:08:52.183994 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:08:52.193893 kernel: loop7: detected capacity change from 0 to 1640 Dec 16 13:08:52.205242 (sd-merge)[1447]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-stackit'. Dec 16 13:08:52.206589 (sd-merge)[1447]: Merged extensions into '/usr'. Dec 16 13:08:52.212669 systemd[1]: Reload requested from client PID 1424 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:08:52.212686 systemd[1]: Reloading... Dec 16 13:08:52.251898 zram_generator::config[1474]: No configuration found. Dec 16 13:08:52.284283 systemd-udevd[1449]: Using default interface naming scheme 'v255'. Dec 16 13:08:52.395129 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:08:52.406896 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 13:08:52.414047 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:08:52.435611 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:08:52.435818 systemd[1]: Reloading finished in 222 ms. Dec 16 13:08:52.448883 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Dec 16 13:08:52.454237 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:08:52.462354 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Dec 16 13:08:52.476881 ldconfig[1419]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:08:52.491882 kernel: Console: switching to colour dummy device 80x25 Dec 16 13:08:52.497495 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Dec 16 13:08:52.517145 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 16 13:08:52.517169 kernel: [drm] features: -context_init Dec 16 13:08:52.517191 kernel: [drm] number of scanouts: 1 Dec 16 13:08:52.517206 kernel: [drm] number of cap sets: 0 Dec 16 13:08:52.512218 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:08:52.520257 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Dec 16 13:08:52.520337 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 16 13:08:52.520526 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 13:08:52.522158 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 16 13:08:52.522186 kernel: Console: switching to colour frame buffer device 160x50 Dec 16 13:08:52.522216 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 13:08:52.527888 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 16 13:08:52.531940 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:08:52.548031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 13:08:52.575304 systemd[1]: Starting ensure-sysext.service... Dec 16 13:08:52.577883 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:08:52.581995 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:08:52.584655 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:08:52.587069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:08:52.596478 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:08:52.597517 systemd[1]: Reload requested from client PID 1582 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:08:52.597536 systemd[1]: Reloading... Dec 16 13:08:52.601361 systemd-tmpfiles[1585]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:08:52.601391 systemd-tmpfiles[1585]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:08:52.601685 systemd-tmpfiles[1585]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 13:08:52.601998 systemd-tmpfiles[1585]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 13:08:52.602640 systemd-tmpfiles[1585]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 13:08:52.602843 systemd-tmpfiles[1585]: ACLs are not supported, ignoring. Dec 16 13:08:52.602918 systemd-tmpfiles[1585]: ACLs are not supported, ignoring. Dec 16 13:08:52.607491 systemd-tmpfiles[1585]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:08:52.607501 systemd-tmpfiles[1585]: Skipping /boot Dec 16 13:08:52.614056 systemd-tmpfiles[1585]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 16 13:08:52.614068 systemd-tmpfiles[1585]: Skipping /boot Dec 16 13:08:52.635894 zram_generator::config[1618]: No configuration found. Dec 16 13:08:52.809957 systemd[1]: Reloading finished in 212 ms. Dec 16 13:08:52.839031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:08:52.839565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:08:52.862130 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:08:52.867149 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:08:52.870970 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:08:52.876062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:08:52.879086 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:08:52.882373 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:08:52.888899 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:08:52.889068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:08:52.890186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:08:52.894442 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:08:52.898094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:08:52.898378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:08:52.898551 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:08:52.898700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:08:52.900320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:08:52.900694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:08:52.903046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:08:52.903279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:08:52.913401 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:08:52.913670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:08:52.917080 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:08:52.920608 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:08:52.929225 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:08:52.929448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:08:52.930794 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:08:52.933314 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 16 13:08:52.933452 augenrules[1703]: No rules Dec 16 13:08:52.936732 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:08:52.949226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:08:52.952115 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Dec 16 13:08:52.954271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:08:52.954410 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:08:52.954598 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:08:52.956523 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:08:52.959193 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:08:52.963148 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:08:52.965941 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:08:52.966207 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:08:52.968021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:08:52.968245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:08:52.968891 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 13:08:52.968929 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 13:08:52.974368 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:08:52.974621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:08:52.975979 kernel: PTP clock support registered Dec 16 13:08:52.976608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:08:52.976843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:08:52.978638 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:08:52.978797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:08:52.980802 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:08:52.982602 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Dec 16 13:08:52.982824 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Dec 16 13:08:52.987417 systemd[1]: Finished ensure-sysext.service. Dec 16 13:08:52.995484 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:08:52.995540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:08:53.014218 systemd-resolved[1669]: Positive Trust Anchors: Dec 16 13:08:53.014230 systemd-resolved[1669]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:08:53.014262 systemd-resolved[1669]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:08:53.018115 systemd-networkd[1584]: lo: Link UP Dec 16 13:08:53.018123 systemd-networkd[1584]: lo: Gained carrier Dec 16 13:08:53.018742 systemd-resolved[1669]: Using system hostname 'ci-4459-2-2-3-ab2e4a938e'. Dec 16 13:08:53.019239 systemd-networkd[1584]: Enumeration completed Dec 16 13:08:53.019421 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:08:53.019518 systemd-networkd[1584]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:08:53.019522 systemd-networkd[1584]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:08:53.020588 systemd-networkd[1584]: eth0: Link UP Dec 16 13:08:53.020824 systemd-networkd[1584]: eth0: Gained carrier Dec 16 13:08:53.020845 systemd-networkd[1584]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:08:53.021030 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:08:53.021557 systemd[1]: Reached target network.target - Network. Dec 16 13:08:53.022079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:08:53.024831 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:08:53.027711 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:08:53.052113 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:08:53.054267 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:08:53.054357 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:08:53.055607 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:08:53.059266 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:08:53.060129 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:08:53.061208 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:08:53.065164 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:08:53.065552 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:08:53.065912 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:08:53.065940 systemd[1]: Reached target paths.target - Path Units. 
Dec 16 13:08:53.066276 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:08:53.069081 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:08:53.072601 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:08:53.075257 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:08:53.075787 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:08:53.076145 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:08:53.079279 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:08:53.083096 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:08:53.084721 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:08:53.085258 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:08:53.086783 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:08:53.088352 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:08:53.088777 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:08:53.088807 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:08:53.091522 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 13:08:53.093129 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:08:53.095403 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:08:53.097459 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:08:53.098692 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:08:53.100896 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:08:53.104891 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:53.107576 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:08:53.109709 jq[1739]: false Dec 16 13:08:53.109980 systemd-networkd[1584]: eth0: DHCPv4 address 10.0.21.22/25, gateway 10.0.21.1 acquired from 10.0.21.1 Dec 16 13:08:53.110361 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:08:53.111414 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:08:53.112669 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:08:53.113915 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:08:53.115123 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:08:53.116276 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:08:53.118519 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:08:53.120885 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:08:53.121264 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
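At this point systemd-networkd has taken a DHCPv4 lease for eth0: 10.0.21.22/25 with gateway 10.0.21.1, as logged just above. A minimal Python sketch (standard library only) of the subnet arithmetic implied by that lease; the address, prefix length and gateway are the values from the log entry, everything else is illustration:

    import ipaddress

    # Values taken from the systemd-networkd DHCPv4 log entry above.
    lease = ipaddress.ip_interface("10.0.21.22/25")
    gateway = ipaddress.ip_address("10.0.21.1")

    print(lease.network)                    # 10.0.21.0/25
    print(lease.network.num_addresses)      # 128 addresses in a /25
    print(gateway in lease.network)         # True: the gateway is on-link
    print(lease.network.broadcast_address)  # 10.0.21.127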
Dec 16 13:08:53.121724 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:08:53.122678 extend-filesystems[1742]: Found /dev/vda6 Dec 16 13:08:53.125945 google_oslogin_nss_cache[1743]: oslogin_cache_refresh[1743]: Refreshing passwd entry cache Dec 16 13:08:53.124992 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:08:53.124625 oslogin_cache_refresh[1743]: Refreshing passwd entry cache Dec 16 13:08:53.127575 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:08:53.130320 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:08:53.130957 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:08:53.131395 extend-filesystems[1742]: Found /dev/vda9 Dec 16 13:08:53.140720 extend-filesystems[1742]: Checking size of /dev/vda9 Dec 16 13:08:53.141231 jq[1755]: true Dec 16 13:08:53.132514 oslogin_cache_refresh[1743]: Failure getting users, quitting Dec 16 13:08:53.132026 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:08:53.141388 google_oslogin_nss_cache[1743]: oslogin_cache_refresh[1743]: Failure getting users, quitting Dec 16 13:08:53.141388 google_oslogin_nss_cache[1743]: oslogin_cache_refresh[1743]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:08:53.141388 google_oslogin_nss_cache[1743]: oslogin_cache_refresh[1743]: Refreshing group entry cache Dec 16 13:08:53.141388 google_oslogin_nss_cache[1743]: oslogin_cache_refresh[1743]: Failure getting groups, quitting Dec 16 13:08:53.141388 google_oslogin_nss_cache[1743]: oslogin_cache_refresh[1743]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:08:53.132532 oslogin_cache_refresh[1743]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:08:53.132183 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:08:53.132571 oslogin_cache_refresh[1743]: Refreshing group entry cache Dec 16 13:08:53.138646 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:08:53.139801 oslogin_cache_refresh[1743]: Failure getting groups, quitting Dec 16 13:08:53.138813 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:08:53.146026 extend-filesystems[1742]: Resized partition /dev/vda9 Dec 16 13:08:53.139812 oslogin_cache_refresh[1743]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:08:53.143138 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:08:53.146565 update_engine[1752]: I20251216 13:08:53.146289 1752 main.cc:92] Flatcar Update Engine starting Dec 16 13:08:53.143355 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:08:53.144587 (ntainerd)[1770]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:08:53.147364 jq[1769]: true Dec 16 13:08:53.149012 extend-filesystems[1778]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:08:53.149698 chronyd[1734]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 13:08:53.151709 chronyd[1734]: Loaded seccomp filter (level 2) Dec 16 13:08:53.152254 systemd[1]: Started chronyd.service - NTP client/server. 
Dec 16 13:08:53.159123 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 12499963 blocks Dec 16 13:08:53.163054 tar[1761]: linux-amd64/LICENSE Dec 16 13:08:53.163266 tar[1761]: linux-amd64/helm Dec 16 13:08:53.179054 dbus-daemon[1737]: [system] SELinux support is enabled Dec 16 13:08:53.180203 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:08:53.198448 update_engine[1752]: I20251216 13:08:53.181087 1752 update_check_scheduler.cc:74] Next update check in 3m45s Dec 16 13:08:53.185622 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:08:53.185642 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:08:53.186669 systemd-logind[1751]: New seat seat0. Dec 16 13:08:53.186707 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:08:53.186721 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:08:53.188987 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:08:53.191487 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:08:53.197978 systemd-logind[1751]: Watching system buttons on /dev/input/event3 (Power Button) Dec 16 13:08:53.197997 systemd-logind[1751]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:08:53.198238 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:08:53.238493 locksmithd[1802]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:08:53.314523 containerd[1770]: time="2025-12-16T13:08:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:08:53.315184 containerd[1770]: time="2025-12-16T13:08:53.315107190Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:08:53.331945 containerd[1770]: time="2025-12-16T13:08:53.331780127Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.016µs" Dec 16 13:08:53.331945 containerd[1770]: time="2025-12-16T13:08:53.331826832Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:08:53.331945 containerd[1770]: time="2025-12-16T13:08:53.331853887Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:08:53.332966 containerd[1770]: time="2025-12-16T13:08:53.332938990Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:08:53.333050 containerd[1770]: time="2025-12-16T13:08:53.333034185Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:08:53.333136 containerd[1770]: time="2025-12-16T13:08:53.333120406Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:08:53.333273 containerd[1770]: time="2025-12-16T13:08:53.333252029Z" level=info msg="skip loading plugin" error="no scratch file 
generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:08:53.333339 containerd[1770]: time="2025-12-16T13:08:53.333325253Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:08:53.333743 containerd[1770]: time="2025-12-16T13:08:53.333717112Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:08:53.333828 containerd[1770]: time="2025-12-16T13:08:53.333812808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:08:53.333922 containerd[1770]: time="2025-12-16T13:08:53.333904881Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:08:53.333986 containerd[1770]: time="2025-12-16T13:08:53.333972504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:08:53.334824 containerd[1770]: time="2025-12-16T13:08:53.334804078Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:08:53.335252 containerd[1770]: time="2025-12-16T13:08:53.335227079Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:08:53.335355 containerd[1770]: time="2025-12-16T13:08:53.335336558Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:08:53.335432 containerd[1770]: time="2025-12-16T13:08:53.335417320Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:08:53.335528 containerd[1770]: time="2025-12-16T13:08:53.335512234Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:08:53.336224 containerd[1770]: time="2025-12-16T13:08:53.336150010Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:08:53.336384 containerd[1770]: time="2025-12-16T13:08:53.336343276Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:08:53.336918 bash[1801]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:08:53.338095 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:08:53.345341 systemd[1]: Starting sshkeys.service... Dec 16 13:08:53.361016 sshd_keygen[1767]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:08:53.372232 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 13:08:53.375247 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 13:08:53.396081 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:08:53.405663 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:53.400278 systemd[1]: Starting issuegen.service - Generate /run/issue... 
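The "Updated /home/core/.ssh/authorized_keys" message above comes from the SSH-keys agent rewriting the core user's key file. A rough sketch of the same idea, assuming a placeholder key string (not the key on this machine) and the permissions sshd insists on:

    import os
    from pathlib import Path

    # Illustrative only: the agent in the log does this for the "core" user.
    # The key below is a placeholder, not a key from this host.
    ssh_dir = Path.home() / ".ssh"
    authorized = ssh_dir / "authorized_keys"
    pubkey = "ssh-ed25519 AAAA-placeholder-key user@example"

    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    existing = authorized.read_text().splitlines() if authorized.exists() else []
    if pubkey not in existing:
        with authorized.open("a") as f:
            f.write(pubkey + "\n")
    authorized.chmod(0o600)  # sshd rejects group/world-writable key files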
Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.407859711Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.407939709Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.407956017Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.407967410Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.407979494Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.407989276Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.408000145Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.408014696Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.408025891Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.408038971Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.408049088Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.408060755Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.408175170Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:08:53.408782 containerd[1770]: time="2025-12-16T13:08:53.408192441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408205040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408214278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408223311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408231922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408242199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408250859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 
16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408260877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408269742Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408279436Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408324240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408336799Z" level=info msg="Start snapshots syncer" Dec 16 13:08:53.409058 containerd[1770]: time="2025-12-16T13:08:53.408362029Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:08:53.409249 containerd[1770]: time="2025-12-16T13:08:53.408590739Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:08:53.409249 containerd[1770]: time="2025-12-16T13:08:53.408635403Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:08:53.410425 containerd[1770]: time="2025-12-16T13:08:53.410404291Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:08:53.410574 containerd[1770]: time="2025-12-16T13:08:53.410561173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:08:53.410626 containerd[1770]: time="2025-12-16T13:08:53.410618486Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:08:53.410661 containerd[1770]: time="2025-12-16T13:08:53.410653461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:08:53.410698 containerd[1770]: time="2025-12-16T13:08:53.410690644Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:08:53.410734 containerd[1770]: time="2025-12-16T13:08:53.410727377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:08:53.410769 containerd[1770]: time="2025-12-16T13:08:53.410762355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:08:53.410805 containerd[1770]: time="2025-12-16T13:08:53.410797715Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:08:53.410859 containerd[1770]: time="2025-12-16T13:08:53.410850791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:08:53.410934 containerd[1770]: time="2025-12-16T13:08:53.410925269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:08:53.410969 containerd[1770]: time="2025-12-16T13:08:53.410961811Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:08:53.411027 containerd[1770]: time="2025-12-16T13:08:53.411019379Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:08:53.411069 containerd[1770]: time="2025-12-16T13:08:53.411060116Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:08:53.411105 containerd[1770]: time="2025-12-16T13:08:53.411097020Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:08:53.411146 containerd[1770]: time="2025-12-16T13:08:53.411138109Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:08:53.411177 containerd[1770]: time="2025-12-16T13:08:53.411170840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:08:53.411213 containerd[1770]: time="2025-12-16T13:08:53.411206077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:08:53.411260 containerd[1770]: time="2025-12-16T13:08:53.411252111Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:08:53.411300 containerd[1770]: time="2025-12-16T13:08:53.411293403Z" level=info msg="runtime interface created" Dec 16 13:08:53.411327 containerd[1770]: time="2025-12-16T13:08:53.411321875Z" level=info msg="created NRI interface" Dec 16 13:08:53.411362 containerd[1770]: time="2025-12-16T13:08:53.411354891Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:08:53.411400 containerd[1770]: time="2025-12-16T13:08:53.411393356Z" level=info msg="Connect containerd service" Dec 16 13:08:53.411444 containerd[1770]: time="2025-12-16T13:08:53.411437059Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 
16 13:08:53.412069 containerd[1770]: time="2025-12-16T13:08:53.412051020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:08:53.429582 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:08:53.429854 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:08:53.435935 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:08:53.474961 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:08:53.479446 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:08:53.481594 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:08:53.483473 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:08:53.517599 containerd[1770]: time="2025-12-16T13:08:53.517400444Z" level=info msg="Start subscribing containerd event" Dec 16 13:08:53.517599 containerd[1770]: time="2025-12-16T13:08:53.517500404Z" level=info msg="Start recovering state" Dec 16 13:08:53.517850 containerd[1770]: time="2025-12-16T13:08:53.517745249Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:08:53.517936 containerd[1770]: time="2025-12-16T13:08:53.517896540Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:08:53.518148 containerd[1770]: time="2025-12-16T13:08:53.518120125Z" level=info msg="Start event monitor" Dec 16 13:08:53.518634 containerd[1770]: time="2025-12-16T13:08:53.518259005Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:08:53.518634 containerd[1770]: time="2025-12-16T13:08:53.518282673Z" level=info msg="Start streaming server" Dec 16 13:08:53.518634 containerd[1770]: time="2025-12-16T13:08:53.518312410Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:08:53.518634 containerd[1770]: time="2025-12-16T13:08:53.518327223Z" level=info msg="runtime interface starting up..." Dec 16 13:08:53.518634 containerd[1770]: time="2025-12-16T13:08:53.518377504Z" level=info msg="starting plugins..." Dec 16 13:08:53.518634 containerd[1770]: time="2025-12-16T13:08:53.518432818Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:08:53.519226 containerd[1770]: time="2025-12-16T13:08:53.519192887Z" level=info msg="containerd successfully booted in 0.205440s" Dec 16 13:08:53.519299 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:08:53.642044 tar[1761]: linux-amd64/README.md Dec 16 13:08:53.674919 kernel: EXT4-fs (vda9): resized filesystem to 12499963 Dec 16 13:08:53.698930 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:08:53.702977 extend-filesystems[1778]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 13:08:53.702977 extend-filesystems[1778]: old_desc_blocks = 1, new_desc_blocks = 6 Dec 16 13:08:53.702977 extend-filesystems[1778]: The filesystem on /dev/vda9 is now 12499963 (4k) blocks long. Dec 16 13:08:53.704269 extend-filesystems[1742]: Resized filesystem in /dev/vda9 Dec 16 13:08:53.704367 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:08:53.704574 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
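The online resize logged here grows the root filesystem on /dev/vda9 from 1,617,920 to 12,499,963 blocks of 4 KiB. A quick sketch of the arithmetic, using only the numbers reported by EXT4-fs and resize2fs above:

    # Block counts come from the EXT4-fs / resize2fs messages above; 4 KiB blocks.
    BLOCK = 4096
    before_blocks = 1_617_920
    after_blocks = 12_499_963

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(before_blocks):.2f} GiB")  # ~6.17 GiB
    print(f"after:  {gib(after_blocks):.2f} GiB")   # ~47.68 GiB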
Dec 16 13:08:54.112935 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:54.162598 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:08:54.165954 systemd[1]: Started sshd@0-10.0.21.22:22-147.75.109.163:45430.service - OpenSSH per-connection server daemon (147.75.109.163:45430). Dec 16 13:08:54.424959 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:54.769301 systemd-networkd[1584]: eth0: Gained IPv6LL Dec 16 13:08:54.774720 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:08:54.778843 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:08:54.787633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:08:54.791792 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:08:54.856375 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:08:55.205693 sshd[1860]: Accepted publickey for core from 147.75.109.163 port 45430 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:08:55.210372 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:55.237228 systemd-logind[1751]: New session 1 of user core. Dec 16 13:08:55.239429 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:08:55.243824 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:08:55.293652 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:08:55.298434 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:08:55.328331 (systemd)[1878]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 13:08:55.332824 systemd-logind[1751]: New session c1 of user core. Dec 16 13:08:55.511978 systemd[1878]: Queued start job for default target default.target. Dec 16 13:08:55.535362 systemd[1878]: Created slice app.slice - User Application Slice. Dec 16 13:08:55.535408 systemd[1878]: Reached target paths.target - Paths. Dec 16 13:08:55.535466 systemd[1878]: Reached target timers.target - Timers. Dec 16 13:08:55.537244 systemd[1878]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:08:55.562299 systemd[1878]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:08:55.562463 systemd[1878]: Reached target sockets.target - Sockets. Dec 16 13:08:55.562519 systemd[1878]: Reached target basic.target - Basic System. Dec 16 13:08:55.562570 systemd[1878]: Reached target default.target - Main User Target. Dec 16 13:08:55.562608 systemd[1878]: Startup finished in 217ms. Dec 16 13:08:55.563339 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:08:55.567613 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:08:56.128091 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:56.262448 systemd[1]: Started sshd@1-10.0.21.22:22-147.75.109.163:53178.service - OpenSSH per-connection server daemon (147.75.109.163:53178). Dec 16 13:08:56.461947 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:08:56.618201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
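The sshd line above logs the accepted key as "RSA SHA256:cQMxi...", i.e. the base64 of the SHA-256 digest of the raw key blob, with padding stripped. A small sketch of that formula, assuming the third-party cryptography package and generating a throwaway ed25519 key purely so the example is self-contained (sshd applied the same formula to the RSA key it actually accepted):

    import base64
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Throwaway key so the sketch runs on its own; not the key from the log.
    pub = Ed25519PrivateKey.generate().public_key().public_bytes(
        encoding=serialization.Encoding.OpenSSH,
        format=serialization.PublicFormat.OpenSSH,
    ).decode()

    blob = base64.b64decode(pub.split()[1])  # raw wire-format key blob
    digest = hashlib.sha256(blob).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))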
Dec 16 13:08:56.626541 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:08:57.273824 sshd[1890]: Accepted publickey for core from 147.75.109.163 port 53178 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:08:57.277020 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:57.290001 systemd-logind[1751]: New session 2 of user core. Dec 16 13:08:57.312405 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 13:08:57.798387 kubelet[1899]: E1216 13:08:57.798264 1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:08:57.803855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:08:57.804199 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:08:57.804944 systemd[1]: kubelet.service: Consumed 1.723s CPU time, 268.2M memory peak. Dec 16 13:08:57.948109 sshd[1905]: Connection closed by 147.75.109.163 port 53178 Dec 16 13:08:57.949229 sshd-session[1890]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:57.957007 systemd[1]: sshd@1-10.0.21.22:22-147.75.109.163:53178.service: Deactivated successfully. Dec 16 13:08:57.960588 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 13:08:57.962387 systemd-logind[1751]: Session 2 logged out. Waiting for processes to exit. Dec 16 13:08:57.964620 systemd-logind[1751]: Removed session 2. Dec 16 13:08:58.134711 systemd[1]: Started sshd@2-10.0.21.22:22-147.75.109.163:53190.service - OpenSSH per-connection server daemon (147.75.109.163:53190). Dec 16 13:08:59.149717 sshd[1916]: Accepted publickey for core from 147.75.109.163 port 53190 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:08:59.152855 sshd-session[1916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:08:59.165331 systemd-logind[1751]: New session 3 of user core. Dec 16 13:08:59.190398 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:08:59.825679 sshd[1923]: Connection closed by 147.75.109.163 port 53190 Dec 16 13:08:59.826593 sshd-session[1916]: pam_unix(sshd:session): session closed for user core Dec 16 13:08:59.834222 systemd[1]: sshd@2-10.0.21.22:22-147.75.109.163:53190.service: Deactivated successfully. Dec 16 13:08:59.838698 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:08:59.843146 systemd-logind[1751]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:08:59.845576 systemd-logind[1751]: Removed session 3. 
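The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally generated later by kubeadm during init/join. As a hedged sketch only, this is roughly what a minimal KubeletConfiguration (kubelet.config.k8s.io/v1beta1) looks like, written from Python to keep the examples in one language; the field values are illustrative assumptions, not what this node will eventually use:

    from pathlib import Path

    # Hypothetical minimal KubeletConfiguration; kubeadm normally writes this
    # file during "kubeadm init"/"kubeadm join", which has not run yet here.
    config_lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",                      # matches SystemdCgroup=true in containerd above
        "staticPodPath: /etc/kubernetes/manifests",
    ]

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(config_lines) + "\n")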
Dec 16 13:09:00.139994 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:09:00.150068 coreos-metadata[1736]: Dec 16 13:09:00.149 WARN failed to locate config-drive, using the metadata service API instead Dec 16 13:09:00.182205 coreos-metadata[1736]: Dec 16 13:09:00.182 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 16 13:09:00.487970 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 13:09:00.505293 coreos-metadata[1823]: Dec 16 13:09:00.505 WARN failed to locate config-drive, using the metadata service API instead Dec 16 13:09:00.523630 coreos-metadata[1736]: Dec 16 13:09:00.523 INFO Fetch successful Dec 16 13:09:00.523630 coreos-metadata[1736]: Dec 16 13:09:00.523 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 13:09:00.545913 coreos-metadata[1823]: Dec 16 13:09:00.545 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 16 13:09:00.774771 coreos-metadata[1736]: Dec 16 13:09:00.774 INFO Fetch successful Dec 16 13:09:00.774771 coreos-metadata[1736]: Dec 16 13:09:00.774 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 16 13:09:00.776649 coreos-metadata[1823]: Dec 16 13:09:00.776 INFO Fetch successful Dec 16 13:09:00.776649 coreos-metadata[1823]: Dec 16 13:09:00.776 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 13:09:02.598667 coreos-metadata[1736]: Dec 16 13:09:02.598 INFO Fetch successful Dec 16 13:09:02.598667 coreos-metadata[1736]: Dec 16 13:09:02.598 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 16 13:09:02.818249 coreos-metadata[1823]: Dec 16 13:09:02.818 INFO Fetch successful Dec 16 13:09:02.822672 coreos-metadata[1736]: Dec 16 13:09:02.822 INFO Fetch successful Dec 16 13:09:02.822672 coreos-metadata[1736]: Dec 16 13:09:02.822 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 16 13:09:02.823807 unknown[1823]: wrote ssh authorized keys file for user: core Dec 16 13:09:02.866173 update-ssh-keys[1933]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:09:02.868694 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 13:09:02.874273 systemd[1]: Finished sshkeys.service. Dec 16 13:09:02.942504 coreos-metadata[1736]: Dec 16 13:09:02.942 INFO Fetch successful Dec 16 13:09:02.942772 coreos-metadata[1736]: Dec 16 13:09:02.942 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 16 13:09:03.085364 coreos-metadata[1736]: Dec 16 13:09:03.085 INFO Fetch successful Dec 16 13:09:03.162759 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:09:03.163313 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:09:03.163472 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:09:03.166942 systemd[1]: Startup finished in 4.686s (kernel) + 15.024s (initrd) + 12.521s (userspace) = 32.232s. Dec 16 13:09:08.056190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:09:08.059995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:08.367511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
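With no config-drive present, coreos-metadata falls back to the link-local metadata service and fetches the URLs logged above. A minimal sketch of the same requests with the standard library; it only works from inside an instance that exposes this service, and the endpoints are exactly the ones in the log:

    import json
    from urllib.request import urlopen

    BASE = "http://169.254.169.254"

    # OpenStack-style metadata document, as fetched by the agent above.
    with urlopen(f"{BASE}/openstack/2012-08-10/meta_data.json", timeout=5) as r:
        meta = json.load(r)
    print(meta.get("hostname"))

    # EC2-style path used for the SSH key, also seen in the log.
    with urlopen(f"{BASE}/latest/meta-data/public-keys/0/openssh-key", timeout=5) as r:
        print(r.read().decode().strip())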
Dec 16 13:09:08.374231 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:09:08.427657 kubelet[1949]: E1216 13:09:08.427574 1949 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:09:08.432819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:09:08.432981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:09:08.433475 systemd[1]: kubelet.service: Consumed 315ms CPU time, 111.1M memory peak. Dec 16 13:09:10.007412 systemd[1]: Started sshd@3-10.0.21.22:22-147.75.109.163:55434.service - OpenSSH per-connection server daemon (147.75.109.163:55434). Dec 16 13:09:11.031176 sshd[1962]: Accepted publickey for core from 147.75.109.163 port 55434 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:09:11.032411 sshd-session[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:11.036778 systemd-logind[1751]: New session 4 of user core. Dec 16 13:09:11.054106 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:09:11.713578 sshd[1965]: Connection closed by 147.75.109.163 port 55434 Dec 16 13:09:11.714366 sshd-session[1962]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:11.719223 systemd[1]: sshd@3-10.0.21.22:22-147.75.109.163:55434.service: Deactivated successfully. Dec 16 13:09:11.722322 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:09:11.724669 systemd-logind[1751]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:09:11.727090 systemd-logind[1751]: Removed session 4. Dec 16 13:09:11.889602 systemd[1]: Started sshd@4-10.0.21.22:22-147.75.109.163:52436.service - OpenSSH per-connection server daemon (147.75.109.163:52436). Dec 16 13:09:12.901582 sshd[1971]: Accepted publickey for core from 147.75.109.163 port 52436 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:09:12.903946 sshd-session[1971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:12.915142 systemd-logind[1751]: New session 5 of user core. Dec 16 13:09:12.930288 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:09:13.575142 sshd[1974]: Connection closed by 147.75.109.163 port 52436 Dec 16 13:09:13.576109 sshd-session[1971]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:13.583526 systemd[1]: sshd@4-10.0.21.22:22-147.75.109.163:52436.service: Deactivated successfully. Dec 16 13:09:13.587509 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:09:13.591905 systemd-logind[1751]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:09:13.594274 systemd-logind[1751]: Removed session 5. Dec 16 13:09:13.755947 systemd[1]: Started sshd@5-10.0.21.22:22-147.75.109.163:52442.service - OpenSSH per-connection server daemon (147.75.109.163:52442). 
Dec 16 13:09:14.784758 sshd[1980]: Accepted publickey for core from 147.75.109.163 port 52442 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:09:14.787707 sshd-session[1980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:14.797176 systemd-logind[1751]: New session 6 of user core. Dec 16 13:09:14.808222 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:09:15.463351 sshd[1983]: Connection closed by 147.75.109.163 port 52442 Dec 16 13:09:15.464318 sshd-session[1980]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:15.471796 systemd[1]: sshd@5-10.0.21.22:22-147.75.109.163:52442.service: Deactivated successfully. Dec 16 13:09:15.475749 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:09:15.480215 systemd-logind[1751]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:09:15.482413 systemd-logind[1751]: Removed session 6. Dec 16 13:09:15.638777 systemd[1]: Started sshd@6-10.0.21.22:22-147.75.109.163:52454.service - OpenSSH per-connection server daemon (147.75.109.163:52454). Dec 16 13:09:16.672599 sshd[1989]: Accepted publickey for core from 147.75.109.163 port 52454 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:09:16.675288 sshd-session[1989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:16.681594 systemd-logind[1751]: New session 7 of user core. Dec 16 13:09:16.693028 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:09:16.940810 chronyd[1734]: Selected source PHC0 Dec 16 13:09:16.940905 chronyd[1734]: System clock wrong by 1.203660 seconds Dec 16 13:09:18.144759 systemd-resolved[1669]: Clock change detected. Flushing caches. Dec 16 13:09:18.144620 chronyd[1734]: System clock was stepped by 1.203660 seconds Dec 16 13:09:18.424549 sudo[1993]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:09:18.425173 sudo[1993]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:09:18.448738 sudo[1993]: pam_unix(sudo:session): session closed for user root Dec 16 13:09:18.608482 sshd[1992]: Connection closed by 147.75.109.163 port 52454 Dec 16 13:09:18.609224 sshd-session[1989]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:18.615068 systemd[1]: sshd@6-10.0.21.22:22-147.75.109.163:52454.service: Deactivated successfully. Dec 16 13:09:18.616766 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:09:18.618011 systemd-logind[1751]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:09:18.619995 systemd-logind[1751]: Removed session 7. Dec 16 13:09:18.779455 systemd[1]: Started sshd@7-10.0.21.22:22-147.75.109.163:52464.service - OpenSSH per-connection server daemon (147.75.109.163:52464). Dec 16 13:09:19.802004 sshd[1999]: Accepted publickey for core from 147.75.109.163 port 52464 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:09:19.804263 sshd-session[1999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:19.806430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:09:19.809380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:19.817854 systemd-logind[1751]: New session 8 of user core. Dec 16 13:09:19.838851 systemd[1]: Started session-8.scope - Session 8 of User core. 
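In the entries above chronyd selects the PHC0 reference and steps the system clock by 1.203660 seconds, which is why systemd-resolved flushes its caches. A small sketch for inspecting the resulting state, assuming the chronyc client that ships alongside the chronyd in this log:

    import subprocess

    # "chronyc tracking" reports the selected reference (PHC0 in this boot),
    # the current system time error and the last applied offset.
    out = subprocess.run(["chronyc", "tracking"],
                         capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if line.startswith(("Reference ID", "System time", "Last offset")):
            print(line)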
Dec 16 13:09:20.018875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:20.030415 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:09:20.115365 kubelet[2011]: E1216 13:09:20.115043 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:09:20.119600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:09:20.120000 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:09:20.121001 systemd[1]: kubelet.service: Consumed 270ms CPU time, 111.3M memory peak. Dec 16 13:09:20.320910 sudo[2024]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:09:20.321298 sudo[2024]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:09:20.328676 sudo[2024]: pam_unix(sudo:session): session closed for user root Dec 16 13:09:20.334594 sudo[2023]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:09:20.334867 sudo[2023]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:09:20.348721 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:09:20.405076 augenrules[2046]: No rules Dec 16 13:09:20.405794 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:09:20.406033 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:09:20.407202 sudo[2023]: pam_unix(sudo:session): session closed for user root Dec 16 13:09:20.564340 sshd[2005]: Connection closed by 147.75.109.163 port 52464 Dec 16 13:09:20.565567 sshd-session[1999]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:20.574239 systemd[1]: sshd@7-10.0.21.22:22-147.75.109.163:52464.service: Deactivated successfully. Dec 16 13:09:20.577219 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:09:20.579367 systemd-logind[1751]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:09:20.581022 systemd-logind[1751]: Removed session 8. Dec 16 13:09:20.750378 systemd[1]: Started sshd@8-10.0.21.22:22-147.75.109.163:52472.service - OpenSSH per-connection server daemon (147.75.109.163:52472). Dec 16 13:09:21.755787 sshd[2055]: Accepted publickey for core from 147.75.109.163 port 52472 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:09:21.758769 sshd-session[2055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:09:21.768355 systemd-logind[1751]: New session 9 of user core. Dec 16 13:09:21.789775 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:09:22.270100 sudo[2059]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:09:22.270392 sudo[2059]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:09:22.692628 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 13:09:22.717062 (dockerd)[2086]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:09:23.098341 dockerd[2086]: time="2025-12-16T13:09:23.098189939Z" level=info msg="Starting up" Dec 16 13:09:23.102310 dockerd[2086]: time="2025-12-16T13:09:23.102286343Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:09:23.124634 dockerd[2086]: time="2025-12-16T13:09:23.124590901Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:09:23.190138 dockerd[2086]: time="2025-12-16T13:09:23.190026862Z" level=info msg="Loading containers: start." Dec 16 13:09:23.209574 kernel: Initializing XFRM netlink socket Dec 16 13:09:23.615411 systemd-networkd[1584]: docker0: Link UP Dec 16 13:09:23.622671 dockerd[2086]: time="2025-12-16T13:09:23.622624319Z" level=info msg="Loading containers: done." Dec 16 13:09:23.636881 dockerd[2086]: time="2025-12-16T13:09:23.636814369Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:09:23.637045 dockerd[2086]: time="2025-12-16T13:09:23.636928145Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:09:23.637045 dockerd[2086]: time="2025-12-16T13:09:23.637007849Z" level=info msg="Initializing buildkit" Dec 16 13:09:23.668474 dockerd[2086]: time="2025-12-16T13:09:23.668399906Z" level=info msg="Completed buildkit initialization" Dec 16 13:09:23.672185 dockerd[2086]: time="2025-12-16T13:09:23.672124120Z" level=info msg="Daemon has completed initialization" Dec 16 13:09:23.672294 dockerd[2086]: time="2025-12-16T13:09:23.672219147Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:09:23.672598 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:09:25.016731 containerd[1770]: time="2025-12-16T13:09:25.016642761Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 13:09:25.685747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650039643.mount: Deactivated successfully. 
Dec 16 13:09:26.829303 containerd[1770]: time="2025-12-16T13:09:26.829237883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:26.830594 containerd[1770]: time="2025-12-16T13:09:26.830554817Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072281" Dec 16 13:09:26.834582 containerd[1770]: time="2025-12-16T13:09:26.834544794Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:26.837261 containerd[1770]: time="2025-12-16T13:09:26.837222690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:26.838166 containerd[1770]: time="2025-12-16T13:09:26.838138629Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.821436995s" Dec 16 13:09:26.838407 containerd[1770]: time="2025-12-16T13:09:26.838391309Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 13:09:26.840336 containerd[1770]: time="2025-12-16T13:09:26.840294178Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 13:09:28.880992 containerd[1770]: time="2025-12-16T13:09:28.880925638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:28.882700 containerd[1770]: time="2025-12-16T13:09:28.882638666Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992030" Dec 16 13:09:28.884334 containerd[1770]: time="2025-12-16T13:09:28.884292903Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:28.887885 containerd[1770]: time="2025-12-16T13:09:28.887822656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:28.889686 containerd[1770]: time="2025-12-16T13:09:28.889636738Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 2.04921043s" Dec 16 13:09:28.889752 containerd[1770]: time="2025-12-16T13:09:28.889686071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 
13:09:28.890379 containerd[1770]: time="2025-12-16T13:09:28.890304532Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 13:09:30.142978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 13:09:30.146475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:30.282714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:30.287230 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:09:30.328161 kubelet[2383]: E1216 13:09:30.328049 2383 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:09:30.331074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:09:30.331215 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:09:30.331534 systemd[1]: kubelet.service: Consumed 156ms CPU time, 113.3M memory peak. Dec 16 13:09:30.461750 containerd[1770]: time="2025-12-16T13:09:30.461606384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:30.463200 containerd[1770]: time="2025-12-16T13:09:30.463162244Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404268" Dec 16 13:09:30.468486 containerd[1770]: time="2025-12-16T13:09:30.468453253Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:30.474123 containerd[1770]: time="2025-12-16T13:09:30.474077826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:30.474890 containerd[1770]: time="2025-12-16T13:09:30.474850984Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.584486166s" Dec 16 13:09:30.474890 containerd[1770]: time="2025-12-16T13:09:30.474886519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 13:09:30.475307 containerd[1770]: time="2025-12-16T13:09:30.475286037Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 13:09:31.615831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3079187699.mount: Deactivated successfully. 
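The control-plane image pulls above report both the transferred size and the pull duration, so the effective throughput can be read straight off the log. A small sketch using exactly those numbers:

    # Sizes (bytes) and durations (s) as reported by containerd's PullImage messages.
    pulls = {
        "kube-apiserver:v1.32.10":          (29_068_782, 1.821436995),
        "kube-controller-manager:v1.32.10": (26_649_046, 2.04921043),
        "kube-scheduler:v1.32.10":          (21_061_302, 1.584486166),
    }
    for image, (size, seconds) in pulls.items():
        print(f"{image}: {size / seconds / 1e6:.1f} MB/s")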
Dec 16 13:09:31.932208 containerd[1770]: time="2025-12-16T13:09:31.932148029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:31.934284 containerd[1770]: time="2025-12-16T13:09:31.934251261Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161449" Dec 16 13:09:31.935884 containerd[1770]: time="2025-12-16T13:09:31.935860420Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:31.938862 containerd[1770]: time="2025-12-16T13:09:31.938839352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:31.939227 containerd[1770]: time="2025-12-16T13:09:31.939186677Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.463876537s" Dec 16 13:09:31.939254 containerd[1770]: time="2025-12-16T13:09:31.939226850Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 13:09:31.940596 containerd[1770]: time="2025-12-16T13:09:31.939724815Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 13:09:32.472687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount566431721.mount: Deactivated successfully. 
Dec 16 13:09:33.107441 containerd[1770]: time="2025-12-16T13:09:33.107385807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:33.108596 containerd[1770]: time="2025-12-16T13:09:33.108571599Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565333" Dec 16 13:09:33.110479 containerd[1770]: time="2025-12-16T13:09:33.110428801Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:33.113237 containerd[1770]: time="2025-12-16T13:09:33.113200244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:33.113913 containerd[1770]: time="2025-12-16T13:09:33.113887726Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.174138478s" Dec 16 13:09:33.113967 containerd[1770]: time="2025-12-16T13:09:33.113918871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 13:09:33.114333 containerd[1770]: time="2025-12-16T13:09:33.114315995Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:09:33.734790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635426036.mount: Deactivated successfully. 
Dec 16 13:09:33.740848 containerd[1770]: time="2025-12-16T13:09:33.740769842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:09:33.749622 containerd[1770]: time="2025-12-16T13:09:33.749558206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Dec 16 13:09:33.752888 containerd[1770]: time="2025-12-16T13:09:33.752848040Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:09:33.755291 containerd[1770]: time="2025-12-16T13:09:33.755186178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:09:33.756062 containerd[1770]: time="2025-12-16T13:09:33.756024607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 641.678486ms" Dec 16 13:09:33.756062 containerd[1770]: time="2025-12-16T13:09:33.756057683Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:09:33.756582 containerd[1770]: time="2025-12-16T13:09:33.756518853Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 13:09:34.408790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200871391.mount: Deactivated successfully. 
Dec 16 13:09:35.909054 containerd[1770]: time="2025-12-16T13:09:35.908977988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:35.910082 containerd[1770]: time="2025-12-16T13:09:35.910044002Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682130" Dec 16 13:09:35.911358 containerd[1770]: time="2025-12-16T13:09:35.911327862Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:35.913979 containerd[1770]: time="2025-12-16T13:09:35.913947606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:35.914734 containerd[1770]: time="2025-12-16T13:09:35.914710065Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.158141453s" Dec 16 13:09:35.914789 containerd[1770]: time="2025-12-16T13:09:35.914740493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 13:09:39.050462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:39.051356 systemd[1]: kubelet.service: Consumed 156ms CPU time, 113.3M memory peak. Dec 16 13:09:39.057119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:39.083513 systemd[1]: Reload requested from client PID 2552 ('systemctl') (unit session-9.scope)... Dec 16 13:09:39.083537 systemd[1]: Reloading... Dec 16 13:09:39.145566 zram_generator::config[2595]: No configuration found. Dec 16 13:09:39.155369 update_engine[1752]: I20251216 13:09:39.153606 1752 update_attempter.cc:509] Updating boot flags... Dec 16 13:09:39.340181 systemd[1]: Reloading finished in 256 ms. Dec 16 13:09:39.421164 (kubelet)[2657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:09:39.426122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:39.431209 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:09:39.431427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:39.431490 systemd[1]: kubelet.service: Consumed 122ms CPU time, 99.6M memory peak. Dec 16 13:09:39.433520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:39.562954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:39.566769 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:09:39.602845 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:09:39.602845 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:09:39.602845 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:09:39.602845 kubelet[2668]: I1216 13:09:39.602820 2668 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:09:39.956088 kubelet[2668]: I1216 13:09:39.955990 2668 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:09:39.956088 kubelet[2668]: I1216 13:09:39.956045 2668 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:09:39.956659 kubelet[2668]: I1216 13:09:39.956610 2668 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:09:40.004512 kubelet[2668]: I1216 13:09:40.004439 2668 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:09:40.005253 kubelet[2668]: E1216 13:09:40.005226 2668 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.21.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.21.22:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:09:40.012087 kubelet[2668]: I1216 13:09:40.012062 2668 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:09:40.018830 kubelet[2668]: I1216 13:09:40.018780 2668 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:09:40.020089 kubelet[2668]: I1216 13:09:40.020023 2668 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:09:40.020233 kubelet[2668]: I1216 13:09:40.020069 2668 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-3-ab2e4a938e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:09:40.020336 kubelet[2668]: I1216 13:09:40.020236 2668 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:09:40.020336 kubelet[2668]: I1216 13:09:40.020244 2668 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:09:40.020393 kubelet[2668]: I1216 13:09:40.020362 2668 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:09:40.025731 kubelet[2668]: I1216 13:09:40.025695 2668 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:09:40.025731 kubelet[2668]: I1216 13:09:40.025726 2668 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:09:40.025845 kubelet[2668]: I1216 13:09:40.025749 2668 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:09:40.025845 kubelet[2668]: I1216 13:09:40.025761 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:09:40.027619 kubelet[2668]: W1216 13:09:40.027556 2668 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.21.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.21.22:6443: connect: connection refused Dec 16 13:09:40.027697 kubelet[2668]: E1216 13:09:40.027625 2668 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.21.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.21.22:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:09:40.027697 kubelet[2668]: W1216 
13:09:40.027516 2668 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.21.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-3-ab2e4a938e&limit=500&resourceVersion=0": dial tcp 10.0.21.22:6443: connect: connection refused Dec 16 13:09:40.027697 kubelet[2668]: E1216 13:09:40.027674 2668 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.21.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-3-ab2e4a938e&limit=500&resourceVersion=0\": dial tcp 10.0.21.22:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:09:40.029348 kubelet[2668]: I1216 13:09:40.029320 2668 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:09:40.029752 kubelet[2668]: I1216 13:09:40.029718 2668 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:09:40.030615 kubelet[2668]: W1216 13:09:40.030579 2668 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:09:40.032901 kubelet[2668]: I1216 13:09:40.032866 2668 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:09:40.032901 kubelet[2668]: I1216 13:09:40.032903 2668 server.go:1287] "Started kubelet" Dec 16 13:09:40.033517 kubelet[2668]: I1216 13:09:40.033083 2668 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:09:40.033517 kubelet[2668]: I1216 13:09:40.033083 2668 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:09:40.033517 kubelet[2668]: I1216 13:09:40.033448 2668 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:09:40.033982 kubelet[2668]: I1216 13:09:40.033950 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:09:40.034088 kubelet[2668]: E1216 13:09:40.034060 2668 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-3-ab2e4a938e\" not found" Dec 16 13:09:40.034088 kubelet[2668]: I1216 13:09:40.034077 2668 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:09:40.034775 kubelet[2668]: I1216 13:09:40.034262 2668 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:09:40.034775 kubelet[2668]: E1216 13:09:40.034284 2668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.21.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-3-ab2e4a938e?timeout=10s\": dial tcp 10.0.21.22:6443: connect: connection refused" interval="200ms" Dec 16 13:09:40.034775 kubelet[2668]: I1216 13:09:40.034362 2668 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:09:40.034775 kubelet[2668]: I1216 13:09:40.034586 2668 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:09:40.034775 kubelet[2668]: I1216 13:09:40.034612 2668 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:09:40.034775 kubelet[2668]: W1216 13:09:40.034646 2668 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.21.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.21.22:6443: connect: connection refused Dec 16 13:09:40.034775 kubelet[2668]: I1216 13:09:40.034667 2668 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:09:40.034775 kubelet[2668]: E1216 13:09:40.034709 2668 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.21.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.21.22:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:09:40.034775 kubelet[2668]: I1216 13:09:40.034757 2668 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:09:40.035647 kubelet[2668]: I1216 13:09:40.035600 2668 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:09:40.036569 kubelet[2668]: E1216 13:09:40.035694 2668 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:09:40.045193 kubelet[2668]: E1216 13:09:40.043566 2668 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.21.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.21.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-3-ab2e4a938e.1881b4205f118438 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-3-ab2e4a938e,UID:ci-4459-2-2-3-ab2e4a938e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-3-ab2e4a938e,},FirstTimestamp:2025-12-16 13:09:40.03288172 +0000 UTC m=+0.462922724,LastTimestamp:2025-12-16 13:09:40.03288172 +0000 UTC m=+0.462922724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-3-ab2e4a938e,}" Dec 16 13:09:40.050800 kubelet[2668]: I1216 13:09:40.050771 2668 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:09:40.050800 kubelet[2668]: I1216 13:09:40.050786 2668 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:09:40.050800 kubelet[2668]: I1216 13:09:40.050801 2668 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:09:40.052626 kubelet[2668]: I1216 13:09:40.052584 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:09:40.057495 kubelet[2668]: I1216 13:09:40.053543 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:09:40.057495 kubelet[2668]: I1216 13:09:40.053562 2668 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:09:40.057495 kubelet[2668]: I1216 13:09:40.053578 2668 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:09:40.057495 kubelet[2668]: I1216 13:09:40.053588 2668 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:09:40.057495 kubelet[2668]: E1216 13:09:40.053631 2668 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:09:40.057495 kubelet[2668]: W1216 13:09:40.053976 2668 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.21.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.21.22:6443: connect: connection refused Dec 16 13:09:40.057495 kubelet[2668]: E1216 13:09:40.054021 2668 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.21.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.21.22:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:09:40.059616 kubelet[2668]: I1216 13:09:40.059592 2668 policy_none.go:49] "None policy: Start" Dec 16 13:09:40.059616 kubelet[2668]: I1216 13:09:40.059616 2668 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:09:40.059684 kubelet[2668]: I1216 13:09:40.059628 2668 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:09:40.067698 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:09:40.077103 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:09:40.079558 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:09:40.094775 kubelet[2668]: I1216 13:09:40.094741 2668 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:09:40.094967 kubelet[2668]: I1216 13:09:40.094954 2668 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:09:40.095018 kubelet[2668]: I1216 13:09:40.094969 2668 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:09:40.095167 kubelet[2668]: I1216 13:09:40.095150 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:09:40.096482 kubelet[2668]: E1216 13:09:40.096465 2668 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:09:40.096533 kubelet[2668]: E1216 13:09:40.096515 2668 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-3-ab2e4a938e\" not found" Dec 16 13:09:40.163565 systemd[1]: Created slice kubepods-burstable-pod7f94bd97a60f70bc9a8dcdd5709180e8.slice - libcontainer container kubepods-burstable-pod7f94bd97a60f70bc9a8dcdd5709180e8.slice. Dec 16 13:09:40.183946 kubelet[2668]: E1216 13:09:40.183872 2668 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-3-ab2e4a938e\" not found" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.187748 systemd[1]: Created slice kubepods-burstable-podf8c37d2bcd1e5ed273ab52b8c47fa982.slice - libcontainer container kubepods-burstable-podf8c37d2bcd1e5ed273ab52b8c47fa982.slice. 
Dec 16 13:09:40.189129 kubelet[2668]: E1216 13:09:40.189092 2668 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-3-ab2e4a938e\" not found" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.190402 systemd[1]: Created slice kubepods-burstable-poda0905e2179bb422dab90ff3c3c18b569.slice - libcontainer container kubepods-burstable-poda0905e2179bb422dab90ff3c3c18b569.slice. Dec 16 13:09:40.191874 kubelet[2668]: E1216 13:09:40.191698 2668 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-3-ab2e4a938e\" not found" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.197346 kubelet[2668]: I1216 13:09:40.197322 2668 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.197681 kubelet[2668]: E1216 13:09:40.197660 2668 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.21.22:6443/api/v1/nodes\": dial tcp 10.0.21.22:6443: connect: connection refused" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.235837 kubelet[2668]: E1216 13:09:40.235470 2668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.21.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-3-ab2e4a938e?timeout=10s\": dial tcp 10.0.21.22:6443: connect: connection refused" interval="400ms" Dec 16 13:09:40.236741 kubelet[2668]: I1216 13:09:40.236623 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.236741 kubelet[2668]: I1216 13:09:40.236687 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.236741 kubelet[2668]: I1216 13:09:40.236733 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0905e2179bb422dab90ff3c3c18b569-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-3-ab2e4a938e\" (UID: \"a0905e2179bb422dab90ff3c3c18b569\") " pod="kube-system/kube-scheduler-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.236940 kubelet[2668]: I1216 13:09:40.236771 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f94bd97a60f70bc9a8dcdd5709180e8-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" (UID: \"7f94bd97a60f70bc9a8dcdd5709180e8\") " pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.236940 kubelet[2668]: I1216 13:09:40.236804 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f94bd97a60f70bc9a8dcdd5709180e8-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" (UID: \"7f94bd97a60f70bc9a8dcdd5709180e8\") " 
pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.236940 kubelet[2668]: I1216 13:09:40.236835 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.236940 kubelet[2668]: I1216 13:09:40.236884 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f94bd97a60f70bc9a8dcdd5709180e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" (UID: \"7f94bd97a60f70bc9a8dcdd5709180e8\") " pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.236940 kubelet[2668]: I1216 13:09:40.236918 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.237121 kubelet[2668]: I1216 13:09:40.236948 2668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.399774 kubelet[2668]: I1216 13:09:40.399735 2668 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.400513 kubelet[2668]: E1216 13:09:40.400066 2668 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.21.22:6443/api/v1/nodes\": dial tcp 10.0.21.22:6443: connect: connection refused" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:40.486595 containerd[1770]: time="2025-12-16T13:09:40.486303602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-3-ab2e4a938e,Uid:7f94bd97a60f70bc9a8dcdd5709180e8,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:40.490273 containerd[1770]: time="2025-12-16T13:09:40.490219555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-3-ab2e4a938e,Uid:f8c37d2bcd1e5ed273ab52b8c47fa982,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:40.493902 containerd[1770]: time="2025-12-16T13:09:40.493553669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-3-ab2e4a938e,Uid:a0905e2179bb422dab90ff3c3c18b569,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:40.539659 containerd[1770]: time="2025-12-16T13:09:40.539575900Z" level=info msg="connecting to shim b28a68add17447e87aa99bea3403663e44e84b81d3d54a8f31ebabb1a5043381" address="unix:///run/containerd/s/ccdbc10de4fc04a8e154ac198003b8e6cc2cc8760fa6502dcc750a60c59074aa" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:40.540987 containerd[1770]: time="2025-12-16T13:09:40.540911950Z" level=info msg="connecting to shim f01191a76136ba82ffef3e1f410892c3d8e4ec62d71960a1fff7d16965623bc6" 
address="unix:///run/containerd/s/c716739fe386538422c58ec87eec94f5d0afba7a112c060e8e49b6b9c5674f04" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:40.557645 containerd[1770]: time="2025-12-16T13:09:40.557567134Z" level=info msg="connecting to shim 674a59231f8dbe43933c902312ecb06776966539ae2e479eabe4c252f03ecfd0" address="unix:///run/containerd/s/f25bfd4b9355aee1d06d149a2db29c5ccfb8deedd8394d13e5a0c8adff67c34b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:40.583759 systemd[1]: Started cri-containerd-f01191a76136ba82ffef3e1f410892c3d8e4ec62d71960a1fff7d16965623bc6.scope - libcontainer container f01191a76136ba82ffef3e1f410892c3d8e4ec62d71960a1fff7d16965623bc6. Dec 16 13:09:40.590401 systemd[1]: Started cri-containerd-674a59231f8dbe43933c902312ecb06776966539ae2e479eabe4c252f03ecfd0.scope - libcontainer container 674a59231f8dbe43933c902312ecb06776966539ae2e479eabe4c252f03ecfd0. Dec 16 13:09:40.592380 systemd[1]: Started cri-containerd-b28a68add17447e87aa99bea3403663e44e84b81d3d54a8f31ebabb1a5043381.scope - libcontainer container b28a68add17447e87aa99bea3403663e44e84b81d3d54a8f31ebabb1a5043381. Dec 16 13:09:40.636471 kubelet[2668]: E1216 13:09:40.636422 2668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.21.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-3-ab2e4a938e?timeout=10s\": dial tcp 10.0.21.22:6443: connect: connection refused" interval="800ms" Dec 16 13:09:40.639449 containerd[1770]: time="2025-12-16T13:09:40.639416459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-3-ab2e4a938e,Uid:7f94bd97a60f70bc9a8dcdd5709180e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f01191a76136ba82ffef3e1f410892c3d8e4ec62d71960a1fff7d16965623bc6\"" Dec 16 13:09:40.641755 containerd[1770]: time="2025-12-16T13:09:40.641727887Z" level=info msg="CreateContainer within sandbox \"f01191a76136ba82ffef3e1f410892c3d8e4ec62d71960a1fff7d16965623bc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:09:40.642493 containerd[1770]: time="2025-12-16T13:09:40.642426442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-3-ab2e4a938e,Uid:a0905e2179bb422dab90ff3c3c18b569,Namespace:kube-system,Attempt:0,} returns sandbox id \"674a59231f8dbe43933c902312ecb06776966539ae2e479eabe4c252f03ecfd0\"" Dec 16 13:09:40.643985 containerd[1770]: time="2025-12-16T13:09:40.643960956Z" level=info msg="CreateContainer within sandbox \"674a59231f8dbe43933c902312ecb06776966539ae2e479eabe4c252f03ecfd0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:09:40.651731 containerd[1770]: time="2025-12-16T13:09:40.651706717Z" level=info msg="Container 1f676bc7e14bfeb2033f7d2a7466619d59c14fe3d71f131a9d389bf132c83611: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:40.657854 containerd[1770]: time="2025-12-16T13:09:40.657732923Z" level=info msg="Container 244a5e46684a7d34d9726574479578f3e1e572e239dc5cfbff3d645c5cc847cb: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:40.662200 containerd[1770]: time="2025-12-16T13:09:40.662161446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-3-ab2e4a938e,Uid:f8c37d2bcd1e5ed273ab52b8c47fa982,Namespace:kube-system,Attempt:0,} returns sandbox id \"b28a68add17447e87aa99bea3403663e44e84b81d3d54a8f31ebabb1a5043381\"" Dec 16 13:09:40.665158 containerd[1770]: time="2025-12-16T13:09:40.665121546Z" level=info msg="CreateContainer within 
sandbox \"b28a68add17447e87aa99bea3403663e44e84b81d3d54a8f31ebabb1a5043381\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:09:40.668045 containerd[1770]: time="2025-12-16T13:09:40.668018487Z" level=info msg="CreateContainer within sandbox \"f01191a76136ba82ffef3e1f410892c3d8e4ec62d71960a1fff7d16965623bc6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1f676bc7e14bfeb2033f7d2a7466619d59c14fe3d71f131a9d389bf132c83611\"" Dec 16 13:09:40.668608 containerd[1770]: time="2025-12-16T13:09:40.668583548Z" level=info msg="StartContainer for \"1f676bc7e14bfeb2033f7d2a7466619d59c14fe3d71f131a9d389bf132c83611\"" Dec 16 13:09:40.670243 containerd[1770]: time="2025-12-16T13:09:40.670219138Z" level=info msg="connecting to shim 1f676bc7e14bfeb2033f7d2a7466619d59c14fe3d71f131a9d389bf132c83611" address="unix:///run/containerd/s/c716739fe386538422c58ec87eec94f5d0afba7a112c060e8e49b6b9c5674f04" protocol=ttrpc version=3 Dec 16 13:09:40.671594 containerd[1770]: time="2025-12-16T13:09:40.671570960Z" level=info msg="CreateContainer within sandbox \"674a59231f8dbe43933c902312ecb06776966539ae2e479eabe4c252f03ecfd0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"244a5e46684a7d34d9726574479578f3e1e572e239dc5cfbff3d645c5cc847cb\"" Dec 16 13:09:40.672777 containerd[1770]: time="2025-12-16T13:09:40.672749704Z" level=info msg="StartContainer for \"244a5e46684a7d34d9726574479578f3e1e572e239dc5cfbff3d645c5cc847cb\"" Dec 16 13:09:40.673554 containerd[1770]: time="2025-12-16T13:09:40.673503259Z" level=info msg="connecting to shim 244a5e46684a7d34d9726574479578f3e1e572e239dc5cfbff3d645c5cc847cb" address="unix:///run/containerd/s/f25bfd4b9355aee1d06d149a2db29c5ccfb8deedd8394d13e5a0c8adff67c34b" protocol=ttrpc version=3 Dec 16 13:09:40.679840 containerd[1770]: time="2025-12-16T13:09:40.679807493Z" level=info msg="Container c2d04fc8a3ca9945d0b52ebd1646f70dfc97c7d5b5dc2799f4909fb83413b8bf: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:40.689148 containerd[1770]: time="2025-12-16T13:09:40.689104790Z" level=info msg="CreateContainer within sandbox \"b28a68add17447e87aa99bea3403663e44e84b81d3d54a8f31ebabb1a5043381\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c2d04fc8a3ca9945d0b52ebd1646f70dfc97c7d5b5dc2799f4909fb83413b8bf\"" Dec 16 13:09:40.689665 containerd[1770]: time="2025-12-16T13:09:40.689639844Z" level=info msg="StartContainer for \"c2d04fc8a3ca9945d0b52ebd1646f70dfc97c7d5b5dc2799f4909fb83413b8bf\"" Dec 16 13:09:40.690714 containerd[1770]: time="2025-12-16T13:09:40.690682519Z" level=info msg="connecting to shim c2d04fc8a3ca9945d0b52ebd1646f70dfc97c7d5b5dc2799f4909fb83413b8bf" address="unix:///run/containerd/s/ccdbc10de4fc04a8e154ac198003b8e6cc2cc8760fa6502dcc750a60c59074aa" protocol=ttrpc version=3 Dec 16 13:09:40.700719 systemd[1]: Started cri-containerd-1f676bc7e14bfeb2033f7d2a7466619d59c14fe3d71f131a9d389bf132c83611.scope - libcontainer container 1f676bc7e14bfeb2033f7d2a7466619d59c14fe3d71f131a9d389bf132c83611. Dec 16 13:09:40.701945 systemd[1]: Started cri-containerd-244a5e46684a7d34d9726574479578f3e1e572e239dc5cfbff3d645c5cc847cb.scope - libcontainer container 244a5e46684a7d34d9726574479578f3e1e572e239dc5cfbff3d645c5cc847cb. Dec 16 13:09:40.704562 systemd[1]: Started cri-containerd-c2d04fc8a3ca9945d0b52ebd1646f70dfc97c7d5b5dc2799f4909fb83413b8bf.scope - libcontainer container c2d04fc8a3ca9945d0b52ebd1646f70dfc97c7d5b5dc2799f4909fb83413b8bf. 
Dec 16 13:09:40.752100 containerd[1770]: time="2025-12-16T13:09:40.751915253Z" level=info msg="StartContainer for \"1f676bc7e14bfeb2033f7d2a7466619d59c14fe3d71f131a9d389bf132c83611\" returns successfully" Dec 16 13:09:40.754745 containerd[1770]: time="2025-12-16T13:09:40.754718651Z" level=info msg="StartContainer for \"244a5e46684a7d34d9726574479578f3e1e572e239dc5cfbff3d645c5cc847cb\" returns successfully" Dec 16 13:09:40.760442 containerd[1770]: time="2025-12-16T13:09:40.760360071Z" level=info msg="StartContainer for \"c2d04fc8a3ca9945d0b52ebd1646f70dfc97c7d5b5dc2799f4909fb83413b8bf\" returns successfully" Dec 16 13:09:40.801831 kubelet[2668]: I1216 13:09:40.801789 2668 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:41.063203 kubelet[2668]: E1216 13:09:41.063016 2668 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-3-ab2e4a938e\" not found" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:41.066383 kubelet[2668]: E1216 13:09:41.066336 2668 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-3-ab2e4a938e\" not found" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:41.070217 kubelet[2668]: E1216 13:09:41.070128 2668 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-3-ab2e4a938e\" not found" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:41.893220 kubelet[2668]: E1216 13:09:41.893175 2668 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-3-ab2e4a938e\" not found" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:41.996035 kubelet[2668]: I1216 13:09:41.995958 2668 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.027065 kubelet[2668]: I1216 13:09:42.027009 2668 apiserver.go:52] "Watching apiserver" Dec 16 13:09:42.035236 kubelet[2668]: I1216 13:09:42.035128 2668 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.035494 kubelet[2668]: I1216 13:09:42.035148 2668 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:09:42.042408 kubelet[2668]: E1216 13:09:42.042340 2668 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.042408 kubelet[2668]: I1216 13:09:42.042372 2668 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.044441 kubelet[2668]: E1216 13:09:42.044350 2668 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.044441 kubelet[2668]: I1216 13:09:42.044373 2668 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.046079 kubelet[2668]: E1216 13:09:42.046041 2668 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-3-ab2e4a938e\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.070485 kubelet[2668]: I1216 13:09:42.070463 2668 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.070799 kubelet[2668]: I1216 13:09:42.070611 2668 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.072568 kubelet[2668]: E1216 13:09:42.072510 2668 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:42.072769 kubelet[2668]: E1216 13:09:42.072696 2668 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-3-ab2e4a938e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:43.666248 kubelet[2668]: I1216 13:09:43.666202 2668 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.031626 systemd[1]: Reload requested from client PID 2948 ('systemctl') (unit session-9.scope)... Dec 16 13:09:44.031641 systemd[1]: Reloading... Dec 16 13:09:44.080580 zram_generator::config[2991]: No configuration found. Dec 16 13:09:44.282879 systemd[1]: Reloading finished in 250 ms. Dec 16 13:09:44.311491 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:44.326039 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:09:44.326309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:44.326367 systemd[1]: kubelet.service: Consumed 902ms CPU time, 135.5M memory peak. Dec 16 13:09:44.328949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:09:44.500370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:09:44.505748 (kubelet)[3042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:09:44.550676 kubelet[3042]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:09:44.550676 kubelet[3042]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:09:44.550676 kubelet[3042]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:09:44.550676 kubelet[3042]: I1216 13:09:44.550054 3042 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:09:44.558388 kubelet[3042]: I1216 13:09:44.558331 3042 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:09:44.558388 kubelet[3042]: I1216 13:09:44.558362 3042 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:09:44.558693 kubelet[3042]: I1216 13:09:44.558674 3042 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:09:44.559876 kubelet[3042]: I1216 13:09:44.559849 3042 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 13:09:44.561955 kubelet[3042]: I1216 13:09:44.561909 3042 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:09:44.565829 kubelet[3042]: I1216 13:09:44.565664 3042 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:09:44.573674 kubelet[3042]: I1216 13:09:44.573619 3042 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 13:09:44.573827 kubelet[3042]: I1216 13:09:44.573801 3042 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:09:44.573999 kubelet[3042]: I1216 13:09:44.573826 3042 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-3-ab2e4a938e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:09:44.574112 kubelet[3042]: I1216 13:09:44.574002 3042 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:09:44.574112 kubelet[3042]: I1216 13:09:44.574012 3042 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:09:44.574112 kubelet[3042]: I1216 13:09:44.574061 3042 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:09:44.574233 
kubelet[3042]: I1216 13:09:44.574215 3042 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:09:44.574263 kubelet[3042]: I1216 13:09:44.574254 3042 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:09:44.574285 kubelet[3042]: I1216 13:09:44.574276 3042 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:09:44.574305 kubelet[3042]: I1216 13:09:44.574290 3042 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:09:44.575306 kubelet[3042]: I1216 13:09:44.575286 3042 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:09:44.575675 kubelet[3042]: I1216 13:09:44.575661 3042 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:09:44.576078 kubelet[3042]: I1216 13:09:44.576051 3042 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:09:44.576078 kubelet[3042]: I1216 13:09:44.576083 3042 server.go:1287] "Started kubelet" Dec 16 13:09:44.576257 kubelet[3042]: I1216 13:09:44.576220 3042 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:09:44.576614 kubelet[3042]: I1216 13:09:44.576214 3042 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:09:44.577253 kubelet[3042]: I1216 13:09:44.577237 3042 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:09:44.577584 kubelet[3042]: I1216 13:09:44.577559 3042 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:09:44.577584 kubelet[3042]: I1216 13:09:44.577576 3042 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:09:44.577752 kubelet[3042]: I1216 13:09:44.577619 3042 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:09:44.577752 kubelet[3042]: E1216 13:09:44.577645 3042 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-3-ab2e4a938e\" not found" Dec 16 13:09:44.578777 kubelet[3042]: I1216 13:09:44.578755 3042 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:09:44.580884 kubelet[3042]: I1216 13:09:44.580862 3042 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:09:44.581344 kubelet[3042]: I1216 13:09:44.581327 3042 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:09:44.589565 kubelet[3042]: E1216 13:09:44.589228 3042 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:09:44.590207 kubelet[3042]: I1216 13:09:44.590120 3042 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:09:44.590207 kubelet[3042]: I1216 13:09:44.590147 3042 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:09:44.590292 kubelet[3042]: I1216 13:09:44.590234 3042 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:09:44.594286 kubelet[3042]: I1216 13:09:44.594251 3042 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 16 13:09:44.595533 kubelet[3042]: I1216 13:09:44.595500 3042 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:09:44.595583 kubelet[3042]: I1216 13:09:44.595549 3042 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:09:44.595583 kubelet[3042]: I1216 13:09:44.595568 3042 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:09:44.595583 kubelet[3042]: I1216 13:09:44.595575 3042 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:09:44.595662 kubelet[3042]: E1216 13:09:44.595622 3042 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:09:44.618702 kubelet[3042]: I1216 13:09:44.618671 3042 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:09:44.618702 kubelet[3042]: I1216 13:09:44.618691 3042 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:09:44.618702 kubelet[3042]: I1216 13:09:44.618711 3042 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:09:44.618883 kubelet[3042]: I1216 13:09:44.618868 3042 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:09:44.618908 kubelet[3042]: I1216 13:09:44.618882 3042 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:09:44.618908 kubelet[3042]: I1216 13:09:44.618900 3042 policy_none.go:49] "None policy: Start" Dec 16 13:09:44.618946 kubelet[3042]: I1216 13:09:44.618909 3042 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:09:44.618946 kubelet[3042]: I1216 13:09:44.618920 3042 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:09:44.619023 kubelet[3042]: I1216 13:09:44.619012 3042 state_mem.go:75] "Updated machine memory state" Dec 16 13:09:44.622952 kubelet[3042]: I1216 13:09:44.622628 3042 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:09:44.622952 kubelet[3042]: I1216 13:09:44.622795 3042 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:09:44.622952 kubelet[3042]: I1216 13:09:44.622806 3042 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:09:44.623210 kubelet[3042]: I1216 13:09:44.623194 3042 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:09:44.624379 kubelet[3042]: E1216 13:09:44.624349 3042 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:09:44.696465 kubelet[3042]: I1216 13:09:44.696428 3042 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.696649 kubelet[3042]: I1216 13:09:44.696437 3042 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.696649 kubelet[3042]: I1216 13:09:44.696553 3042 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.704468 kubelet[3042]: E1216 13:09:44.704435 3042 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.726221 kubelet[3042]: I1216 13:09:44.726192 3042 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.733651 kubelet[3042]: I1216 13:09:44.733620 3042 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.733781 kubelet[3042]: I1216 13:09:44.733698 3042 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.782416 kubelet[3042]: I1216 13:09:44.782350 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.782416 kubelet[3042]: I1216 13:09:44.782403 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.782416 kubelet[3042]: I1216 13:09:44.782425 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0905e2179bb422dab90ff3c3c18b569-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-3-ab2e4a938e\" (UID: \"a0905e2179bb422dab90ff3c3c18b569\") " pod="kube-system/kube-scheduler-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.782662 kubelet[3042]: I1216 13:09:44.782451 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f94bd97a60f70bc9a8dcdd5709180e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" (UID: \"7f94bd97a60f70bc9a8dcdd5709180e8\") " pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.782662 kubelet[3042]: I1216 13:09:44.782468 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f94bd97a60f70bc9a8dcdd5709180e8-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" (UID: \"7f94bd97a60f70bc9a8dcdd5709180e8\") " pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.782662 kubelet[3042]: I1216 13:09:44.782483 3042 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.782662 kubelet[3042]: I1216 13:09:44.782498 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.782662 kubelet[3042]: I1216 13:09:44.782555 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8c37d2bcd1e5ed273ab52b8c47fa982-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-3-ab2e4a938e\" (UID: \"f8c37d2bcd1e5ed273ab52b8c47fa982\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:44.782770 kubelet[3042]: I1216 13:09:44.782573 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f94bd97a60f70bc9a8dcdd5709180e8-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" (UID: \"7f94bd97a60f70bc9a8dcdd5709180e8\") " pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:45.026007 sudo[3080]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 13:09:45.026922 sudo[3080]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 13:09:45.397753 sudo[3080]: pam_unix(sudo:session): session closed for user root Dec 16 13:09:45.575145 kubelet[3042]: I1216 13:09:45.574765 3042 apiserver.go:52] "Watching apiserver" Dec 16 13:09:45.579624 kubelet[3042]: I1216 13:09:45.579589 3042 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:09:45.603826 kubelet[3042]: I1216 13:09:45.603797 3042 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:45.609960 kubelet[3042]: E1216 13:09:45.609932 3042 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-3-ab2e4a938e\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" Dec 16 13:09:45.623247 kubelet[3042]: I1216 13:09:45.623198 3042 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-3-ab2e4a938e" podStartSLOduration=1.623183369 podStartE2EDuration="1.623183369s" podCreationTimestamp="2025-12-16 13:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:45.621622456 +0000 UTC m=+1.112110048" watchObservedRunningTime="2025-12-16 13:09:45.623183369 +0000 UTC m=+1.113670945" Dec 16 13:09:45.649555 kubelet[3042]: I1216 13:09:45.648808 3042 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-3-ab2e4a938e" podStartSLOduration=2.648762886 podStartE2EDuration="2.648762886s" podCreationTimestamp="2025-12-16 13:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:45.648728876 +0000 UTC m=+1.139216459" watchObservedRunningTime="2025-12-16 13:09:45.648762886 +0000 UTC m=+1.139250454" Dec 16 13:09:45.649555 kubelet[3042]: I1216 13:09:45.648902 3042 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-3-ab2e4a938e" podStartSLOduration=1.648896667 podStartE2EDuration="1.648896667s" podCreationTimestamp="2025-12-16 13:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:45.634580265 +0000 UTC m=+1.125067850" watchObservedRunningTime="2025-12-16 13:09:45.648896667 +0000 UTC m=+1.139384252" Dec 16 13:09:47.164671 sudo[2059]: pam_unix(sudo:session): session closed for user root Dec 16 13:09:47.321175 sshd[2058]: Connection closed by 147.75.109.163 port 52472 Dec 16 13:09:47.321805 sshd-session[2055]: pam_unix(sshd:session): session closed for user core Dec 16 13:09:47.326234 systemd[1]: sshd@8-10.0.21.22:22-147.75.109.163:52472.service: Deactivated successfully. Dec 16 13:09:47.331091 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:09:47.331677 systemd[1]: session-9.scope: Consumed 4.973s CPU time, 272.5M memory peak. Dec 16 13:09:47.336577 systemd-logind[1751]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:09:47.337966 systemd-logind[1751]: Removed session 9. Dec 16 13:09:48.795280 kubelet[3042]: I1216 13:09:48.795247 3042 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:09:48.796040 containerd[1770]: time="2025-12-16T13:09:48.796007687Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:09:48.796288 kubelet[3042]: I1216 13:09:48.796244 3042 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:09:49.495998 systemd[1]: Created slice kubepods-besteffort-pod57c53fb3_0770_4115_af4b_271c411d8f2d.slice - libcontainer container kubepods-besteffort-pod57c53fb3_0770_4115_af4b_271c411d8f2d.slice. 
Dec 16 13:09:49.513773 kubelet[3042]: I1216 13:09:49.513714 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-run\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.513773 kubelet[3042]: I1216 13:09:49.513751 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57c53fb3-0770-4115-af4b-271c411d8f2d-xtables-lock\") pod \"kube-proxy-cg9gz\" (UID: \"57c53fb3-0770-4115-af4b-271c411d8f2d\") " pod="kube-system/kube-proxy-cg9gz" Dec 16 13:09:49.513773 kubelet[3042]: I1216 13:09:49.513768 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cni-path\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.513773 kubelet[3042]: I1216 13:09:49.513781 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-hubble-tls\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.513972 kubelet[3042]: I1216 13:09:49.513797 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-host-proc-sys-kernel\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.513972 kubelet[3042]: I1216 13:09:49.513820 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-cgroup\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.513972 kubelet[3042]: I1216 13:09:49.513833 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-etc-cni-netd\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.513972 kubelet[3042]: I1216 13:09:49.513846 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-host-proc-sys-net\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.513972 kubelet[3042]: I1216 13:09:49.513866 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmp6n\" (UniqueName: \"kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-kube-api-access-zmp6n\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.514124 kubelet[3042]: I1216 13:09:49.513881 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-config-path\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.514124 kubelet[3042]: I1216 13:09:49.513896 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbg5p\" (UniqueName: \"kubernetes.io/projected/57c53fb3-0770-4115-af4b-271c411d8f2d-kube-api-access-dbg5p\") pod \"kube-proxy-cg9gz\" (UID: \"57c53fb3-0770-4115-af4b-271c411d8f2d\") " pod="kube-system/kube-proxy-cg9gz" Dec 16 13:09:49.514124 kubelet[3042]: I1216 13:09:49.513931 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57c53fb3-0770-4115-af4b-271c411d8f2d-lib-modules\") pod \"kube-proxy-cg9gz\" (UID: \"57c53fb3-0770-4115-af4b-271c411d8f2d\") " pod="kube-system/kube-proxy-cg9gz" Dec 16 13:09:49.514124 kubelet[3042]: I1216 13:09:49.514009 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-xtables-lock\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.514124 kubelet[3042]: I1216 13:09:49.514074 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-bpf-maps\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.514124 kubelet[3042]: I1216 13:09:49.514114 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-lib-modules\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.514250 kubelet[3042]: I1216 13:09:49.514153 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb0b880f-daf0-41ea-87f4-0c02499c98ed-clustermesh-secrets\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.514250 kubelet[3042]: I1216 13:09:49.514196 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57c53fb3-0770-4115-af4b-271c411d8f2d-kube-proxy\") pod \"kube-proxy-cg9gz\" (UID: \"57c53fb3-0770-4115-af4b-271c411d8f2d\") " pod="kube-system/kube-proxy-cg9gz" Dec 16 13:09:49.514250 kubelet[3042]: I1216 13:09:49.514230 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-hostproc\") pod \"cilium-b68n4\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " pod="kube-system/cilium-b68n4" Dec 16 13:09:49.516700 systemd[1]: Created slice kubepods-burstable-podfb0b880f_daf0_41ea_87f4_0c02499c98ed.slice - libcontainer container kubepods-burstable-podfb0b880f_daf0_41ea_87f4_0c02499c98ed.slice. 
Dec 16 13:09:49.621319 kubelet[3042]: E1216 13:09:49.621281 3042 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:09:49.621319 kubelet[3042]: E1216 13:09:49.621311 3042 projected.go:194] Error preparing data for projected volume kube-api-access-zmp6n for pod kube-system/cilium-b68n4: configmap "kube-root-ca.crt" not found Dec 16 13:09:49.621584 kubelet[3042]: E1216 13:09:49.621371 3042 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-kube-api-access-zmp6n podName:fb0b880f-daf0-41ea-87f4-0c02499c98ed nodeName:}" failed. No retries permitted until 2025-12-16 13:09:50.121350992 +0000 UTC m=+5.611838562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zmp6n" (UniqueName: "kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-kube-api-access-zmp6n") pod "cilium-b68n4" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed") : configmap "kube-root-ca.crt" not found Dec 16 13:09:49.622715 kubelet[3042]: E1216 13:09:49.622679 3042 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 13:09:49.622715 kubelet[3042]: E1216 13:09:49.622716 3042 projected.go:194] Error preparing data for projected volume kube-api-access-dbg5p for pod kube-system/kube-proxy-cg9gz: configmap "kube-root-ca.crt" not found Dec 16 13:09:49.623434 kubelet[3042]: E1216 13:09:49.622771 3042 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/57c53fb3-0770-4115-af4b-271c411d8f2d-kube-api-access-dbg5p podName:57c53fb3-0770-4115-af4b-271c411d8f2d nodeName:}" failed. No retries permitted until 2025-12-16 13:09:50.122748028 +0000 UTC m=+5.613235611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dbg5p" (UniqueName: "kubernetes.io/projected/57c53fb3-0770-4115-af4b-271c411d8f2d-kube-api-access-dbg5p") pod "kube-proxy-cg9gz" (UID: "57c53fb3-0770-4115-af4b-271c411d8f2d") : configmap "kube-root-ca.crt" not found Dec 16 13:09:49.895727 systemd[1]: Created slice kubepods-besteffort-pod61722c5a_7be5_40ef_a95b_4c8300fb4e98.slice - libcontainer container kubepods-besteffort-pod61722c5a_7be5_40ef_a95b_4c8300fb4e98.slice. 
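The projected-volume failures above are retried after a delay ("durationBeforeRetry 500ms") until the kube-root-ca.crt ConfigMap appears. A rough Go sketch of that retry-with-growing-delay shape, using a hypothetical mountVolume stub rather than the real volume manager:

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountVolume stands in for the projected-volume SetUp call that fails above
// while the kube-root-ca.crt ConfigMap does not exist yet (hypothetical stub).
func mountVolume(attempt int) error {
	if attempt < 3 {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}
	return nil
}

func main() {
	// The log shows an initial 500ms durationBeforeRetry; the delay typically
	// grows between attempts. This loop mimics that shape, not the real code.
	delay := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := mountVolume(attempt)
		if err == nil {
			fmt.Println("volume mounted on attempt", attempt)
			return
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
}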
Dec 16 13:09:49.917550 kubelet[3042]: I1216 13:09:49.917492 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5xwx\" (UniqueName: \"kubernetes.io/projected/61722c5a-7be5-40ef-a95b-4c8300fb4e98-kube-api-access-l5xwx\") pod \"cilium-operator-6c4d7847fc-nw2vq\" (UID: \"61722c5a-7be5-40ef-a95b-4c8300fb4e98\") " pod="kube-system/cilium-operator-6c4d7847fc-nw2vq" Dec 16 13:09:49.917550 kubelet[3042]: I1216 13:09:49.917541 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61722c5a-7be5-40ef-a95b-4c8300fb4e98-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nw2vq\" (UID: \"61722c5a-7be5-40ef-a95b-4c8300fb4e98\") " pod="kube-system/cilium-operator-6c4d7847fc-nw2vq" Dec 16 13:09:50.200758 containerd[1770]: time="2025-12-16T13:09:50.200567544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nw2vq,Uid:61722c5a-7be5-40ef-a95b-4c8300fb4e98,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:50.239255 containerd[1770]: time="2025-12-16T13:09:50.239054562Z" level=info msg="connecting to shim ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886" address="unix:///run/containerd/s/518179c46f88a188a6b24f30a2d8e5b618794ec812845349403730fbe8d631b2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:50.277841 systemd[1]: Started cri-containerd-ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886.scope - libcontainer container ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886. Dec 16 13:09:50.321305 containerd[1770]: time="2025-12-16T13:09:50.321255934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nw2vq,Uid:61722c5a-7be5-40ef-a95b-4c8300fb4e98,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886\"" Dec 16 13:09:50.322871 containerd[1770]: time="2025-12-16T13:09:50.322794852Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 13:09:50.414502 containerd[1770]: time="2025-12-16T13:09:50.414432776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cg9gz,Uid:57c53fb3-0770-4115-af4b-271c411d8f2d,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:50.421076 containerd[1770]: time="2025-12-16T13:09:50.421032171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b68n4,Uid:fb0b880f-daf0-41ea-87f4-0c02499c98ed,Namespace:kube-system,Attempt:0,}" Dec 16 13:09:50.447682 containerd[1770]: time="2025-12-16T13:09:50.447639517Z" level=info msg="connecting to shim 9848499725c87cc2dd747182f0d78d098bf2a2beba1d3dd2062a2ba0f926e83e" address="unix:///run/containerd/s/ef5375424f88d2b82868cecdf8f92f049b9b2740ec9e92d704b665729c59940c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:50.451733 containerd[1770]: time="2025-12-16T13:09:50.451640242Z" level=info msg="connecting to shim 8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35" address="unix:///run/containerd/s/df7fa2dc5f22967fb11d7565ad26347ec9e794d1f734e297db1ec3abd34b64af" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:09:50.475802 systemd[1]: Started cri-containerd-9848499725c87cc2dd747182f0d78d098bf2a2beba1d3dd2062a2ba0f926e83e.scope - libcontainer container 9848499725c87cc2dd747182f0d78d098bf2a2beba1d3dd2062a2ba0f926e83e. 
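Each "connecting to shim" entry above names a unix socket under /run/containerd/s/. At the transport level that is just a Unix-domain dial; the sketch below uses a hypothetical socket path and does not implement the ttrpc protocol containerd actually speaks over that socket:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Shim addresses in the log look like unix:///run/containerd/s/<hash>.
	const socketPath = "/run/containerd/s/example" // hypothetical path
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected outside a containerd host):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", conn.RemoteAddr())
}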
Dec 16 13:09:50.479412 systemd[1]: Started cri-containerd-8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35.scope - libcontainer container 8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35. Dec 16 13:09:50.504441 containerd[1770]: time="2025-12-16T13:09:50.504395030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cg9gz,Uid:57c53fb3-0770-4115-af4b-271c411d8f2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9848499725c87cc2dd747182f0d78d098bf2a2beba1d3dd2062a2ba0f926e83e\"" Dec 16 13:09:50.507424 containerd[1770]: time="2025-12-16T13:09:50.507364469Z" level=info msg="CreateContainer within sandbox \"9848499725c87cc2dd747182f0d78d098bf2a2beba1d3dd2062a2ba0f926e83e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:09:50.508278 containerd[1770]: time="2025-12-16T13:09:50.508235911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b68n4,Uid:fb0b880f-daf0-41ea-87f4-0c02499c98ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\"" Dec 16 13:09:50.519622 containerd[1770]: time="2025-12-16T13:09:50.519545486Z" level=info msg="Container 5f23453c4753efe3df5d70f6cdae46ed1825e399226a07c04d4fdce345146f31: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:50.530360 containerd[1770]: time="2025-12-16T13:09:50.530293097Z" level=info msg="CreateContainer within sandbox \"9848499725c87cc2dd747182f0d78d098bf2a2beba1d3dd2062a2ba0f926e83e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f23453c4753efe3df5d70f6cdae46ed1825e399226a07c04d4fdce345146f31\"" Dec 16 13:09:50.531020 containerd[1770]: time="2025-12-16T13:09:50.530977665Z" level=info msg="StartContainer for \"5f23453c4753efe3df5d70f6cdae46ed1825e399226a07c04d4fdce345146f31\"" Dec 16 13:09:50.532247 containerd[1770]: time="2025-12-16T13:09:50.532210159Z" level=info msg="connecting to shim 5f23453c4753efe3df5d70f6cdae46ed1825e399226a07c04d4fdce345146f31" address="unix:///run/containerd/s/ef5375424f88d2b82868cecdf8f92f049b9b2740ec9e92d704b665729c59940c" protocol=ttrpc version=3 Dec 16 13:09:50.553807 systemd[1]: Started cri-containerd-5f23453c4753efe3df5d70f6cdae46ed1825e399226a07c04d4fdce345146f31.scope - libcontainer container 5f23453c4753efe3df5d70f6cdae46ed1825e399226a07c04d4fdce345146f31. Dec 16 13:09:50.659164 containerd[1770]: time="2025-12-16T13:09:50.659116282Z" level=info msg="StartContainer for \"5f23453c4753efe3df5d70f6cdae46ed1825e399226a07c04d4fdce345146f31\" returns successfully" Dec 16 13:09:51.630843 kubelet[3042]: I1216 13:09:51.630769 3042 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cg9gz" podStartSLOduration=2.630748642 podStartE2EDuration="2.630748642s" podCreationTimestamp="2025-12-16 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:09:51.629516248 +0000 UTC m=+7.120003832" watchObservedRunningTime="2025-12-16 13:09:51.630748642 +0000 UTC m=+7.121236226" Dec 16 13:09:51.996460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389925234.mount: Deactivated successfully. 
Dec 16 13:09:52.400250 containerd[1770]: time="2025-12-16T13:09:52.400200058Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:52.401547 containerd[1770]: time="2025-12-16T13:09:52.401503292Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 13:09:52.403102 containerd[1770]: time="2025-12-16T13:09:52.403067845Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:52.404128 containerd[1770]: time="2025-12-16T13:09:52.404093655Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.081268025s" Dec 16 13:09:52.404128 containerd[1770]: time="2025-12-16T13:09:52.404122816Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:09:52.405110 containerd[1770]: time="2025-12-16T13:09:52.404848295Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 13:09:52.405717 containerd[1770]: time="2025-12-16T13:09:52.405687682Z" level=info msg="CreateContainer within sandbox \"ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:09:52.415853 containerd[1770]: time="2025-12-16T13:09:52.415824555Z" level=info msg="Container caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:52.422924 containerd[1770]: time="2025-12-16T13:09:52.422855364Z" level=info msg="CreateContainer within sandbox \"ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\"" Dec 16 13:09:52.423271 containerd[1770]: time="2025-12-16T13:09:52.423252578Z" level=info msg="StartContainer for \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\"" Dec 16 13:09:52.423904 containerd[1770]: time="2025-12-16T13:09:52.423885782Z" level=info msg="connecting to shim caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861" address="unix:///run/containerd/s/518179c46f88a188a6b24f30a2d8e5b618794ec812845349403730fbe8d631b2" protocol=ttrpc version=3 Dec 16 13:09:52.449708 systemd[1]: Started cri-containerd-caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861.scope - libcontainer container caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861. 
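From the figures logged above (about 18.9 MB read, pull completed in 2.081268025s) a rough throughput for the operator image pull can be derived. A small stdlib calculation, assuming the reported byte count is the transferred size:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the pull of quay.io/cilium/operator-generic above:
	// 18904197 bytes read and a reported pull time of 2.081268025s.
	const bytesRead = 18904197
	elapsed, err := time.ParseDuration("2.081268025s")
	if err != nil {
		panic(err)
	}
	mibPerSec := float64(bytesRead) / (1 << 20) / elapsed.Seconds()
	fmt.Printf("approx. pull throughput: %.1f MiB/s\n", mibPerSec)
}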
Dec 16 13:09:52.488312 containerd[1770]: time="2025-12-16T13:09:52.488271616Z" level=info msg="StartContainer for \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\" returns successfully" Dec 16 13:09:52.636772 kubelet[3042]: I1216 13:09:52.636701 3042 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nw2vq" podStartSLOduration=1.554329306 podStartE2EDuration="3.636679392s" podCreationTimestamp="2025-12-16 13:09:49 +0000 UTC" firstStartedPulling="2025-12-16 13:09:50.322400316 +0000 UTC m=+5.812887878" lastFinishedPulling="2025-12-16 13:09:52.404750401 +0000 UTC m=+7.895237964" observedRunningTime="2025-12-16 13:09:52.636431292 +0000 UTC m=+8.126918881" watchObservedRunningTime="2025-12-16 13:09:52.636679392 +0000 UTC m=+8.127167005" Dec 16 13:09:56.780852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1446636022.mount: Deactivated successfully. Dec 16 13:09:58.107757 containerd[1770]: time="2025-12-16T13:09:58.107682930Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:58.110316 containerd[1770]: time="2025-12-16T13:09:58.110270020Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 13:09:58.112314 containerd[1770]: time="2025-12-16T13:09:58.112269812Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:09:58.114803 containerd[1770]: time="2025-12-16T13:09:58.114750623Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.709870764s" Dec 16 13:09:58.114803 containerd[1770]: time="2025-12-16T13:09:58.114800334Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 13:09:58.118333 containerd[1770]: time="2025-12-16T13:09:58.118295014Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:09:58.131000 containerd[1770]: time="2025-12-16T13:09:58.130930536Z" level=info msg="Container fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:58.140602 containerd[1770]: time="2025-12-16T13:09:58.140499918Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\"" Dec 16 13:09:58.141188 containerd[1770]: time="2025-12-16T13:09:58.141146327Z" level=info msg="StartContainer for \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\"" Dec 16 13:09:58.142291 containerd[1770]: 
time="2025-12-16T13:09:58.142252512Z" level=info msg="connecting to shim fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30" address="unix:///run/containerd/s/df7fa2dc5f22967fb11d7565ad26347ec9e794d1f734e297db1ec3abd34b64af" protocol=ttrpc version=3 Dec 16 13:09:58.167744 systemd[1]: Started cri-containerd-fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30.scope - libcontainer container fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30. Dec 16 13:09:58.200908 containerd[1770]: time="2025-12-16T13:09:58.200853119Z" level=info msg="StartContainer for \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\" returns successfully" Dec 16 13:09:58.206060 systemd[1]: cri-containerd-fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30.scope: Deactivated successfully. Dec 16 13:09:58.207729 containerd[1770]: time="2025-12-16T13:09:58.207691998Z" level=info msg="received container exit event container_id:\"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\" id:\"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\" pid:3536 exited_at:{seconds:1765890598 nanos:207247615}" Dec 16 13:09:58.231282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30-rootfs.mount: Deactivated successfully. Dec 16 13:09:58.639350 containerd[1770]: time="2025-12-16T13:09:58.639275385Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:09:58.655235 containerd[1770]: time="2025-12-16T13:09:58.655118662Z" level=info msg="Container bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:58.666808 containerd[1770]: time="2025-12-16T13:09:58.666729968Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\"" Dec 16 13:09:58.667401 containerd[1770]: time="2025-12-16T13:09:58.667368161Z" level=info msg="StartContainer for \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\"" Dec 16 13:09:58.668780 containerd[1770]: time="2025-12-16T13:09:58.668746916Z" level=info msg="connecting to shim bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f" address="unix:///run/containerd/s/df7fa2dc5f22967fb11d7565ad26347ec9e794d1f734e297db1ec3abd34b64af" protocol=ttrpc version=3 Dec 16 13:09:58.699756 systemd[1]: Started cri-containerd-bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f.scope - libcontainer container bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f. Dec 16 13:09:58.739101 containerd[1770]: time="2025-12-16T13:09:58.738943081Z" level=info msg="StartContainer for \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\" returns successfully" Dec 16 13:09:58.753571 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:09:58.753836 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:09:58.754061 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:09:58.755664 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 16 13:09:58.756803 systemd[1]: cri-containerd-bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f.scope: Deactivated successfully. Dec 16 13:09:58.757796 containerd[1770]: time="2025-12-16T13:09:58.757742939Z" level=info msg="received container exit event container_id:\"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\" id:\"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\" pid:3583 exited_at:{seconds:1765890598 nanos:757455740}" Dec 16 13:09:58.787614 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:09:59.646081 containerd[1770]: time="2025-12-16T13:09:59.645996737Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:09:59.670728 containerd[1770]: time="2025-12-16T13:09:59.670645250Z" level=info msg="Container e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:09:59.689029 containerd[1770]: time="2025-12-16T13:09:59.688933972Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\"" Dec 16 13:09:59.689887 containerd[1770]: time="2025-12-16T13:09:59.689809799Z" level=info msg="StartContainer for \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\"" Dec 16 13:09:59.693375 containerd[1770]: time="2025-12-16T13:09:59.693277965Z" level=info msg="connecting to shim e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1" address="unix:///run/containerd/s/df7fa2dc5f22967fb11d7565ad26347ec9e794d1f734e297db1ec3abd34b64af" protocol=ttrpc version=3 Dec 16 13:09:59.731906 systemd[1]: Started cri-containerd-e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1.scope - libcontainer container e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1. Dec 16 13:09:59.865600 systemd[1]: cri-containerd-e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1.scope: Deactivated successfully. Dec 16 13:09:59.866934 containerd[1770]: time="2025-12-16T13:09:59.866889163Z" level=info msg="received container exit event container_id:\"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\" id:\"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\" pid:3629 exited_at:{seconds:1765890599 nanos:866543466}" Dec 16 13:09:59.877092 containerd[1770]: time="2025-12-16T13:09:59.877045892Z" level=info msg="StartContainer for \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\" returns successfully" Dec 16 13:09:59.891478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1-rootfs.mount: Deactivated successfully. 
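The exit events above carry exited_at as a seconds/nanos pair; for the bb1e587c... container that is {1765890598, 757455740}. Converting it to a readable timestamp with the standard library:

package main

import (
	"fmt"
	"time"
)

func main() {
	// seconds/nanos pair taken from the container exit event above.
	exitedAt := time.Unix(1765890598, 757455740).UTC()
	fmt.Println("container exited at", exitedAt.Format(time.RFC3339Nano))
}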
Dec 16 13:10:00.652482 containerd[1770]: time="2025-12-16T13:10:00.652343942Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:10:00.666622 containerd[1770]: time="2025-12-16T13:10:00.666517466Z" level=info msg="Container 0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:00.678468 containerd[1770]: time="2025-12-16T13:10:00.678374117Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\"" Dec 16 13:10:00.679252 containerd[1770]: time="2025-12-16T13:10:00.679202961Z" level=info msg="StartContainer for \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\"" Dec 16 13:10:00.680506 containerd[1770]: time="2025-12-16T13:10:00.680441123Z" level=info msg="connecting to shim 0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746" address="unix:///run/containerd/s/df7fa2dc5f22967fb11d7565ad26347ec9e794d1f734e297db1ec3abd34b64af" protocol=ttrpc version=3 Dec 16 13:10:00.714773 systemd[1]: Started cri-containerd-0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746.scope - libcontainer container 0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746. Dec 16 13:10:00.757664 systemd[1]: cri-containerd-0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746.scope: Deactivated successfully. Dec 16 13:10:00.759785 containerd[1770]: time="2025-12-16T13:10:00.759639561Z" level=info msg="received container exit event container_id:\"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\" id:\"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\" pid:3669 exited_at:{seconds:1765890600 nanos:759157703}" Dec 16 13:10:00.775661 containerd[1770]: time="2025-12-16T13:10:00.775586761Z" level=info msg="StartContainer for \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\" returns successfully" Dec 16 13:10:00.795293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746-rootfs.mount: Deactivated successfully. 
Dec 16 13:10:01.658940 containerd[1770]: time="2025-12-16T13:10:01.658878603Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:10:01.674061 containerd[1770]: time="2025-12-16T13:10:01.673981812Z" level=info msg="Container a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:01.684049 containerd[1770]: time="2025-12-16T13:10:01.683927454Z" level=info msg="CreateContainer within sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\"" Dec 16 13:10:01.685042 containerd[1770]: time="2025-12-16T13:10:01.684996539Z" level=info msg="StartContainer for \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\"" Dec 16 13:10:01.686237 containerd[1770]: time="2025-12-16T13:10:01.686152523Z" level=info msg="connecting to shim a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b" address="unix:///run/containerd/s/df7fa2dc5f22967fb11d7565ad26347ec9e794d1f734e297db1ec3abd34b64af" protocol=ttrpc version=3 Dec 16 13:10:01.707775 systemd[1]: Started cri-containerd-a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b.scope - libcontainer container a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b. Dec 16 13:10:01.762224 containerd[1770]: time="2025-12-16T13:10:01.762173776Z" level=info msg="StartContainer for \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\" returns successfully" Dec 16 13:10:01.839748 kubelet[3042]: I1216 13:10:01.839704 3042 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:10:01.870072 systemd[1]: Created slice kubepods-burstable-pod0df741e9_5db7_4963_afb2_3ce1eadb1d9b.slice - libcontainer container kubepods-burstable-pod0df741e9_5db7_4963_afb2_3ce1eadb1d9b.slice. Dec 16 13:10:01.873126 systemd[1]: Created slice kubepods-burstable-pod323b71d7_54e0_417a_a70c_c4cae98650db.slice - libcontainer container kubepods-burstable-pod323b71d7_54e0_417a_a70c_c4cae98650db.slice. 
Dec 16 13:10:01.901237 kubelet[3042]: I1216 13:10:01.901136 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/323b71d7-54e0-417a-a70c-c4cae98650db-config-volume\") pod \"coredns-668d6bf9bc-gwgvr\" (UID: \"323b71d7-54e0-417a-a70c-c4cae98650db\") " pod="kube-system/coredns-668d6bf9bc-gwgvr" Dec 16 13:10:01.901237 kubelet[3042]: I1216 13:10:01.901192 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df741e9-5db7-4963-afb2-3ce1eadb1d9b-config-volume\") pod \"coredns-668d6bf9bc-b29rt\" (UID: \"0df741e9-5db7-4963-afb2-3ce1eadb1d9b\") " pod="kube-system/coredns-668d6bf9bc-b29rt" Dec 16 13:10:01.901548 kubelet[3042]: I1216 13:10:01.901331 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjt2m\" (UniqueName: \"kubernetes.io/projected/323b71d7-54e0-417a-a70c-c4cae98650db-kube-api-access-hjt2m\") pod \"coredns-668d6bf9bc-gwgvr\" (UID: \"323b71d7-54e0-417a-a70c-c4cae98650db\") " pod="kube-system/coredns-668d6bf9bc-gwgvr" Dec 16 13:10:01.901548 kubelet[3042]: I1216 13:10:01.901354 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klr68\" (UniqueName: \"kubernetes.io/projected/0df741e9-5db7-4963-afb2-3ce1eadb1d9b-kube-api-access-klr68\") pod \"coredns-668d6bf9bc-b29rt\" (UID: \"0df741e9-5db7-4963-afb2-3ce1eadb1d9b\") " pod="kube-system/coredns-668d6bf9bc-b29rt" Dec 16 13:10:02.173140 containerd[1770]: time="2025-12-16T13:10:02.173080210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b29rt,Uid:0df741e9-5db7-4963-afb2-3ce1eadb1d9b,Namespace:kube-system,Attempt:0,}" Dec 16 13:10:02.175855 containerd[1770]: time="2025-12-16T13:10:02.175610697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gwgvr,Uid:323b71d7-54e0-417a-a70c-c4cae98650db,Namespace:kube-system,Attempt:0,}" Dec 16 13:10:02.699552 kubelet[3042]: I1216 13:10:02.699407 3042 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b68n4" podStartSLOduration=6.093446359 podStartE2EDuration="13.699377153s" podCreationTimestamp="2025-12-16 13:09:49 +0000 UTC" firstStartedPulling="2025-12-16 13:09:50.51087943 +0000 UTC m=+6.001366992" lastFinishedPulling="2025-12-16 13:09:58.11681021 +0000 UTC m=+13.607297786" observedRunningTime="2025-12-16 13:10:02.697849688 +0000 UTC m=+18.188337316" watchObservedRunningTime="2025-12-16 13:10:02.699377153 +0000 UTC m=+18.189864810" Dec 16 13:10:03.711947 systemd-networkd[1584]: cilium_host: Link UP Dec 16 13:10:03.713213 systemd-networkd[1584]: cilium_net: Link UP Dec 16 13:10:03.714130 systemd-networkd[1584]: cilium_net: Gained carrier Dec 16 13:10:03.714754 systemd-networkd[1584]: cilium_host: Gained carrier Dec 16 13:10:03.803491 systemd-networkd[1584]: cilium_vxlan: Link UP Dec 16 13:10:03.803507 systemd-networkd[1584]: cilium_vxlan: Gained carrier Dec 16 13:10:03.924770 systemd-networkd[1584]: cilium_net: Gained IPv6LL Dec 16 13:10:04.022628 kernel: NET: Registered PF_ALG protocol family Dec 16 13:10:04.580697 systemd-networkd[1584]: cilium_host: Gained IPv6LL Dec 16 13:10:04.764714 systemd-networkd[1584]: lxc_health: Link UP Dec 16 13:10:04.765104 systemd-networkd[1584]: lxc_health: Gained carrier Dec 16 13:10:05.205549 systemd-networkd[1584]: lxc395f3c80afe2: Link 
UP Dec 16 13:10:05.216626 kernel: eth0: renamed from tmp91010 Dec 16 13:10:05.227643 systemd-networkd[1584]: lxcce8e878e0a9d: Link UP Dec 16 13:10:05.228457 systemd-networkd[1584]: lxc395f3c80afe2: Gained carrier Dec 16 13:10:05.228630 kernel: eth0: renamed from tmpd0cc2 Dec 16 13:10:05.228914 systemd-networkd[1584]: lxcce8e878e0a9d: Gained carrier Dec 16 13:10:05.348680 systemd-networkd[1584]: cilium_vxlan: Gained IPv6LL Dec 16 13:10:06.244796 systemd-networkd[1584]: lxc_health: Gained IPv6LL Dec 16 13:10:07.012726 systemd-networkd[1584]: lxcce8e878e0a9d: Gained IPv6LL Dec 16 13:10:07.268801 systemd-networkd[1584]: lxc395f3c80afe2: Gained IPv6LL Dec 16 13:10:08.659445 containerd[1770]: time="2025-12-16T13:10:08.659389718Z" level=info msg="connecting to shim d0cc2efa5c198066e4fdb60aa33f6eef324998b57aec8f65ef875b4debead563" address="unix:///run/containerd/s/5f60d1b0dc019dc36d4f89c56ceec60db20ed2e42226724b5804265a00a5d5a4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:10:08.684004 containerd[1770]: time="2025-12-16T13:10:08.683939360Z" level=info msg="connecting to shim 910102cfd94c01a311da788f51bdbb684e0f3e3ea5e101d7ab39a267c602e693" address="unix:///run/containerd/s/fdc45efe9909adbc0a6dbbfb68607e8ef61691d36edf5a279704e92b8522a557" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:10:08.684724 systemd[1]: Started cri-containerd-d0cc2efa5c198066e4fdb60aa33f6eef324998b57aec8f65ef875b4debead563.scope - libcontainer container d0cc2efa5c198066e4fdb60aa33f6eef324998b57aec8f65ef875b4debead563. Dec 16 13:10:08.701077 systemd[1]: Started cri-containerd-910102cfd94c01a311da788f51bdbb684e0f3e3ea5e101d7ab39a267c602e693.scope - libcontainer container 910102cfd94c01a311da788f51bdbb684e0f3e3ea5e101d7ab39a267c602e693. Dec 16 13:10:08.743992 containerd[1770]: time="2025-12-16T13:10:08.743942010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gwgvr,Uid:323b71d7-54e0-417a-a70c-c4cae98650db,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0cc2efa5c198066e4fdb60aa33f6eef324998b57aec8f65ef875b4debead563\"" Dec 16 13:10:08.746136 containerd[1770]: time="2025-12-16T13:10:08.746107524Z" level=info msg="CreateContainer within sandbox \"d0cc2efa5c198066e4fdb60aa33f6eef324998b57aec8f65ef875b4debead563\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:10:08.764204 containerd[1770]: time="2025-12-16T13:10:08.764160443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b29rt,Uid:0df741e9-5db7-4963-afb2-3ce1eadb1d9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"910102cfd94c01a311da788f51bdbb684e0f3e3ea5e101d7ab39a267c602e693\"" Dec 16 13:10:08.766737 containerd[1770]: time="2025-12-16T13:10:08.766707763Z" level=info msg="CreateContainer within sandbox \"910102cfd94c01a311da788f51bdbb684e0f3e3ea5e101d7ab39a267c602e693\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:10:08.769457 containerd[1770]: time="2025-12-16T13:10:08.769419274Z" level=info msg="Container 4bb1c4a4578befa4195e8d9e7df5c67c7f2e5afd561727a04ee0228424b92923: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:08.782699 containerd[1770]: time="2025-12-16T13:10:08.782655221Z" level=info msg="Container acdefda0f22622d68698aa7105c69d4584644878e4fae666d2a15cf43b5c14f1: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:10:08.790467 containerd[1770]: time="2025-12-16T13:10:08.789040085Z" level=info msg="CreateContainer within sandbox \"d0cc2efa5c198066e4fdb60aa33f6eef324998b57aec8f65ef875b4debead563\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4bb1c4a4578befa4195e8d9e7df5c67c7f2e5afd561727a04ee0228424b92923\"" Dec 16 13:10:08.790467 containerd[1770]: time="2025-12-16T13:10:08.789744705Z" level=info msg="StartContainer for \"4bb1c4a4578befa4195e8d9e7df5c67c7f2e5afd561727a04ee0228424b92923\"" Dec 16 13:10:08.791835 containerd[1770]: time="2025-12-16T13:10:08.791803038Z" level=info msg="connecting to shim 4bb1c4a4578befa4195e8d9e7df5c67c7f2e5afd561727a04ee0228424b92923" address="unix:///run/containerd/s/5f60d1b0dc019dc36d4f89c56ceec60db20ed2e42226724b5804265a00a5d5a4" protocol=ttrpc version=3 Dec 16 13:10:08.794446 containerd[1770]: time="2025-12-16T13:10:08.794398581Z" level=info msg="CreateContainer within sandbox \"910102cfd94c01a311da788f51bdbb684e0f3e3ea5e101d7ab39a267c602e693\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"acdefda0f22622d68698aa7105c69d4584644878e4fae666d2a15cf43b5c14f1\"" Dec 16 13:10:08.795774 containerd[1770]: time="2025-12-16T13:10:08.794931604Z" level=info msg="StartContainer for \"acdefda0f22622d68698aa7105c69d4584644878e4fae666d2a15cf43b5c14f1\"" Dec 16 13:10:08.796312 containerd[1770]: time="2025-12-16T13:10:08.796257681Z" level=info msg="connecting to shim acdefda0f22622d68698aa7105c69d4584644878e4fae666d2a15cf43b5c14f1" address="unix:///run/containerd/s/fdc45efe9909adbc0a6dbbfb68607e8ef61691d36edf5a279704e92b8522a557" protocol=ttrpc version=3 Dec 16 13:10:08.811731 systemd[1]: Started cri-containerd-4bb1c4a4578befa4195e8d9e7df5c67c7f2e5afd561727a04ee0228424b92923.scope - libcontainer container 4bb1c4a4578befa4195e8d9e7df5c67c7f2e5afd561727a04ee0228424b92923. Dec 16 13:10:08.814876 systemd[1]: Started cri-containerd-acdefda0f22622d68698aa7105c69d4584644878e4fae666d2a15cf43b5c14f1.scope - libcontainer container acdefda0f22622d68698aa7105c69d4584644878e4fae666d2a15cf43b5c14f1. Dec 16 13:10:08.849880 containerd[1770]: time="2025-12-16T13:10:08.849840206Z" level=info msg="StartContainer for \"4bb1c4a4578befa4195e8d9e7df5c67c7f2e5afd561727a04ee0228424b92923\" returns successfully" Dec 16 13:10:08.850028 containerd[1770]: time="2025-12-16T13:10:08.849958141Z" level=info msg="StartContainer for \"acdefda0f22622d68698aa7105c69d4584644878e4fae666d2a15cf43b5c14f1\" returns successfully" Dec 16 13:10:09.661191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3720351966.mount: Deactivated successfully. 
Dec 16 13:10:09.702325 kubelet[3042]: I1216 13:10:09.702176 3042 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-b29rt" podStartSLOduration=20.702101496 podStartE2EDuration="20.702101496s" podCreationTimestamp="2025-12-16 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:10:09.701497294 +0000 UTC m=+25.191984918" watchObservedRunningTime="2025-12-16 13:10:09.702101496 +0000 UTC m=+25.192589147" Dec 16 13:10:09.718916 kubelet[3042]: I1216 13:10:09.718096 3042 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gwgvr" podStartSLOduration=20.718057151 podStartE2EDuration="20.718057151s" podCreationTimestamp="2025-12-16 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:10:09.717814996 +0000 UTC m=+25.208302628" watchObservedRunningTime="2025-12-16 13:10:09.718057151 +0000 UTC m=+25.208544731" Dec 16 13:10:10.213673 kubelet[3042]: I1216 13:10:10.213564 3042 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:12:06.441608 systemd[1]: Started sshd@9-10.0.21.22:22-147.75.109.163:38906.service - OpenSSH per-connection server daemon (147.75.109.163:38906). Dec 16 13:12:07.440567 sshd[4448]: Accepted publickey for core from 147.75.109.163 port 38906 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:07.444269 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:07.460647 systemd-logind[1751]: New session 10 of user core. Dec 16 13:12:07.475958 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:12:08.190686 sshd[4451]: Connection closed by 147.75.109.163 port 38906 Dec 16 13:12:08.191692 sshd-session[4448]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:08.196997 systemd[1]: sshd@9-10.0.21.22:22-147.75.109.163:38906.service: Deactivated successfully. Dec 16 13:12:08.200593 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:12:08.203436 systemd-logind[1751]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:12:08.206586 systemd-logind[1751]: Removed session 10. Dec 16 13:12:13.365386 systemd[1]: Started sshd@10-10.0.21.22:22-147.75.109.163:43048.service - OpenSSH per-connection server daemon (147.75.109.163:43048). Dec 16 13:12:14.340510 sshd[4482]: Accepted publickey for core from 147.75.109.163 port 43048 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:14.342994 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:14.349744 systemd-logind[1751]: New session 11 of user core. Dec 16 13:12:14.359748 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:12:15.073851 sshd[4485]: Connection closed by 147.75.109.163 port 43048 Dec 16 13:12:15.074471 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:15.077924 systemd[1]: sshd@10-10.0.21.22:22-147.75.109.163:43048.service: Deactivated successfully. Dec 16 13:12:15.080961 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:12:15.083043 systemd-logind[1751]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:12:15.085266 systemd-logind[1751]: Removed session 11. 
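The ~20.70s podStartSLOduration reported for coredns-668d6bf9bc-b29rt lines up with the gap between the pod creation timestamp and the observed running time, both quoted verbatim in the log line above. A small check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-12-16 13:09:49 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-12-16 13:10:09.702101496 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("podStartSLOduration =", running.Sub(created)) // 20.702101496s
}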
Dec 16 13:12:20.244767 systemd[1]: Started sshd@11-10.0.21.22:22-147.75.109.163:43062.service - OpenSSH per-connection server daemon (147.75.109.163:43062). Dec 16 13:12:21.228832 sshd[4503]: Accepted publickey for core from 147.75.109.163 port 43062 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:21.230993 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:21.237762 systemd-logind[1751]: New session 12 of user core. Dec 16 13:12:21.246867 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:12:21.965540 sshd[4508]: Connection closed by 147.75.109.163 port 43062 Dec 16 13:12:21.966129 sshd-session[4503]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:21.971415 systemd[1]: sshd@11-10.0.21.22:22-147.75.109.163:43062.service: Deactivated successfully. Dec 16 13:12:21.974327 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:12:21.975975 systemd-logind[1751]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:12:21.977737 systemd-logind[1751]: Removed session 12. Dec 16 13:12:22.134256 systemd[1]: Started sshd@12-10.0.21.22:22-147.75.109.163:43078.service - OpenSSH per-connection server daemon (147.75.109.163:43078). Dec 16 13:12:23.102640 sshd[4526]: Accepted publickey for core from 147.75.109.163 port 43078 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:23.104647 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:23.108920 systemd-logind[1751]: New session 13 of user core. Dec 16 13:12:23.130801 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:12:23.878219 sshd[4529]: Connection closed by 147.75.109.163 port 43078 Dec 16 13:12:23.878806 sshd-session[4526]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:23.883231 systemd[1]: sshd@12-10.0.21.22:22-147.75.109.163:43078.service: Deactivated successfully. Dec 16 13:12:23.887489 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:12:23.890348 systemd-logind[1751]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:12:23.892415 systemd-logind[1751]: Removed session 13. Dec 16 13:12:24.048665 systemd[1]: Started sshd@13-10.0.21.22:22-147.75.109.163:52480.service - OpenSSH per-connection server daemon (147.75.109.163:52480). Dec 16 13:12:25.045684 sshd[4544]: Accepted publickey for core from 147.75.109.163 port 52480 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:25.048184 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:25.056875 systemd-logind[1751]: New session 14 of user core. Dec 16 13:12:25.072794 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:12:25.782666 sshd[4547]: Connection closed by 147.75.109.163 port 52480 Dec 16 13:12:25.782809 sshd-session[4544]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:25.788460 systemd[1]: sshd@13-10.0.21.22:22-147.75.109.163:52480.service: Deactivated successfully. Dec 16 13:12:25.790378 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:12:25.791395 systemd-logind[1751]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:12:25.792671 systemd-logind[1751]: Removed session 14. Dec 16 13:12:30.950881 systemd[1]: Started sshd@14-10.0.21.22:22-147.75.109.163:52492.service - OpenSSH per-connection server daemon (147.75.109.163:52492). 
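The sshd entries in this stretch follow a fixed shape (Accepted publickey for USER from ADDR port PORT ssh2: TYPE FINGERPRINT). A small regexp written only for this log format, not a general sshd parser:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// One of the sshd messages above, trimmed to the sshd payload itself.
	line := "Accepted publickey for core from 147.75.109.163 port 43062 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig"
	re := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("user=%s addr=%s port=%s keytype=%s fingerprint=%s\n", m[1], m[2], m[3], m[4], m[5])
}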
Dec 16 13:12:31.922785 sshd[4564]: Accepted publickey for core from 147.75.109.163 port 52492 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:31.924612 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:31.933502 systemd-logind[1751]: New session 15 of user core. Dec 16 13:12:31.951751 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:12:32.676554 sshd[4567]: Connection closed by 147.75.109.163 port 52492 Dec 16 13:12:32.676989 sshd-session[4564]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:32.680572 systemd[1]: sshd@14-10.0.21.22:22-147.75.109.163:52492.service: Deactivated successfully. Dec 16 13:12:32.682334 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:12:32.683054 systemd-logind[1751]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:12:32.684064 systemd-logind[1751]: Removed session 15. Dec 16 13:12:32.849356 systemd[1]: Started sshd@15-10.0.21.22:22-147.75.109.163:35572.service - OpenSSH per-connection server daemon (147.75.109.163:35572). Dec 16 13:12:33.832963 sshd[4584]: Accepted publickey for core from 147.75.109.163 port 35572 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:33.834922 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:33.841354 systemd-logind[1751]: New session 16 of user core. Dec 16 13:12:33.847706 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:12:34.635551 sshd[4587]: Connection closed by 147.75.109.163 port 35572 Dec 16 13:12:34.636162 sshd-session[4584]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:34.641327 systemd[1]: sshd@15-10.0.21.22:22-147.75.109.163:35572.service: Deactivated successfully. Dec 16 13:12:34.643714 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:12:34.646240 systemd-logind[1751]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:12:34.648348 systemd-logind[1751]: Removed session 16. Dec 16 13:12:34.822881 systemd[1]: Started sshd@16-10.0.21.22:22-147.75.109.163:35586.service - OpenSSH per-connection server daemon (147.75.109.163:35586). Dec 16 13:12:35.848569 sshd[4602]: Accepted publickey for core from 147.75.109.163 port 35586 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:35.851293 sshd-session[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:35.862785 systemd-logind[1751]: New session 17 of user core. Dec 16 13:12:35.884961 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 13:12:37.236335 sshd[4605]: Connection closed by 147.75.109.163 port 35586 Dec 16 13:12:37.237054 sshd-session[4602]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:37.241468 systemd[1]: sshd@16-10.0.21.22:22-147.75.109.163:35586.service: Deactivated successfully. Dec 16 13:12:37.245602 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:12:37.247236 systemd-logind[1751]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:12:37.248416 systemd-logind[1751]: Removed session 17. Dec 16 13:12:37.412994 systemd[1]: Started sshd@17-10.0.21.22:22-147.75.109.163:35588.service - OpenSSH per-connection server daemon (147.75.109.163:35588). 
Dec 16 13:12:38.407582 sshd[4628]: Accepted publickey for core from 147.75.109.163 port 35588 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:38.409377 sshd-session[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:38.416297 systemd-logind[1751]: New session 18 of user core. Dec 16 13:12:38.427699 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:12:39.139435 update_engine[1752]: I20251216 13:12:39.139364 1752 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 13:12:39.139435 update_engine[1752]: I20251216 13:12:39.139410 1752 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 13:12:39.139974 update_engine[1752]: I20251216 13:12:39.139596 1752 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 13:12:39.139974 update_engine[1752]: I20251216 13:12:39.139967 1752 omaha_request_params.cc:62] Current group set to stable Dec 16 13:12:39.140097 update_engine[1752]: I20251216 13:12:39.140075 1752 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 13:12:39.140097 update_engine[1752]: I20251216 13:12:39.140087 1752 update_attempter.cc:643] Scheduling an action processor start. Dec 16 13:12:39.140162 update_engine[1752]: I20251216 13:12:39.140105 1752 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:12:39.140162 update_engine[1752]: I20251216 13:12:39.140128 1752 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 13:12:39.140219 update_engine[1752]: I20251216 13:12:39.140177 1752 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:12:39.140219 update_engine[1752]: I20251216 13:12:39.140187 1752 omaha_request_action.cc:272] Request: Dec 16 13:12:39.140219 update_engine[1752]: Dec 16 13:12:39.140219 update_engine[1752]: Dec 16 13:12:39.140219 update_engine[1752]: Dec 16 13:12:39.140219 update_engine[1752]: Dec 16 13:12:39.140219 update_engine[1752]: Dec 16 13:12:39.140219 update_engine[1752]: Dec 16 13:12:39.140219 update_engine[1752]: Dec 16 13:12:39.140219 update_engine[1752]: Dec 16 13:12:39.140219 update_engine[1752]: I20251216 13:12:39.140192 1752 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:12:39.141143 update_engine[1752]: I20251216 13:12:39.141110 1752 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:12:39.141239 locksmithd[1802]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 13:12:39.141647 update_engine[1752]: I20251216 13:12:39.141614 1752 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:12:39.148760 update_engine[1752]: E20251216 13:12:39.148684 1752 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 13:12:39.148903 update_engine[1752]: I20251216 13:12:39.148779 1752 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 13:12:39.273963 sshd[4631]: Connection closed by 147.75.109.163 port 35588 Dec 16 13:12:39.274429 sshd-session[4628]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:39.278369 systemd[1]: sshd@17-10.0.21.22:22-147.75.109.163:35588.service: Deactivated successfully. Dec 16 13:12:39.280672 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:12:39.282208 systemd-logind[1751]: Session 18 logged out. 
Waiting for processes to exit. Dec 16 13:12:39.283439 systemd-logind[1751]: Removed session 18. Dec 16 13:12:39.445111 systemd[1]: Started sshd@18-10.0.21.22:22-147.75.109.163:35590.service - OpenSSH per-connection server daemon (147.75.109.163:35590). Dec 16 13:12:40.422327 sshd[4646]: Accepted publickey for core from 147.75.109.163 port 35590 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:40.423718 sshd-session[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:40.430723 systemd-logind[1751]: New session 19 of user core. Dec 16 13:12:40.449774 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 13:12:41.166695 sshd[4649]: Connection closed by 147.75.109.163 port 35590 Dec 16 13:12:41.167174 sshd-session[4646]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:41.172663 systemd[1]: sshd@18-10.0.21.22:22-147.75.109.163:35590.service: Deactivated successfully. Dec 16 13:12:41.174830 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:12:41.175947 systemd-logind[1751]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:12:41.177632 systemd-logind[1751]: Removed session 19. Dec 16 13:12:46.338259 systemd[1]: Started sshd@19-10.0.21.22:22-147.75.109.163:39044.service - OpenSSH per-connection server daemon (147.75.109.163:39044). Dec 16 13:12:47.334156 sshd[4671]: Accepted publickey for core from 147.75.109.163 port 39044 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:47.335792 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:47.340613 systemd-logind[1751]: New session 20 of user core. Dec 16 13:12:47.356802 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:12:48.065689 sshd[4674]: Connection closed by 147.75.109.163 port 39044 Dec 16 13:12:48.066173 sshd-session[4671]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:48.070056 systemd[1]: sshd@19-10.0.21.22:22-147.75.109.163:39044.service: Deactivated successfully. Dec 16 13:12:48.071813 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:12:48.072547 systemd-logind[1751]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:12:48.073769 systemd-logind[1751]: Removed session 20. Dec 16 13:12:49.138734 update_engine[1752]: I20251216 13:12:49.138602 1752 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:12:49.138734 update_engine[1752]: I20251216 13:12:49.138718 1752 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:12:49.139312 update_engine[1752]: I20251216 13:12:49.139086 1752 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:12:49.145418 update_engine[1752]: E20251216 13:12:49.145345 1752 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 13:12:49.145538 update_engine[1752]: I20251216 13:12:49.145448 1752 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 16 13:12:53.237255 systemd[1]: Started sshd@20-10.0.21.22:22-147.75.109.163:33194.service - OpenSSH per-connection server daemon (147.75.109.163:33194). 
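The update_engine retries above fail with "Could not resolve host: disabled" because the Omaha request is literally being posted to the server name "disabled" (see "Posting an Omaha request to disabled" earlier in the log), which is what Flatcar's update configuration produces when the update server is set to disabled. A rough way to confirm this on such a host; the paths follow the Flatcar documentation and the client flag is an assumption, not something taken from this log:

    # image defaults (e.g. GROUP=stable) plus any local override
    cat /usr/share/flatcar/update.conf
    cat /etc/flatcar/update.conf      # SERVER=disabled makes the fetcher try the literal host "disabled"
    # query the running update_engine over D-Bus (flag name assumed from the upstream client)
    update_engine_client -status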
Dec 16 13:12:54.231336 sshd[4694]: Accepted publickey for core from 147.75.109.163 port 33194 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:54.234003 sshd-session[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:54.245104 systemd-logind[1751]: New session 21 of user core. Dec 16 13:12:54.251966 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:12:54.968791 sshd[4697]: Connection closed by 147.75.109.163 port 33194 Dec 16 13:12:54.969629 sshd-session[4694]: pam_unix(sshd:session): session closed for user core Dec 16 13:12:54.978261 systemd[1]: sshd@20-10.0.21.22:22-147.75.109.163:33194.service: Deactivated successfully. Dec 16 13:12:54.982368 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:12:54.984079 systemd-logind[1751]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:12:54.986140 systemd-logind[1751]: Removed session 21. Dec 16 13:12:55.149507 systemd[1]: Started sshd@21-10.0.21.22:22-147.75.109.163:33210.service - OpenSSH per-connection server daemon (147.75.109.163:33210). Dec 16 13:12:56.219839 sshd[4714]: Accepted publickey for core from 147.75.109.163 port 33210 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:12:56.223181 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:12:56.231922 systemd-logind[1751]: New session 22 of user core. Dec 16 13:12:56.240773 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 13:12:58.876789 containerd[1770]: time="2025-12-16T13:12:58.876732594Z" level=info msg="StopContainer for \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\" with timeout 30 (s)" Dec 16 13:12:58.877517 containerd[1770]: time="2025-12-16T13:12:58.877266733Z" level=info msg="Stop container \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\" with signal terminated" Dec 16 13:12:58.888563 systemd[1]: cri-containerd-caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861.scope: Deactivated successfully. Dec 16 13:12:58.889904 containerd[1770]: time="2025-12-16T13:12:58.889856661Z" level=info msg="received container exit event container_id:\"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\" id:\"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\" pid:3469 exited_at:{seconds:1765890778 nanos:889372394}" Dec 16 13:12:58.906785 containerd[1770]: time="2025-12-16T13:12:58.906712920Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:12:58.909600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861-rootfs.mount: Deactivated successfully. 
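The "failed to reload cni configuration" error above is containerd reacting to the removal of /etc/cni/net.d/05-cilium.conf while the Cilium workloads are being stopped; with no network config left, the kubelet later reports "Container runtime network not ready ... cni plugin not initialized" until a new agent writes the file back. A quick, purely illustrative check for a node in this state, using the node name that appears later in this log:

    # no CNI network config present while the agent is down
    ls -l /etc/cni/net.d/
    # the kubelet surfaces the same condition on the node's Ready condition
    kubectl get node ci-4459-2-2-3-ab2e4a938e -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'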
Dec 16 13:12:58.913680 containerd[1770]: time="2025-12-16T13:12:58.913638784Z" level=info msg="StopContainer for \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\" with timeout 2 (s)" Dec 16 13:12:58.914785 containerd[1770]: time="2025-12-16T13:12:58.913877909Z" level=info msg="Stop container \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\" with signal terminated" Dec 16 13:12:58.920763 systemd-networkd[1584]: lxc_health: Link DOWN Dec 16 13:12:58.920772 systemd-networkd[1584]: lxc_health: Lost carrier Dec 16 13:12:58.924323 containerd[1770]: time="2025-12-16T13:12:58.924287179Z" level=info msg="StopContainer for \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\" returns successfully" Dec 16 13:12:58.924939 containerd[1770]: time="2025-12-16T13:12:58.924889756Z" level=info msg="StopPodSandbox for \"ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886\"" Dec 16 13:12:58.925025 containerd[1770]: time="2025-12-16T13:12:58.924953136Z" level=info msg="Container to stop \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:12:58.933249 systemd[1]: cri-containerd-ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886.scope: Deactivated successfully. Dec 16 13:12:58.934428 containerd[1770]: time="2025-12-16T13:12:58.934381643Z" level=info msg="received sandbox exit event container_id:\"ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886\" id:\"ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886\" exit_status:137 exited_at:{seconds:1765890778 nanos:934068778}" monitor_name=podsandbox Dec 16 13:12:58.942030 systemd[1]: cri-containerd-a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b.scope: Deactivated successfully. Dec 16 13:12:58.942319 systemd[1]: cri-containerd-a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b.scope: Consumed 7.016s CPU time, 136M memory peak, 128K read from disk, 13.3M written to disk. Dec 16 13:12:58.943197 containerd[1770]: time="2025-12-16T13:12:58.942783345Z" level=info msg="received container exit event container_id:\"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\" id:\"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\" pid:3706 exited_at:{seconds:1765890778 nanos:942552343}" Dec 16 13:12:58.957848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886-rootfs.mount: Deactivated successfully. Dec 16 13:12:58.963281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b-rootfs.mount: Deactivated successfully. 
Dec 16 13:12:58.966010 containerd[1770]: time="2025-12-16T13:12:58.965881742Z" level=info msg="shim disconnected" id=ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886 namespace=k8s.io Dec 16 13:12:58.966010 containerd[1770]: time="2025-12-16T13:12:58.966005704Z" level=warning msg="cleaning up after shim disconnected" id=ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886 namespace=k8s.io Dec 16 13:12:58.966131 containerd[1770]: time="2025-12-16T13:12:58.966013043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:12:58.970091 containerd[1770]: time="2025-12-16T13:12:58.970026213Z" level=info msg="StopContainer for \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\" returns successfully" Dec 16 13:12:58.970532 containerd[1770]: time="2025-12-16T13:12:58.970501876Z" level=info msg="StopPodSandbox for \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\"" Dec 16 13:12:58.970577 containerd[1770]: time="2025-12-16T13:12:58.970566751Z" level=info msg="Container to stop \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:12:58.970600 containerd[1770]: time="2025-12-16T13:12:58.970577530Z" level=info msg="Container to stop \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:12:58.970600 containerd[1770]: time="2025-12-16T13:12:58.970585679Z" level=info msg="Container to stop \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:12:58.970600 containerd[1770]: time="2025-12-16T13:12:58.970592574Z" level=info msg="Container to stop \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:12:58.970600 containerd[1770]: time="2025-12-16T13:12:58.970599302Z" level=info msg="Container to stop \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 13:12:58.976822 systemd[1]: cri-containerd-8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35.scope: Deactivated successfully. Dec 16 13:12:58.977933 containerd[1770]: time="2025-12-16T13:12:58.977896393Z" level=info msg="received sandbox exit event container_id:\"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" id:\"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" exit_status:137 exited_at:{seconds:1765890778 nanos:977716643}" monitor_name=podsandbox Dec 16 13:12:58.980412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886-shm.mount: Deactivated successfully. 
Dec 16 13:12:58.980784 containerd[1770]: time="2025-12-16T13:12:58.978787158Z" level=info msg="received sandbox container exit event sandbox_id:\"ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886\" exit_status:137 exited_at:{seconds:1765890778 nanos:934068778}" monitor_name=criService Dec 16 13:12:58.980784 containerd[1770]: time="2025-12-16T13:12:58.979755721Z" level=info msg="TearDown network for sandbox \"ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886\" successfully" Dec 16 13:12:58.980784 containerd[1770]: time="2025-12-16T13:12:58.980684197Z" level=info msg="StopPodSandbox for \"ad0c1fdd6e1c80458ebd535c46d1b03819e5c43623f949999e6071b5915bf886\" returns successfully" Dec 16 13:12:58.997935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35-rootfs.mount: Deactivated successfully. Dec 16 13:12:59.001833 containerd[1770]: time="2025-12-16T13:12:59.001793279Z" level=info msg="shim disconnected" id=8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35 namespace=k8s.io Dec 16 13:12:59.001833 containerd[1770]: time="2025-12-16T13:12:59.001827580Z" level=warning msg="cleaning up after shim disconnected" id=8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35 namespace=k8s.io Dec 16 13:12:59.002030 containerd[1770]: time="2025-12-16T13:12:59.001836796Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 13:12:59.012747 containerd[1770]: time="2025-12-16T13:12:59.012678911Z" level=info msg="received sandbox container exit event sandbox_id:\"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" exit_status:137 exited_at:{seconds:1765890778 nanos:977716643}" monitor_name=criService Dec 16 13:12:59.012910 containerd[1770]: time="2025-12-16T13:12:59.012882955Z" level=info msg="TearDown network for sandbox \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" successfully" Dec 16 13:12:59.012940 containerd[1770]: time="2025-12-16T13:12:59.012912330Z" level=info msg="StopPodSandbox for \"8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35\" returns successfully" Dec 16 13:12:59.103856 kubelet[3042]: I1216 13:12:59.103700 3042 scope.go:117] "RemoveContainer" containerID="a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b" Dec 16 13:12:59.105692 containerd[1770]: time="2025-12-16T13:12:59.105635229Z" level=info msg="RemoveContainer for \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\"" Dec 16 13:12:59.114617 containerd[1770]: time="2025-12-16T13:12:59.114517319Z" level=info msg="RemoveContainer for \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\" returns successfully" Dec 16 13:12:59.114913 kubelet[3042]: I1216 13:12:59.114875 3042 scope.go:117] "RemoveContainer" containerID="0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746" Dec 16 13:12:59.116186 containerd[1770]: time="2025-12-16T13:12:59.116137201Z" level=info msg="RemoveContainer for \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\"" Dec 16 13:12:59.121878 containerd[1770]: time="2025-12-16T13:12:59.121817574Z" level=info msg="RemoveContainer for \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\" returns successfully" Dec 16 13:12:59.122080 kubelet[3042]: I1216 13:12:59.122044 3042 scope.go:117] "RemoveContainer" containerID="e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1" Dec 16 13:12:59.123809 containerd[1770]: 
time="2025-12-16T13:12:59.123760771Z" level=info msg="RemoveContainer for \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\"" Dec 16 13:12:59.128541 containerd[1770]: time="2025-12-16T13:12:59.128410418Z" level=info msg="RemoveContainer for \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\" returns successfully" Dec 16 13:12:59.128640 kubelet[3042]: I1216 13:12:59.128614 3042 scope.go:117] "RemoveContainer" containerID="bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f" Dec 16 13:12:59.129931 containerd[1770]: time="2025-12-16T13:12:59.129890765Z" level=info msg="RemoveContainer for \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\"" Dec 16 13:12:59.134691 containerd[1770]: time="2025-12-16T13:12:59.134651757Z" level=info msg="RemoveContainer for \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\" returns successfully" Dec 16 13:12:59.134846 kubelet[3042]: I1216 13:12:59.134804 3042 scope.go:117] "RemoveContainer" containerID="fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30" Dec 16 13:12:59.135941 containerd[1770]: time="2025-12-16T13:12:59.135907593Z" level=info msg="RemoveContainer for \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\"" Dec 16 13:12:59.138661 update_engine[1752]: I20251216 13:12:59.138588 1752 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:12:59.138959 update_engine[1752]: I20251216 13:12:59.138671 1752 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:12:59.139055 update_engine[1752]: I20251216 13:12:59.139029 1752 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:12:59.140087 containerd[1770]: time="2025-12-16T13:12:59.140051518Z" level=info msg="RemoveContainer for \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\" returns successfully" Dec 16 13:12:59.140224 kubelet[3042]: I1216 13:12:59.140200 3042 scope.go:117] "RemoveContainer" containerID="a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b" Dec 16 13:12:59.140439 containerd[1770]: time="2025-12-16T13:12:59.140364889Z" level=error msg="ContainerStatus for \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\": not found" Dec 16 13:12:59.140550 kubelet[3042]: E1216 13:12:59.140520 3042 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\": not found" containerID="a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b" Dec 16 13:12:59.140635 kubelet[3042]: I1216 13:12:59.140560 3042 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b"} err="failed to get container status \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a92a9e1792ae8d00edea7de52ad98140724ec212af349227a3673e3331a78d6b\": not found" Dec 16 13:12:59.140665 kubelet[3042]: I1216 13:12:59.140636 3042 scope.go:117] "RemoveContainer" containerID="0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746" Dec 16 13:12:59.140791 containerd[1770]: time="2025-12-16T13:12:59.140761358Z" 
level=error msg="ContainerStatus for \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\": not found" Dec 16 13:12:59.140884 kubelet[3042]: E1216 13:12:59.140868 3042 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\": not found" containerID="0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746" Dec 16 13:12:59.140911 kubelet[3042]: I1216 13:12:59.140891 3042 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746"} err="failed to get container status \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ffcf7b3b7d6f8e82ad426bf2cd10820cb60f87ad7912cf5fe4ceab5fcb87746\": not found" Dec 16 13:12:59.140911 kubelet[3042]: I1216 13:12:59.140906 3042 scope.go:117] "RemoveContainer" containerID="e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1" Dec 16 13:12:59.141058 containerd[1770]: time="2025-12-16T13:12:59.141031599Z" level=error msg="ContainerStatus for \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\": not found" Dec 16 13:12:59.141120 kubelet[3042]: E1216 13:12:59.141106 3042 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\": not found" containerID="e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1" Dec 16 13:12:59.141146 kubelet[3042]: I1216 13:12:59.141122 3042 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1"} err="failed to get container status \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e274c0af6179eaba8bde232694a52fc47539e6d58e6c801fe2b2c549c00afec1\": not found" Dec 16 13:12:59.141146 kubelet[3042]: I1216 13:12:59.141133 3042 scope.go:117] "RemoveContainer" containerID="bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f" Dec 16 13:12:59.141274 containerd[1770]: time="2025-12-16T13:12:59.141248879Z" level=error msg="ContainerStatus for \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\": not found" Dec 16 13:12:59.141366 kubelet[3042]: E1216 13:12:59.141347 3042 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\": not found" containerID="bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f" Dec 16 13:12:59.141392 kubelet[3042]: I1216 13:12:59.141370 3042 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f"} err="failed to get container status \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb1e587cca6b5f8e2fc8a35838441653172e921cb11e3c59f4f0d708dbd3892f\": not found" Dec 16 13:12:59.141392 kubelet[3042]: I1216 13:12:59.141385 3042 scope.go:117] "RemoveContainer" containerID="fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30" Dec 16 13:12:59.141594 containerd[1770]: time="2025-12-16T13:12:59.141571016Z" level=error msg="ContainerStatus for \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\": not found" Dec 16 13:12:59.141677 kubelet[3042]: E1216 13:12:59.141662 3042 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\": not found" containerID="fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30" Dec 16 13:12:59.141705 kubelet[3042]: I1216 13:12:59.141681 3042 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30"} err="failed to get container status \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe5362e714628f975aebd0c2f1119266022a863afc0c1e457ef61f35ea38ec30\": not found" Dec 16 13:12:59.141705 kubelet[3042]: I1216 13:12:59.141694 3042 scope.go:117] "RemoveContainer" containerID="caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861" Dec 16 13:12:59.142833 containerd[1770]: time="2025-12-16T13:12:59.142798284Z" level=info msg="RemoveContainer for \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\"" Dec 16 13:12:59.145206 update_engine[1752]: E20251216 13:12:59.145160 1752 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 13:12:59.145273 update_engine[1752]: I20251216 13:12:59.145244 1752 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 16 13:12:59.147613 containerd[1770]: time="2025-12-16T13:12:59.147584003Z" level=info msg="RemoveContainer for \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\" returns successfully" Dec 16 13:12:59.147768 kubelet[3042]: I1216 13:12:59.147748 3042 scope.go:117] "RemoveContainer" containerID="caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861" Dec 16 13:12:59.147940 containerd[1770]: time="2025-12-16T13:12:59.147893712Z" level=error msg="ContainerStatus for \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\": not found" Dec 16 13:12:59.148019 kubelet[3042]: E1216 13:12:59.148004 3042 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\": not found" 
containerID="caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861" Dec 16 13:12:59.148056 kubelet[3042]: I1216 13:12:59.148033 3042 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861"} err="failed to get container status \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\": rpc error: code = NotFound desc = an error occurred when try to find container \"caa6c7c871831f6c997f90e9bd31fad354519c48a4494eafe836da8d8fb90861\": not found" Dec 16 13:12:59.160281 kubelet[3042]: I1216 13:12:59.160229 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cni-path\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160281 kubelet[3042]: I1216 13:12:59.160266 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-etc-cni-netd\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160281 kubelet[3042]: I1216 13:12:59.160285 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-host-proc-sys-net\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160432 kubelet[3042]: I1216 13:12:59.160297 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cni-path" (OuterVolumeSpecName: "cni-path") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.160432 kubelet[3042]: I1216 13:12:59.160324 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.160432 kubelet[3042]: I1216 13:12:59.160345 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.160432 kubelet[3042]: I1216 13:12:59.160310 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb0b880f-daf0-41ea-87f4-0c02499c98ed-clustermesh-secrets\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160432 kubelet[3042]: I1216 13:12:59.160382 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-cgroup\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160562 kubelet[3042]: I1216 13:12:59.160402 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-config-path\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160562 kubelet[3042]: I1216 13:12:59.160418 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-hostproc\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160562 kubelet[3042]: I1216 13:12:59.160436 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5xwx\" (UniqueName: \"kubernetes.io/projected/61722c5a-7be5-40ef-a95b-4c8300fb4e98-kube-api-access-l5xwx\") pod \"61722c5a-7be5-40ef-a95b-4c8300fb4e98\" (UID: \"61722c5a-7be5-40ef-a95b-4c8300fb4e98\") " Dec 16 13:12:59.160562 kubelet[3042]: I1216 13:12:59.160454 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-hubble-tls\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160562 kubelet[3042]: I1216 13:12:59.160468 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-run\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160562 kubelet[3042]: I1216 13:12:59.160481 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-host-proc-sys-kernel\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160684 kubelet[3042]: I1216 13:12:59.160496 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmp6n\" (UniqueName: \"kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-kube-api-access-zmp6n\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160684 kubelet[3042]: I1216 13:12:59.160508 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-bpf-maps\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: 
\"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160684 kubelet[3042]: I1216 13:12:59.160521 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-lib-modules\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160684 kubelet[3042]: I1216 13:12:59.160560 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-xtables-lock\") pod \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\" (UID: \"fb0b880f-daf0-41ea-87f4-0c02499c98ed\") " Dec 16 13:12:59.160684 kubelet[3042]: I1216 13:12:59.160577 3042 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61722c5a-7be5-40ef-a95b-4c8300fb4e98-cilium-config-path\") pod \"61722c5a-7be5-40ef-a95b-4c8300fb4e98\" (UID: \"61722c5a-7be5-40ef-a95b-4c8300fb4e98\") " Dec 16 13:12:59.160684 kubelet[3042]: I1216 13:12:59.160400 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.160809 kubelet[3042]: I1216 13:12:59.160614 3042 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cni-path\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.160809 kubelet[3042]: I1216 13:12:59.160590 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.160809 kubelet[3042]: I1216 13:12:59.160623 3042 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-etc-cni-netd\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.160809 kubelet[3042]: I1216 13:12:59.160633 3042 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-host-proc-sys-net\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.160809 kubelet[3042]: I1216 13:12:59.160606 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-hostproc" (OuterVolumeSpecName: "hostproc") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.160809 kubelet[3042]: I1216 13:12:59.160655 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.161196 kubelet[3042]: I1216 13:12:59.160963 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.161196 kubelet[3042]: I1216 13:12:59.160971 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.161196 kubelet[3042]: I1216 13:12:59.160987 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 13:12:59.162484 kubelet[3042]: I1216 13:12:59.162430 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:12:59.162684 kubelet[3042]: I1216 13:12:59.162658 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61722c5a-7be5-40ef-a95b-4c8300fb4e98-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61722c5a-7be5-40ef-a95b-4c8300fb4e98" (UID: "61722c5a-7be5-40ef-a95b-4c8300fb4e98"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:12:59.162833 kubelet[3042]: I1216 13:12:59.162804 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0b880f-daf0-41ea-87f4-0c02499c98ed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:12:59.162905 kubelet[3042]: I1216 13:12:59.162889 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61722c5a-7be5-40ef-a95b-4c8300fb4e98-kube-api-access-l5xwx" (OuterVolumeSpecName: "kube-api-access-l5xwx") pod "61722c5a-7be5-40ef-a95b-4c8300fb4e98" (UID: "61722c5a-7be5-40ef-a95b-4c8300fb4e98"). InnerVolumeSpecName "kube-api-access-l5xwx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:12:59.163004 kubelet[3042]: I1216 13:12:59.162976 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-kube-api-access-zmp6n" (OuterVolumeSpecName: "kube-api-access-zmp6n") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "kube-api-access-zmp6n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:12:59.163176 kubelet[3042]: I1216 13:12:59.163138 3042 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fb0b880f-daf0-41ea-87f4-0c02499c98ed" (UID: "fb0b880f-daf0-41ea-87f4-0c02499c98ed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:12:59.261201 kubelet[3042]: I1216 13:12:59.261146 3042 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb0b880f-daf0-41ea-87f4-0c02499c98ed-clustermesh-secrets\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261201 kubelet[3042]: I1216 13:12:59.261177 3042 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-hostproc\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261201 kubelet[3042]: I1216 13:12:59.261201 3042 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l5xwx\" (UniqueName: \"kubernetes.io/projected/61722c5a-7be5-40ef-a95b-4c8300fb4e98-kube-api-access-l5xwx\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261201 kubelet[3042]: I1216 13:12:59.261211 3042 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-cgroup\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261201 kubelet[3042]: I1216 13:12:59.261220 3042 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-config-path\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261458 kubelet[3042]: I1216 13:12:59.261228 3042 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zmp6n\" (UniqueName: \"kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-kube-api-access-zmp6n\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261458 kubelet[3042]: I1216 13:12:59.261237 3042 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-bpf-maps\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261458 kubelet[3042]: I1216 13:12:59.261245 3042 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-lib-modules\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261458 kubelet[3042]: I1216 13:12:59.261253 3042 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb0b880f-daf0-41ea-87f4-0c02499c98ed-hubble-tls\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261458 
kubelet[3042]: I1216 13:12:59.261261 3042 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-cilium-run\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261458 kubelet[3042]: I1216 13:12:59.261269 3042 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-host-proc-sys-kernel\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261458 kubelet[3042]: I1216 13:12:59.261278 3042 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb0b880f-daf0-41ea-87f4-0c02499c98ed-xtables-lock\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.261458 kubelet[3042]: I1216 13:12:59.261286 3042 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61722c5a-7be5-40ef-a95b-4c8300fb4e98-cilium-config-path\") on node \"ci-4459-2-2-3-ab2e4a938e\" DevicePath \"\"" Dec 16 13:12:59.414331 systemd[1]: Removed slice kubepods-besteffort-pod61722c5a_7be5_40ef_a95b_4c8300fb4e98.slice - libcontainer container kubepods-besteffort-pod61722c5a_7be5_40ef_a95b_4c8300fb4e98.slice. Dec 16 13:12:59.419172 systemd[1]: Removed slice kubepods-burstable-podfb0b880f_daf0_41ea_87f4_0c02499c98ed.slice - libcontainer container kubepods-burstable-podfb0b880f_daf0_41ea_87f4_0c02499c98ed.slice. Dec 16 13:12:59.419483 systemd[1]: kubepods-burstable-podfb0b880f_daf0_41ea_87f4_0c02499c98ed.slice: Consumed 7.157s CPU time, 136.5M memory peak, 128K read from disk, 13.3M written to disk. Dec 16 13:12:59.675091 kubelet[3042]: E1216 13:12:59.674929 3042 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 13:12:59.910738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f2e5ca368c7bb7c616183ad939d28c178dcc7e73efdfb7e4a37c9fdb38c3f35-shm.mount: Deactivated successfully. Dec 16 13:12:59.910907 systemd[1]: var-lib-kubelet-pods-fb0b880f\x2ddaf0\x2d41ea\x2d87f4\x2d0c02499c98ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzmp6n.mount: Deactivated successfully. Dec 16 13:12:59.911012 systemd[1]: var-lib-kubelet-pods-61722c5a\x2d7be5\x2d40ef\x2da95b\x2d4c8300fb4e98-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5xwx.mount: Deactivated successfully. Dec 16 13:12:59.911109 systemd[1]: var-lib-kubelet-pods-fb0b880f\x2ddaf0\x2d41ea\x2d87f4\x2d0c02499c98ed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 13:12:59.911224 systemd[1]: var-lib-kubelet-pods-fb0b880f\x2ddaf0\x2d41ea\x2d87f4\x2d0c02499c98ed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
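At this point both pod sandboxes have been torn down, the cilium-operator and cilium-agent containers removed, and their host-path, secret and projected volumes unmounted, which is why systemd deactivates the matching var-lib-kubelet-pods-*.mount units; the kubelet's orphaned-volume cleanup just below then deletes the per-pod directories. A hedged way to double-check the cleanup from the node (standard crictl/ls usage; the pod UID is the one from this log):

    # no sandboxes or containers should remain for the removed pods
    crictl pods
    crictl ps -a
    # the per-pod volume directory disappears once the orphan cleanup has run
    ls /var/lib/kubelet/pods/fb0b880f-daf0-41ea-87f4-0c02499c98ed 2>/dev/null || echo "cleaned up"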
Dec 16 13:13:00.601645 kubelet[3042]: I1216 13:13:00.601503 3042 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61722c5a-7be5-40ef-a95b-4c8300fb4e98" path="/var/lib/kubelet/pods/61722c5a-7be5-40ef-a95b-4c8300fb4e98/volumes" Dec 16 13:13:00.602691 kubelet[3042]: I1216 13:13:00.602639 3042 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb0b880f-daf0-41ea-87f4-0c02499c98ed" path="/var/lib/kubelet/pods/fb0b880f-daf0-41ea-87f4-0c02499c98ed/volumes" Dec 16 13:13:00.996100 sshd[4717]: Connection closed by 147.75.109.163 port 33210 Dec 16 13:13:00.996612 sshd-session[4714]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:01.002312 systemd[1]: sshd@21-10.0.21.22:22-147.75.109.163:33210.service: Deactivated successfully. Dec 16 13:13:01.004653 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:13:01.004920 systemd[1]: session-22.scope: Consumed 1.550s CPU time, 27.8M memory peak. Dec 16 13:13:01.005846 systemd-logind[1751]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:13:01.007079 systemd-logind[1751]: Removed session 22. Dec 16 13:13:01.172151 systemd[1]: Started sshd@22-10.0.21.22:22-147.75.109.163:33214.service - OpenSSH per-connection server daemon (147.75.109.163:33214). Dec 16 13:13:02.197138 sshd[4863]: Accepted publickey for core from 147.75.109.163 port 33214 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:13:02.198384 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:02.202424 systemd-logind[1751]: New session 23 of user core. Dec 16 13:13:02.210669 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:13:03.285932 kubelet[3042]: I1216 13:13:03.285894 3042 memory_manager.go:355] "RemoveStaleState removing state" podUID="61722c5a-7be5-40ef-a95b-4c8300fb4e98" containerName="cilium-operator" Dec 16 13:13:03.285932 kubelet[3042]: I1216 13:13:03.285918 3042 memory_manager.go:355] "RemoveStaleState removing state" podUID="fb0b880f-daf0-41ea-87f4-0c02499c98ed" containerName="cilium-agent" Dec 16 13:13:03.297923 systemd[1]: Created slice kubepods-burstable-pod4843cbc3_22fb_48b1_8b68_c13fdd1a9c00.slice - libcontainer container kubepods-burstable-pod4843cbc3_22fb_48b1_8b68_c13fdd1a9c00.slice. 
Dec 16 13:13:03.393221 kubelet[3042]: I1216 13:13:03.393148 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-bpf-maps\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393221 kubelet[3042]: I1216 13:13:03.393195 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-cilium-ipsec-secrets\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393221 kubelet[3042]: I1216 13:13:03.393211 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-cilium-cgroup\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393221 kubelet[3042]: I1216 13:13:03.393227 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-cilium-run\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393221 kubelet[3042]: I1216 13:13:03.393242 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-lib-modules\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393486 kubelet[3042]: I1216 13:13:03.393256 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-host-proc-sys-net\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393486 kubelet[3042]: I1216 13:13:03.393270 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-etc-cni-netd\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393486 kubelet[3042]: I1216 13:13:03.393329 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-xtables-lock\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393486 kubelet[3042]: I1216 13:13:03.393360 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg268\" (UniqueName: \"kubernetes.io/projected/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-kube-api-access-hg268\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393486 kubelet[3042]: I1216 13:13:03.393385 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-hostproc\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393486 kubelet[3042]: I1216 13:13:03.393402 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-host-proc-sys-kernel\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393634 kubelet[3042]: I1216 13:13:03.393417 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-cilium-config-path\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393634 kubelet[3042]: I1216 13:13:03.393431 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-hubble-tls\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393634 kubelet[3042]: I1216 13:13:03.393444 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-clustermesh-secrets\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.393634 kubelet[3042]: I1216 13:13:03.393458 3042 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4843cbc3-22fb-48b1-8b68-c13fdd1a9c00-cni-path\") pod \"cilium-zrs5w\" (UID: \"4843cbc3-22fb-48b1-8b68-c13fdd1a9c00\") " pod="kube-system/cilium-zrs5w" Dec 16 13:13:03.506489 sshd[4866]: Connection closed by 147.75.109.163 port 33214 Dec 16 13:13:03.506886 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:03.511690 systemd[1]: sshd@22-10.0.21.22:22-147.75.109.163:33214.service: Deactivated successfully. Dec 16 13:13:03.513594 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:13:03.514305 systemd-logind[1751]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:13:03.515273 systemd-logind[1751]: Removed session 23. Dec 16 13:13:03.600886 containerd[1770]: time="2025-12-16T13:13:03.600752853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zrs5w,Uid:4843cbc3-22fb-48b1-8b68-c13fdd1a9c00,Namespace:kube-system,Attempt:0,}" Dec 16 13:13:03.622546 containerd[1770]: time="2025-12-16T13:13:03.622243191Z" level=info msg="connecting to shim f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087" address="unix:///run/containerd/s/1433edd2467080cc6dfadcd95af6d0335a45491114e972c566047aba1ffe54e7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:13:03.648767 systemd[1]: Started cri-containerd-f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087.scope - libcontainer container f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087. Dec 16 13:13:03.668919 systemd[1]: Started sshd@23-10.0.21.22:22-147.75.109.163:40516.service - OpenSSH per-connection server daemon (147.75.109.163:40516). 
Dec 16 13:13:03.673783 containerd[1770]: time="2025-12-16T13:13:03.673752563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zrs5w,Uid:4843cbc3-22fb-48b1-8b68-c13fdd1a9c00,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\"" Dec 16 13:13:03.676798 containerd[1770]: time="2025-12-16T13:13:03.676742770Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:13:03.685550 containerd[1770]: time="2025-12-16T13:13:03.685487282Z" level=info msg="Container 94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:03.695823 containerd[1770]: time="2025-12-16T13:13:03.695781420Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd\"" Dec 16 13:13:03.696628 containerd[1770]: time="2025-12-16T13:13:03.696608755Z" level=info msg="StartContainer for \"94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd\"" Dec 16 13:13:03.697462 containerd[1770]: time="2025-12-16T13:13:03.697411065Z" level=info msg="connecting to shim 94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd" address="unix:///run/containerd/s/1433edd2467080cc6dfadcd95af6d0335a45491114e972c566047aba1ffe54e7" protocol=ttrpc version=3 Dec 16 13:13:03.723828 systemd[1]: Started cri-containerd-94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd.scope - libcontainer container 94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd. Dec 16 13:13:03.751461 containerd[1770]: time="2025-12-16T13:13:03.751409125Z" level=info msg="StartContainer for \"94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd\" returns successfully" Dec 16 13:13:03.757395 systemd[1]: cri-containerd-94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd.scope: Deactivated successfully. 
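The CreateContainer/StartContainer pair above runs the mount-cgroup init container inside the freshly created sandbox. The same create -> task -> start -> wait lifecycle can be driven through containerd's native Go client; a hedged sketch follows (the image reference and container ID are placeholders, not values from this log):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI-managed containers above live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Placeholder image; the real init containers use the Cilium image.
	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer -> StartContainer in the log corresponds to
	// container + task creation followed by task.Start here.
	container, err := client.NewContainer(ctx, "example-init",
		containerd.WithNewSnapshot("example-init-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image), oci.WithProcessArgs("true")),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // subscribe to the exit event before starting
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh
	code, _, _ := status.Result()
	log.Printf("init container exited with code %d", code)
}
```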
Dec 16 13:13:03.759684 containerd[1770]: time="2025-12-16T13:13:03.759641142Z" level=info msg="received container exit event container_id:\"94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd\" id:\"94e2ab114f4eb16ed394d4ee3af3cf5d3c9b35b0baaf2bc15457201cd32444fd\" pid:4947 exited_at:{seconds:1765890783 nanos:759344040}" Dec 16 13:13:04.127085 containerd[1770]: time="2025-12-16T13:13:04.126481177Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:13:04.138682 containerd[1770]: time="2025-12-16T13:13:04.138597579Z" level=info msg="Container df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:04.147516 containerd[1770]: time="2025-12-16T13:13:04.147451519Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b\"" Dec 16 13:13:04.149035 containerd[1770]: time="2025-12-16T13:13:04.148972106Z" level=info msg="StartContainer for \"df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b\"" Dec 16 13:13:04.150100 containerd[1770]: time="2025-12-16T13:13:04.149995781Z" level=info msg="connecting to shim df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b" address="unix:///run/containerd/s/1433edd2467080cc6dfadcd95af6d0335a45491114e972c566047aba1ffe54e7" protocol=ttrpc version=3 Dec 16 13:13:04.194860 systemd[1]: Started cri-containerd-df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b.scope - libcontainer container df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b. Dec 16 13:13:04.236212 containerd[1770]: time="2025-12-16T13:13:04.236130918Z" level=info msg="StartContainer for \"df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b\" returns successfully" Dec 16 13:13:04.243365 systemd[1]: cri-containerd-df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b.scope: Deactivated successfully. Dec 16 13:13:04.244095 containerd[1770]: time="2025-12-16T13:13:04.244017572Z" level=info msg="received container exit event container_id:\"df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b\" id:\"df7792d432095eb38c9153ae142d670f7cd94034cadf5c582a4f13e4514c626b\" pid:4995 exited_at:{seconds:1765890784 nanos:243807099}" Dec 16 13:13:04.656968 sshd[4931]: Accepted publickey for core from 147.75.109.163 port 40516 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:13:04.659283 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:04.665071 systemd-logind[1751]: New session 24 of user core. Dec 16 13:13:04.675839 kubelet[3042]: E1216 13:13:04.675770 3042 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 13:13:04.688767 systemd[1]: Started session-24.scope - Session 24 of User core. 
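The kubelet error near the end of the previous block ("Container runtime network not ready ... cni plugin not initialized") is what later flips the node's Ready condition to False. A small client-go sketch for reading that condition, assuming in-cluster credentials and the node name that appears further down in this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config; outside a pod you would build the config via clientcmd instead.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Node name taken from the "Node became not ready" entry below; adjust as needed.
	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"ci-4459-2-2-3-ab2e4a938e", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// While the CNI plugin is not initialized this prints
			// Ready=False with reason KubeletNotReady.
			fmt.Printf("Ready=%s reason=%s message=%q\n",
				cond.Status, cond.Reason, cond.Message)
		}
	}
}
```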
Dec 16 13:13:05.133963 containerd[1770]: time="2025-12-16T13:13:05.133721739Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:13:05.154620 containerd[1770]: time="2025-12-16T13:13:05.154354348Z" level=info msg="Container 2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:05.157093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631180944.mount: Deactivated successfully. Dec 16 13:13:05.168571 containerd[1770]: time="2025-12-16T13:13:05.168491229Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e\"" Dec 16 13:13:05.169144 containerd[1770]: time="2025-12-16T13:13:05.169080890Z" level=info msg="StartContainer for \"2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e\"" Dec 16 13:13:05.172471 containerd[1770]: time="2025-12-16T13:13:05.172272096Z" level=info msg="connecting to shim 2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e" address="unix:///run/containerd/s/1433edd2467080cc6dfadcd95af6d0335a45491114e972c566047aba1ffe54e7" protocol=ttrpc version=3 Dec 16 13:13:05.196743 systemd[1]: Started cri-containerd-2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e.scope - libcontainer container 2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e. Dec 16 13:13:05.283366 containerd[1770]: time="2025-12-16T13:13:05.283190420Z" level=info msg="StartContainer for \"2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e\" returns successfully" Dec 16 13:13:05.283477 systemd[1]: cri-containerd-2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e.scope: Deactivated successfully. Dec 16 13:13:05.284465 containerd[1770]: time="2025-12-16T13:13:05.284413670Z" level=info msg="received container exit event container_id:\"2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e\" id:\"2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e\" pid:5042 exited_at:{seconds:1765890785 nanos:284254494}" Dec 16 13:13:05.304327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a490d0c0549e4b3174e065e3209748214bb2e7c7fdf96f8f174f3f8ca8f538e-rootfs.mount: Deactivated successfully. Dec 16 13:13:05.329720 sshd[5027]: Connection closed by 147.75.109.163 port 40516 Dec 16 13:13:05.330379 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:05.334005 systemd[1]: sshd@23-10.0.21.22:22-147.75.109.163:40516.service: Deactivated successfully. Dec 16 13:13:05.335499 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:13:05.336120 systemd-logind[1751]: Session 24 logged out. Waiting for processes to exit. Dec 16 13:13:05.336969 systemd-logind[1751]: Removed session 24. Dec 16 13:13:05.506000 systemd[1]: Started sshd@24-10.0.21.22:22-147.75.109.163:40532.service - OpenSSH per-connection server daemon (147.75.109.163:40532). 
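The mount-bpf-fs init container started above conventionally makes sure a bpffs instance is mounted at /sys/fs/bpf so Cilium's BPF maps survive agent restarts. The log does not show its internals, so the following is only a sketch of that typical step (golang.org/x/sys/unix, must run as root):

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf"

	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}

	// If the target already carries a bpffs, there is nothing to do.
	var st unix.Statfs_t
	if err := unix.Statfs(target, &st); err != nil {
		log.Fatal(err)
	}
	if st.Type == unix.BPF_FS_MAGIC {
		log.Println("bpffs already mounted at", target)
		return
	}

	// Otherwise mount a fresh bpffs instance.
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
	log.Println("mounted bpffs at", target)
}
```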
Dec 16 13:13:06.137061 containerd[1770]: time="2025-12-16T13:13:06.136976739Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:13:06.146578 containerd[1770]: time="2025-12-16T13:13:06.146015873Z" level=info msg="Container 26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:06.157160 containerd[1770]: time="2025-12-16T13:13:06.157124109Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0\"" Dec 16 13:13:06.157792 containerd[1770]: time="2025-12-16T13:13:06.157772513Z" level=info msg="StartContainer for \"26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0\"" Dec 16 13:13:06.158568 containerd[1770]: time="2025-12-16T13:13:06.158549059Z" level=info msg="connecting to shim 26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0" address="unix:///run/containerd/s/1433edd2467080cc6dfadcd95af6d0335a45491114e972c566047aba1ffe54e7" protocol=ttrpc version=3 Dec 16 13:13:06.183719 systemd[1]: Started cri-containerd-26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0.scope - libcontainer container 26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0. Dec 16 13:13:06.210742 systemd[1]: cri-containerd-26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0.scope: Deactivated successfully. Dec 16 13:13:06.213434 containerd[1770]: time="2025-12-16T13:13:06.213312879Z" level=info msg="received container exit event container_id:\"26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0\" id:\"26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0\" pid:5089 exited_at:{seconds:1765890786 nanos:210953550}" Dec 16 13:13:06.220561 containerd[1770]: time="2025-12-16T13:13:06.220452075Z" level=info msg="StartContainer for \"26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0\" returns successfully" Dec 16 13:13:06.231023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26e157be88b779db24e0c814dfea85d8262de5f8b96ff03dc3e51b87280c31a0-rootfs.mount: Deactivated successfully. Dec 16 13:13:06.513020 sshd[5073]: Accepted publickey for core from 147.75.109.163 port 40532 ssh2: RSA SHA256:cQMxipPJJowRbk5dGSaUREuCPMqg33hAu2Zl+Athpig Dec 16 13:13:06.514879 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:06.521828 systemd-logind[1751]: New session 25 of user core. Dec 16 13:13:06.535825 systemd[1]: Started session-25.scope - Session 25 of User core. 
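The "received container exit event" entries report exited_at as a raw seconds/nanos pair (1765890786/210953550 for clean-cilium-state above). Converting it to a readable timestamp:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values copied from the clean-cilium-state exit event above.
	const seconds, nanos = 1765890786, 210953550

	exitedAt := time.Unix(seconds, nanos).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-12-16T13:13:06.21095355Z
}
```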
Dec 16 13:13:07.151726 containerd[1770]: time="2025-12-16T13:13:07.151641766Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 13:13:07.164490 containerd[1770]: time="2025-12-16T13:13:07.164422511Z" level=info msg="Container 4d24376c990f16c335322d018dedfec34bb23a8b5d1806612f669745c20f6207: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:13:07.175488 containerd[1770]: time="2025-12-16T13:13:07.175433752Z" level=info msg="CreateContainer within sandbox \"f7e3e0252ccdedfd1e559f2d6fc459f4a919625a25969bfbb305e5215cda5087\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d24376c990f16c335322d018dedfec34bb23a8b5d1806612f669745c20f6207\"" Dec 16 13:13:07.176083 containerd[1770]: time="2025-12-16T13:13:07.176049281Z" level=info msg="StartContainer for \"4d24376c990f16c335322d018dedfec34bb23a8b5d1806612f669745c20f6207\"" Dec 16 13:13:07.177338 containerd[1770]: time="2025-12-16T13:13:07.177298692Z" level=info msg="connecting to shim 4d24376c990f16c335322d018dedfec34bb23a8b5d1806612f669745c20f6207" address="unix:///run/containerd/s/1433edd2467080cc6dfadcd95af6d0335a45491114e972c566047aba1ffe54e7" protocol=ttrpc version=3 Dec 16 13:13:07.207761 systemd[1]: Started cri-containerd-4d24376c990f16c335322d018dedfec34bb23a8b5d1806612f669745c20f6207.scope - libcontainer container 4d24376c990f16c335322d018dedfec34bb23a8b5d1806612f669745c20f6207. Dec 16 13:13:07.253227 containerd[1770]: time="2025-12-16T13:13:07.253119937Z" level=info msg="StartContainer for \"4d24376c990f16c335322d018dedfec34bb23a8b5d1806612f669745c20f6207\" returns successfully" Dec 16 13:13:07.525556 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256)) Dec 16 13:13:08.633681 kubelet[3042]: I1216 13:13:08.633588 3042 setters.go:602] "Node became not ready" node="ci-4459-2-2-3-ab2e4a938e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T13:13:08Z","lastTransitionTime":"2025-12-16T13:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 16 13:13:09.140587 update_engine[1752]: I20251216 13:13:09.140496 1752 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:13:09.141400 update_engine[1752]: I20251216 13:13:09.141006 1752 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:13:09.141400 update_engine[1752]: I20251216 13:13:09.141353 1752 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:13:09.147817 update_engine[1752]: E20251216 13:13:09.147750 1752 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 13:13:09.147967 update_engine[1752]: I20251216 13:13:09.147846 1752 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 13:13:09.147967 update_engine[1752]: I20251216 13:13:09.147855 1752 omaha_request_action.cc:617] Omaha request response: Dec 16 13:13:09.147967 update_engine[1752]: E20251216 13:13:09.147941 1752 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 16 13:13:09.147967 update_engine[1752]: I20251216 13:13:09.147962 1752 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Dec 16 13:13:09.147967 update_engine[1752]: I20251216 13:13:09.147969 1752 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:13:09.148147 update_engine[1752]: I20251216 13:13:09.147974 1752 update_attempter.cc:306] Processing Done. Dec 16 13:13:09.148147 update_engine[1752]: E20251216 13:13:09.147987 1752 update_attempter.cc:619] Update failed. Dec 16 13:13:09.148147 update_engine[1752]: I20251216 13:13:09.147992 1752 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 16 13:13:09.148147 update_engine[1752]: I20251216 13:13:09.147997 1752 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 16 13:13:09.148147 update_engine[1752]: I20251216 13:13:09.148002 1752 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 16 13:13:09.148147 update_engine[1752]: I20251216 13:13:09.148077 1752 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:13:09.148147 update_engine[1752]: I20251216 13:13:09.148105 1752 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:13:09.148147 update_engine[1752]: I20251216 13:13:09.148110 1752 omaha_request_action.cc:272] Request: Dec 16 13:13:09.148147 update_engine[1752]: Dec 16 13:13:09.148147 update_engine[1752]: Dec 16 13:13:09.148147 update_engine[1752]: Dec 16 13:13:09.148147 update_engine[1752]: Dec 16 13:13:09.148147 update_engine[1752]: Dec 16 13:13:09.148147 update_engine[1752]: Dec 16 13:13:09.148147 update_engine[1752]: I20251216 13:13:09.148116 1752 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:13:09.148147 update_engine[1752]: I20251216 13:13:09.148136 1752 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:13:09.148656 update_engine[1752]: I20251216 13:13:09.148397 1752 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 16 13:13:09.148694 locksmithd[1802]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 16 13:13:09.154998 update_engine[1752]: E20251216 13:13:09.154933 1752 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 13:13:09.155128 update_engine[1752]: I20251216 13:13:09.155014 1752 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 16 13:13:09.155128 update_engine[1752]: I20251216 13:13:09.155023 1752 omaha_request_action.cc:617] Omaha request response: Dec 16 13:13:09.155128 update_engine[1752]: I20251216 13:13:09.155031 1752 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:13:09.155128 update_engine[1752]: I20251216 13:13:09.155036 1752 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 16 13:13:09.155128 update_engine[1752]: I20251216 13:13:09.155040 1752 update_attempter.cc:306] Processing Done. Dec 16 13:13:09.155128 update_engine[1752]: I20251216 13:13:09.155047 1752 update_attempter.cc:310] Error event sent. 
Dec 16 13:13:09.155128 update_engine[1752]: I20251216 13:13:09.155056 1752 update_check_scheduler.cc:74] Next update check in 44m27s Dec 16 13:13:09.155989 locksmithd[1802]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 16 13:13:10.947761 systemd-networkd[1584]: lxc_health: Link UP Dec 16 13:13:10.948205 systemd-networkd[1584]: lxc_health: Gained carrier Dec 16 13:13:11.623376 kubelet[3042]: I1216 13:13:11.623298 3042 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zrs5w" podStartSLOduration=8.623269351 podStartE2EDuration="8.623269351s" podCreationTimestamp="2025-12-16 13:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:13:08.182955493 +0000 UTC m=+203.673443165" watchObservedRunningTime="2025-12-16 13:13:11.623269351 +0000 UTC m=+207.113756971" Dec 16 13:13:12.868740 systemd-networkd[1584]: lxc_health: Gained IPv6LL Dec 16 13:13:17.834260 sshd[5113]: Connection closed by 147.75.109.163 port 40532 Dec 16 13:13:17.834737 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Dec 16 13:13:17.838583 systemd[1]: sshd@24-10.0.21.22:22-147.75.109.163:40532.service: Deactivated successfully. Dec 16 13:13:17.841295 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 13:13:17.846491 systemd-logind[1751]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:13:17.848216 systemd-logind[1751]: Removed session 25.
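The pod_startup_latency_tracker entry above reports podStartSLOduration=8.623269351s for cilium-zrs5w: the gap between podCreationTimestamp (13:13:03) and observedRunningTime (13:13:11.623269351), with no image-pull contribution since both pull timestamps are zero. The arithmetic can be reproduced directly from the logged values:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layout matching the timestamp format used in the log entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-12-16 13:13:03 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-12-16 13:13:11.623269351 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	// Matches podStartSLOduration=8.623269351 reported by kubelet.
	fmt.Println(running.Sub(created)) // 8.623269351s
}
```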