Jan 23 17:56:25.766085 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 23 17:56:25.766106 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Jan 23 16:10:02 -00 2026 Jan 23 17:56:25.766116 kernel: KASLR enabled Jan 23 17:56:25.766121 kernel: efi: EFI v2.7 by EDK II Jan 23 17:56:25.766127 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438351218 Jan 23 17:56:25.766132 kernel: random: crng init done Jan 23 17:56:25.766139 kernel: secureboot: Secure boot disabled Jan 23 17:56:25.766145 kernel: ACPI: Early table checksum verification disabled Jan 23 17:56:25.766151 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS ) Jan 23 17:56:25.766157 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013) Jan 23 17:56:25.766164 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766170 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766176 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766182 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766189 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766195 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766203 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766209 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766215 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766221 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:56:25.766227 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013) Jan 23 17:56:25.766234 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 23 17:56:25.766240 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 23 17:56:25.766246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff] Jan 23 17:56:25.766252 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff] Jan 23 17:56:25.766258 kernel: Zone ranges: Jan 23 17:56:25.766265 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 23 17:56:25.766271 kernel: DMA32 empty Jan 23 17:56:25.766277 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff] Jan 23 17:56:25.766283 kernel: Device empty Jan 23 17:56:25.766289 kernel: Movable zone start for each node Jan 23 17:56:25.766295 kernel: Early memory node ranges Jan 23 17:56:25.766301 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff] Jan 23 17:56:25.766308 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff] Jan 23 17:56:25.766314 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff] Jan 23 17:56:25.766320 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff] Jan 23 17:56:25.766326 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff] Jan 23 17:56:25.766332 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff] Jan 23 17:56:25.766340 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Jan 23 17:56:25.766346 kernel: psci: probing for conduit method from ACPI. 
Jan 23 17:56:25.766354 kernel: psci: PSCIv1.3 detected in firmware. Jan 23 17:56:25.766361 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 17:56:25.766368 kernel: psci: Trusted OS migration not required Jan 23 17:56:25.766376 kernel: psci: SMC Calling Convention v1.1 Jan 23 17:56:25.766382 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 23 17:56:25.766389 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 23 17:56:25.766395 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 23 17:56:25.766402 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0 Jan 23 17:56:25.766408 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0 Jan 23 17:56:25.766415 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 23 17:56:25.766421 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 23 17:56:25.766428 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 23 17:56:25.766434 kernel: Detected PIPT I-cache on CPU0 Jan 23 17:56:25.766441 kernel: CPU features: detected: GIC system register CPU interface Jan 23 17:56:25.766447 kernel: CPU features: detected: Spectre-v4 Jan 23 17:56:25.766455 kernel: CPU features: detected: Spectre-BHB Jan 23 17:56:25.766462 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 23 17:56:25.766468 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 23 17:56:25.766475 kernel: CPU features: detected: ARM erratum 1418040 Jan 23 17:56:25.766481 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 23 17:56:25.766487 kernel: alternatives: applying boot alternatives Jan 23 17:56:25.766497 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d Jan 23 17:56:25.766504 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 23 17:56:25.766512 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 23 17:56:25.766518 kernel: Fallback order for Node 0: 0 Jan 23 17:56:25.766527 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304 Jan 23 17:56:25.766534 kernel: Policy zone: Normal Jan 23 17:56:25.766541 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 17:56:25.766547 kernel: software IO TLB: area num 4. Jan 23 17:56:25.766554 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Jan 23 17:56:25.766560 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 23 17:56:25.766567 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 17:56:25.766574 kernel: rcu: RCU event tracing is enabled. Jan 23 17:56:25.766581 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 23 17:56:25.766587 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 17:56:25.766594 kernel: Tracing variant of Tasks RCU enabled. Jan 23 17:56:25.766600 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 17:56:25.766608 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 23 17:56:25.766615 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jan 23 17:56:25.766622 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 17:56:25.766628 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 17:56:25.766635 kernel: GICv3: 256 SPIs implemented Jan 23 17:56:25.766641 kernel: GICv3: 0 Extended SPIs implemented Jan 23 17:56:25.766648 kernel: Root IRQ handler: gic_handle_irq Jan 23 17:56:25.766654 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 23 17:56:25.766661 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 23 17:56:25.766667 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 23 17:56:25.766674 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 23 17:56:25.766680 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1) Jan 23 17:56:25.766688 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1) Jan 23 17:56:25.766700 kernel: GICv3: using LPI property table @0x0000000100130000 Jan 23 17:56:25.766708 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000 Jan 23 17:56:25.766715 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 17:56:25.766721 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 17:56:25.766728 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 23 17:56:25.766734 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 23 17:56:25.766741 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 23 17:56:25.766748 kernel: arm-pv: using stolen time PV Jan 23 17:56:25.766754 kernel: Console: colour dummy device 80x25 Jan 23 17:56:25.766763 kernel: ACPI: Core revision 20240827 Jan 23 17:56:25.766770 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 23 17:56:25.766777 kernel: pid_max: default: 32768 minimum: 301 Jan 23 17:56:25.766783 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 17:56:25.766790 kernel: landlock: Up and running. Jan 23 17:56:25.766796 kernel: SELinux: Initializing. Jan 23 17:56:25.766803 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 17:56:25.766810 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 17:56:25.766816 kernel: rcu: Hierarchical SRCU implementation. Jan 23 17:56:25.766823 kernel: rcu: Max phase no-delay instances is 400. Jan 23 17:56:25.766831 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 17:56:25.766841 kernel: Remapping and enabling EFI services. Jan 23 17:56:25.766848 kernel: smp: Bringing up secondary CPUs ... 
Jan 23 17:56:25.766854 kernel: Detected PIPT I-cache on CPU1 Jan 23 17:56:25.766861 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 23 17:56:25.766868 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000 Jan 23 17:56:25.766875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 17:56:25.766882 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 23 17:56:25.766888 kernel: Detected PIPT I-cache on CPU2 Jan 23 17:56:25.766922 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 23 17:56:25.766930 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000 Jan 23 17:56:25.766937 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 17:56:25.766946 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 23 17:56:25.766953 kernel: Detected PIPT I-cache on CPU3 Jan 23 17:56:25.766960 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 23 17:56:25.766967 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000 Jan 23 17:56:25.766974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 17:56:25.766982 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 23 17:56:25.766989 kernel: smp: Brought up 1 node, 4 CPUs Jan 23 17:56:25.766996 kernel: SMP: Total of 4 processors activated. Jan 23 17:56:25.767003 kernel: CPU: All CPU(s) started at EL1 Jan 23 17:56:25.767010 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 17:56:25.767017 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 17:56:25.767024 kernel: CPU features: detected: Common not Private translations Jan 23 17:56:25.767031 kernel: CPU features: detected: CRC32 instructions Jan 23 17:56:25.767038 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 23 17:56:25.767047 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 17:56:25.767054 kernel: CPU features: detected: LSE atomic instructions Jan 23 17:56:25.767061 kernel: CPU features: detected: Privileged Access Never Jan 23 17:56:25.767068 kernel: CPU features: detected: RAS Extension Support Jan 23 17:56:25.767075 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 23 17:56:25.767082 kernel: alternatives: applying system-wide alternatives Jan 23 17:56:25.767089 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jan 23 17:56:25.767097 kernel: Memory: 16297360K/16777216K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 457072K reserved, 16384K cma-reserved) Jan 23 17:56:25.767104 kernel: devtmpfs: initialized Jan 23 17:56:25.767112 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 17:56:25.767119 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 23 17:56:25.767126 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 17:56:25.767133 kernel: 0 pages in range for non-PLT usage Jan 23 17:56:25.767140 kernel: 508400 pages in range for PLT usage Jan 23 17:56:25.767147 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 17:56:25.767154 kernel: SMBIOS 3.0.0 present. 
Jan 23 17:56:25.767161 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jan 23 17:56:25.767168 kernel: DMI: Memory slots populated: 1/1 Jan 23 17:56:25.767177 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 17:56:25.767184 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations Jan 23 17:56:25.767191 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 17:56:25.767198 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 17:56:25.767205 kernel: audit: initializing netlink subsys (disabled) Jan 23 17:56:25.767213 kernel: audit: type=2000 audit(0.039:1): state=initialized audit_enabled=0 res=1 Jan 23 17:56:25.767220 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 17:56:25.767227 kernel: cpuidle: using governor menu Jan 23 17:56:25.767234 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 23 17:56:25.767243 kernel: ASID allocator initialised with 32768 entries Jan 23 17:56:25.767250 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 17:56:25.767257 kernel: Serial: AMBA PL011 UART driver Jan 23 17:56:25.767264 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 17:56:25.767272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 17:56:25.767279 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 17:56:25.767286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 17:56:25.767293 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 17:56:25.767300 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 17:56:25.767309 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 17:56:25.767316 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 17:56:25.767323 kernel: ACPI: Added _OSI(Module Device) Jan 23 17:56:25.767330 kernel: ACPI: Added _OSI(Processor Device) Jan 23 17:56:25.767337 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 17:56:25.767344 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 17:56:25.767351 kernel: ACPI: Interpreter enabled Jan 23 17:56:25.767358 kernel: ACPI: Using GIC for interrupt routing Jan 23 17:56:25.767365 kernel: ACPI: MCFG table detected, 1 entries Jan 23 17:56:25.767374 kernel: ACPI: CPU0 has been hot-added Jan 23 17:56:25.767380 kernel: ACPI: CPU1 has been hot-added Jan 23 17:56:25.767400 kernel: ACPI: CPU2 has been hot-added Jan 23 17:56:25.767407 kernel: ACPI: CPU3 has been hot-added Jan 23 17:56:25.767415 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 23 17:56:25.767422 kernel: printk: legacy console [ttyAMA0] enabled Jan 23 17:56:25.767429 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 17:56:25.767562 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 17:56:25.767640 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 23 17:56:25.767701 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 23 17:56:25.767759 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 23 17:56:25.767817 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 23 17:56:25.767826 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 23 17:56:25.767833 
kernel: PCI host bridge to bus 0000:00 Jan 23 17:56:25.767940 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 23 17:56:25.768003 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 23 17:56:25.768062 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 23 17:56:25.768122 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 17:56:25.768203 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jan 23 17:56:25.768275 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.768337 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff] Jan 23 17:56:25.768397 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 23 17:56:25.768460 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff] Jan 23 17:56:25.768520 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Jan 23 17:56:25.768601 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.768670 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff] Jan 23 17:56:25.768740 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 23 17:56:25.768802 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff] Jan 23 17:56:25.768881 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.768974 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff] Jan 23 17:56:25.769047 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Jan 23 17:56:25.769111 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff] Jan 23 17:56:25.769172 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Jan 23 17:56:25.769239 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.769300 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff] Jan 23 17:56:25.769361 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Jan 23 17:56:25.769426 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Jan 23 17:56:25.769495 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.769565 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff] Jan 23 17:56:25.769634 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Jan 23 17:56:25.769702 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff] Jan 23 17:56:25.769763 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Jan 23 17:56:25.769831 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.769895 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff] Jan 23 17:56:25.769990 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Jan 23 17:56:25.770053 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff] Jan 23 17:56:25.770113 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Jan 23 17:56:25.770180 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.770241 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff] Jan 23 17:56:25.770300 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Jan 23 17:56:25.770372 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.770433 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff] Jan 23 17:56:25.770492 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Jan 23 
17:56:25.770559 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.770619 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff] Jan 23 17:56:25.770679 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Jan 23 17:56:25.770745 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.770808 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff] Jan 23 17:56:25.770868 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Jan 23 17:56:25.770964 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.771029 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff] Jan 23 17:56:25.771089 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Jan 23 17:56:25.771159 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.771219 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff] Jan 23 17:56:25.771280 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Jan 23 17:56:25.771347 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.771432 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff] Jan 23 17:56:25.771503 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Jan 23 17:56:25.771570 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.771634 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff] Jan 23 17:56:25.771694 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Jan 23 17:56:25.771760 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.771821 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff] Jan 23 17:56:25.771881 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Jan 23 17:56:25.771974 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.772038 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff] Jan 23 17:56:25.772102 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Jan 23 17:56:25.772168 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.772230 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff] Jan 23 17:56:25.772292 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Jan 23 17:56:25.772369 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.772438 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff] Jan 23 17:56:25.772507 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Jan 23 17:56:25.772582 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff] Jan 23 17:56:25.772655 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff] Jan 23 17:56:25.772726 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.772790 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff] Jan 23 17:56:25.772877 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Jan 23 17:56:25.772956 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff] Jan 23 17:56:25.773019 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff] Jan 23 17:56:25.773090 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.773150 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff] Jan 23 17:56:25.773210 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Jan 23 17:56:25.773270 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff] Jan 23 17:56:25.773329 kernel: pci 0000:00:03.3: bridge window [mem 
0x11a00000-0x11bfffff] Jan 23 17:56:25.773394 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.773455 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff] Jan 23 17:56:25.773517 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Jan 23 17:56:25.773576 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff] Jan 23 17:56:25.773636 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff] Jan 23 17:56:25.773703 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.773763 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff] Jan 23 17:56:25.773824 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Jan 23 17:56:25.773883 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff] Jan 23 17:56:25.773955 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff] Jan 23 17:56:25.774025 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.774085 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff] Jan 23 17:56:25.774145 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Jan 23 17:56:25.774204 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff] Jan 23 17:56:25.774264 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff] Jan 23 17:56:25.774337 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.774398 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff] Jan 23 17:56:25.774461 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Jan 23 17:56:25.774521 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff] Jan 23 17:56:25.774581 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff] Jan 23 17:56:25.774647 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.774707 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff] Jan 23 17:56:25.774779 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Jan 23 17:56:25.774840 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff] Jan 23 17:56:25.774926 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff] Jan 23 17:56:25.775002 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.775063 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff] Jan 23 17:56:25.775124 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Jan 23 17:56:25.775183 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff] Jan 23 17:56:25.775243 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff] Jan 23 17:56:25.775311 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.775376 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff] Jan 23 17:56:25.775452 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Jan 23 17:56:25.775514 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff] Jan 23 17:56:25.775573 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff] Jan 23 17:56:25.775646 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.775732 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff] Jan 23 17:56:25.775821 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Jan 23 17:56:25.775889 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff] Jan 23 17:56:25.775978 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff] Jan 23 17:56:25.776049 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 
17:56:25.776110 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff] Jan 23 17:56:25.776175 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Jan 23 17:56:25.776236 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff] Jan 23 17:56:25.776327 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff] Jan 23 17:56:25.776435 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.776498 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff] Jan 23 17:56:25.776574 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Jan 23 17:56:25.776639 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff] Jan 23 17:56:25.776699 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff] Jan 23 17:56:25.776770 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.776831 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff] Jan 23 17:56:25.776911 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Jan 23 17:56:25.776981 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff] Jan 23 17:56:25.777041 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff] Jan 23 17:56:25.777120 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.777214 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff] Jan 23 17:56:25.777293 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Jan 23 17:56:25.777354 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff] Jan 23 17:56:25.777414 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff] Jan 23 17:56:25.777507 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:56:25.777573 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff] Jan 23 17:56:25.777639 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Jan 23 17:56:25.777702 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff] Jan 23 17:56:25.777771 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff] Jan 23 17:56:25.777843 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 23 17:56:25.777929 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff] Jan 23 17:56:25.777998 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jan 23 17:56:25.778061 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 23 17:56:25.778159 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 23 17:56:25.778239 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit] Jan 23 17:56:25.778328 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 23 17:56:25.778392 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff] Jan 23 17:56:25.778454 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Jan 23 17:56:25.778529 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 23 17:56:25.778592 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Jan 23 17:56:25.778666 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 23 17:56:25.778748 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff] Jan 23 17:56:25.778844 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Jan 23 17:56:25.778935 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint Jan 23 17:56:25.779003 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff] Jan 23 17:56:25.779065 kernel: pci 
0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Jan 23 17:56:25.779134 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 23 17:56:25.779204 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 23 17:56:25.779265 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 23 17:56:25.779329 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 23 17:56:25.779437 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 23 17:56:25.779508 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 23 17:56:25.779588 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 23 17:56:25.779660 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 23 17:56:25.779747 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 23 17:56:25.779817 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 23 17:56:25.779880 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 23 17:56:25.779973 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 23 17:56:25.780058 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 23 17:56:25.780127 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 23 17:56:25.780190 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 23 17:56:25.780254 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 23 17:56:25.780316 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 23 17:56:25.780376 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 23 17:56:25.780440 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 23 17:56:25.780500 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000 Jan 23 17:56:25.780563 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000 Jan 23 17:56:25.780633 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 23 17:56:25.780697 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 23 17:56:25.780757 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 23 17:56:25.780821 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 23 17:56:25.780888 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 23 17:56:25.780964 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] 
add_size 200000 add_align 100000 Jan 23 17:56:25.781036 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 23 17:56:25.781098 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000 Jan 23 17:56:25.781159 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000 Jan 23 17:56:25.781224 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 23 17:56:25.781286 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 23 17:56:25.781347 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000 Jan 23 17:56:25.781413 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 23 17:56:25.781474 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000 Jan 23 17:56:25.781535 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000 Jan 23 17:56:25.781600 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 23 17:56:25.781661 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000 Jan 23 17:56:25.781730 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000 Jan 23 17:56:25.781796 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 23 17:56:25.781861 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Jan 23 17:56:25.781942 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Jan 23 17:56:25.782012 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 23 17:56:25.782075 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Jan 23 17:56:25.782137 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Jan 23 17:56:25.782202 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 23 17:56:25.782262 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Jan 23 17:56:25.782327 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Jan 23 17:56:25.782393 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 23 17:56:25.782454 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Jan 23 17:56:25.782515 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Jan 23 17:56:25.782580 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 23 17:56:25.782641 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Jan 23 17:56:25.782705 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 
Jan 23 17:56:25.782768 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 23 17:56:25.782829 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Jan 23 17:56:25.782890 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Jan 23 17:56:25.782981 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 23 17:56:25.783045 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Jan 23 17:56:25.783105 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Jan 23 17:56:25.783175 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 23 17:56:25.783238 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Jan 23 17:56:25.783298 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Jan 23 17:56:25.783363 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 23 17:56:25.783446 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Jan 23 17:56:25.783512 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Jan 23 17:56:25.783579 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 23 17:56:25.783646 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Jan 23 17:56:25.783709 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Jan 23 17:56:25.783791 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 23 17:56:25.783854 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Jan 23 17:56:25.783958 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Jan 23 17:56:25.784028 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 23 17:56:25.784096 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Jan 23 17:56:25.784157 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Jan 23 17:56:25.784221 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 23 17:56:25.784282 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Jan 23 17:56:25.784343 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Jan 23 17:56:25.784406 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 23 17:56:25.784467 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Jan 23 17:56:25.784529 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Jan 23 17:56:25.784601 kernel: pci 
0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 23 17:56:25.784664 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Jan 23 17:56:25.784728 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Jan 23 17:56:25.784792 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 23 17:56:25.784854 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Jan 23 17:56:25.784931 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Jan 23 17:56:25.785002 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 23 17:56:25.785064 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Jan 23 17:56:25.785126 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Jan 23 17:56:25.785207 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 23 17:56:25.785269 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Jan 23 17:56:25.785330 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Jan 23 17:56:25.785394 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 23 17:56:25.785457 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Jan 23 17:56:25.785518 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Jan 23 17:56:25.785582 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 23 17:56:25.785646 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Jan 23 17:56:25.785707 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Jan 23 17:56:25.785770 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 23 17:56:25.785831 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 23 17:56:25.785908 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 23 17:56:25.785994 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 23 17:56:25.786059 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 23 17:56:25.786121 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 23 17:56:25.786184 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 23 17:56:25.786246 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 23 17:56:25.786310 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 23 17:56:25.786377 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 23 17:56:25.786442 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 23 17:56:25.786520 kernel: pci 0000:00:01.5: bridge 
window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 23 17:56:25.786584 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 23 17:56:25.786647 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 23 17:56:25.786715 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 23 17:56:25.786776 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 23 17:56:25.786837 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 23 17:56:25.786913 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 23 17:56:25.786984 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Jan 23 17:56:25.787050 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: assigned Jan 23 17:56:25.787113 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Jan 23 17:56:25.787175 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Jan 23 17:56:25.787237 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Jan 23 17:56:25.787307 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Jan 23 17:56:25.787412 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Jan 23 17:56:25.787489 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Jan 23 17:56:25.787552 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Jan 23 17:56:25.787613 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Jan 23 17:56:25.787675 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Jan 23 17:56:25.787736 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Jan 23 17:56:25.787798 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Jan 23 17:56:25.787873 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Jan 23 17:56:25.787955 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Jan 23 17:56:25.788043 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Jan 23 17:56:25.788113 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Jan 23 17:56:25.788174 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Jan 23 17:56:25.788237 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Jan 23 17:56:25.788308 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Jan 23 17:56:25.788373 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Jan 23 17:56:25.788434 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Jan 23 17:56:25.788499 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Jan 23 17:56:25.788581 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Jan 23 17:56:25.788645 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Jan 23 17:56:25.788707 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Jan 23 17:56:25.788769 
kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Jan 23 17:56:25.788842 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Jan 23 17:56:25.788923 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Jan 23 17:56:25.788995 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Jan 23 17:56:25.789063 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Jan 23 17:56:25.789137 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Jan 23 17:56:25.789201 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Jan 23 17:56:25.789263 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Jan 23 17:56:25.789324 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Jan 23 17:56:25.789384 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Jan 23 17:56:25.789447 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Jan 23 17:56:25.789508 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Jan 23 17:56:25.789581 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff]: assigned Jan 23 17:56:25.789645 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Jan 23 17:56:25.789735 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Jan 23 17:56:25.789798 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Jan 23 17:56:25.789861 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Jan 23 17:56:25.789947 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Jan 23 17:56:25.790013 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Jan 23 17:56:25.790074 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Jan 23 17:56:25.790138 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Jan 23 17:56:25.790198 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Jan 23 17:56:25.790261 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Jan 23 17:56:25.790321 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Jan 23 17:56:25.790382 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Jan 23 17:56:25.790442 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Jan 23 17:56:25.790503 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Jan 23 17:56:25.790563 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Jan 23 17:56:25.790627 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Jan 23 17:56:25.790687 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Jan 23 17:56:25.790748 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Jan 23 17:56:25.790816 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Jan 23 17:56:25.790878 kernel: pci 0000:00:01.5: BAR 0 [mem 0x14205000-0x14205fff]: assigned Jan 23 17:56:25.790962 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Jan 23 17:56:25.791026 kernel: pci 0000:00:01.6: BAR 0 [mem 
0x14206000-0x14206fff]: assigned Jan 23 17:56:25.791088 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Jan 23 17:56:25.791152 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Jan 23 17:56:25.791213 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Jan 23 17:56:25.791275 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Jan 23 17:56:25.791335 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Jan 23 17:56:25.791412 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Jan 23 17:56:25.791477 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Jan 23 17:56:25.791539 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Jan 23 17:56:25.791610 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Jan 23 17:56:25.791677 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Jan 23 17:56:25.791738 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Jan 23 17:56:25.791800 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Jan 23 17:56:25.791860 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Jan 23 17:56:25.791940 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Jan 23 17:56:25.792005 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Jan 23 17:56:25.792067 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Jan 23 17:56:25.792130 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Jan 23 17:56:25.792192 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Jan 23 17:56:25.792253 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.792313 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.792374 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Jan 23 17:56:25.792435 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.792497 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.792559 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Jan 23 17:56:25.792619 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.792679 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.792740 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Jan 23 17:56:25.792801 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.792861 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.792945 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Jan 23 17:56:25.793009 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.793070 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.793132 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Jan 23 17:56:25.793192 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.793252 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.793314 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Jan 23 17:56:25.793374 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't 
assign; no space Jan 23 17:56:25.793438 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.793499 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Jan 23 17:56:25.793559 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.793620 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.793682 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Jan 23 17:56:25.793743 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.793803 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.793864 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Jan 23 17:56:25.793938 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.794001 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.794063 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Jan 23 17:56:25.794123 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.794183 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.794246 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Jan 23 17:56:25.794307 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.794377 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.794444 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Jan 23 17:56:25.794505 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.794565 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.794626 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Jan 23 17:56:25.794686 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.794747 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.794808 kernel: pci 0000:00:04.5: BAR 0 [mem 0x1421d000-0x1421dfff]: assigned Jan 23 17:56:25.794869 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.794963 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.795028 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Jan 23 17:56:25.795097 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.795161 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.795226 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Jan 23 17:56:25.795287 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.795347 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.795430 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Jan 23 17:56:25.795507 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.795569 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.795631 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Jan 23 17:56:25.795693 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Jan 23 
17:56:25.795797 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Jan 23 17:56:25.795874 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Jan 23 17:56:25.795952 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Jan 23 17:56:25.796040 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Jan 23 17:56:25.796112 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Jan 23 17:56:25.796180 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Jan 23 17:56:25.796248 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Jan 23 17:56:25.796310 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff]: assigned Jan 23 17:56:25.796374 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Jan 23 17:56:25.796436 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Jan 23 17:56:25.796501 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Jan 23 17:56:25.796563 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Jan 23 17:56:25.796625 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Jan 23 17:56:25.796686 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.796750 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.796821 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.796882 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.796985 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.797051 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.797117 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.797186 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.797258 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.797325 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.797387 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.797448 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.797509 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.797578 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.797643 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.797706 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.797768 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.797835 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.797918 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.797987 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.798060 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.798144 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.798215 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.798290 kernel: pci 
0000:00:01.6: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.798358 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.798426 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.798491 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.798569 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.798633 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.798697 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.798758 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.798820 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.798888 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.798967 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.799031 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jan 23 17:56:25.799092 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jan 23 17:56:25.799160 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 23 17:56:25.799232 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 23 17:56:25.799304 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 23 17:56:25.799368 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 23 17:56:25.799447 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jan 23 17:56:25.799510 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 23 17:56:25.799588 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 23 17:56:25.799651 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 23 17:56:25.799715 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Jan 23 17:56:25.799776 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 23 17:56:25.799844 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Jan 23 17:56:25.799939 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 23 17:56:25.800022 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Jan 23 17:56:25.800088 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Jan 23 17:56:25.800153 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 23 17:56:25.800224 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 23 17:56:25.800289 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Jan 23 17:56:25.800351 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Jan 23 17:56:25.800412 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 23 17:56:25.800481 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Jan 23 17:56:25.800552 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 23 17:56:25.800624 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Jan 23 17:56:25.800693 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Jan 23 17:56:25.800759 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 23 17:56:25.800828 
kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 23 17:56:25.800894 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 23 17:56:25.800978 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Jan 23 17:56:25.801046 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 23 17:56:25.801108 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 23 17:56:25.801170 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Jan 23 17:56:25.801231 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 23 17:56:25.801295 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 23 17:56:25.801356 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Jan 23 17:56:25.801417 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 23 17:56:25.801477 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 23 17:56:25.801540 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Jan 23 17:56:25.801602 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 23 17:56:25.801666 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 17:56:25.801727 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Jan 23 17:56:25.801788 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Jan 23 17:56:25.801848 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Jan 23 17:56:25.801921 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Jan 23 17:56:25.801985 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Jan 23 17:56:25.802046 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Jan 23 17:56:25.802110 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Jan 23 17:56:25.802172 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Jan 23 17:56:25.802249 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Jan 23 17:56:25.802314 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Jan 23 17:56:25.802402 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Jan 23 17:56:25.802467 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Jan 23 17:56:25.802545 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Jan 23 17:56:25.802609 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff] Jan 23 17:56:25.802671 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 23 17:56:25.802732 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Jan 23 17:56:25.802813 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Jan 23 17:56:25.802877 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 23 17:56:25.802982 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Jan 23 17:56:25.803047 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Jan 23 17:56:25.803107 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 23 17:56:25.803169 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Jan 23 17:56:25.803231 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Jan 23 17:56:25.803290 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Jan 23 17:56:25.803352 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Jan 23 17:56:25.803453 kernel: pci 0000:00:03.1: bridge window [mem 
0x12200000-0x123fffff] Jan 23 17:56:25.803520 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Jan 23 17:56:25.803584 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Jan 23 17:56:25.803648 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Jan 23 17:56:25.803708 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Jan 23 17:56:25.803768 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Jan 23 17:56:25.803831 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Jan 23 17:56:25.803892 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Jan 23 17:56:25.803977 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Jan 23 17:56:25.804040 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Jan 23 17:56:25.804104 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Jan 23 17:56:25.804167 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Jan 23 17:56:25.804229 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Jan 23 17:56:25.804291 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Jan 23 17:56:25.804375 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Jan 23 17:56:25.804446 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Jan 23 17:56:25.804513 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Jan 23 17:56:25.804575 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 23 17:56:25.804649 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Jan 23 17:56:25.804713 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Jan 23 17:56:25.804776 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Jan 23 17:56:25.804838 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 23 17:56:25.804931 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Jan 23 17:56:25.805009 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff] Jan 23 17:56:25.805074 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Jan 23 17:56:25.805140 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Jan 23 17:56:25.805202 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Jan 23 17:56:25.805263 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Jan 23 17:56:25.805331 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Jan 23 17:56:25.805394 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Jan 23 17:56:25.805457 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Jan 23 17:56:25.805519 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Jan 23 17:56:25.805581 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Jan 23 17:56:25.805646 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref] Jan 23 17:56:25.805709 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Jan 23 17:56:25.805775 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Jan 23 17:56:25.805838 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Jan 23 17:56:25.805925 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Jan 23 17:56:25.806001 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Jan 23 17:56:25.806065 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Jan 23 17:56:25.806127 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Jan 23 
17:56:25.806192 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Jan 23 17:56:25.806264 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Jan 23 17:56:25.806330 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Jan 23 17:56:25.806397 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Jan 23 17:56:25.806459 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] Jan 23 17:56:25.806527 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Jan 23 17:56:25.806589 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Jan 23 17:56:25.806650 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Jan 23 17:56:25.806711 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 23 17:56:25.806777 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Jan 23 17:56:25.806841 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Jan 23 17:56:25.806913 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Jan 23 17:56:25.806994 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 23 17:56:25.807063 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Jan 23 17:56:25.807131 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Jan 23 17:56:25.807195 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff] Jan 23 17:56:25.807257 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 23 17:56:25.807324 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Jan 23 17:56:25.807407 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Jan 23 17:56:25.807474 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Jan 23 17:56:25.807542 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Jan 23 17:56:25.807609 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 23 17:56:25.807673 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 17:56:25.807735 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 23 17:56:25.807816 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 23 17:56:25.807876 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 23 17:56:25.807959 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 23 17:56:25.808018 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 23 17:56:25.808082 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 23 17:56:25.808144 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 23 17:56:25.808212 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 23 17:56:25.808279 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 23 17:56:25.808342 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 23 17:56:25.808405 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 23 17:56:25.808485 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 23 17:56:25.808543 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 23 17:56:25.808607 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 23 17:56:25.808667 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 23 17:56:25.808747 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 23 
17:56:25.808811 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 23 17:56:25.808884 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 23 17:56:25.808962 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 17:56:25.809027 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Jan 23 17:56:25.809087 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Jan 23 17:56:25.809155 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Jan 23 17:56:25.809212 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Jan 23 17:56:25.809274 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Jan 23 17:56:25.809330 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Jan 23 17:56:25.809392 kernel: pci_bus 0000:0d: resource 1 [mem 0x11800000-0x119fffff] Jan 23 17:56:25.809451 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Jan 23 17:56:25.809518 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Jan 23 17:56:25.809583 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 23 17:56:25.809652 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Jan 23 17:56:25.809709 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 23 17:56:25.809774 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Jan 23 17:56:25.809841 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 23 17:56:25.809933 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Jan 23 17:56:25.809994 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Jan 23 17:56:25.810057 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Jan 23 17:56:25.810116 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Jan 23 17:56:25.810183 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Jan 23 17:56:25.810240 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Jan 23 17:56:25.810296 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Jan 23 17:56:25.810358 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Jan 23 17:56:25.810416 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Jan 23 17:56:25.810475 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Jan 23 17:56:25.810538 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Jan 23 17:56:25.810597 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Jan 23 17:56:25.810652 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Jan 23 17:56:25.810716 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Jan 23 17:56:25.810772 kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Jan 23 17:56:25.810828 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 23 17:56:25.810932 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Jan 23 17:56:25.811001 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Jan 23 17:56:25.811063 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 23 17:56:25.811132 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Jan 23 17:56:25.811189 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Jan 23 17:56:25.811245 kernel: pci_bus 0000:18: resource 2 [mem 
0x8002e00000-0x8002ffffff 64bit pref] Jan 23 17:56:25.811313 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Jan 23 17:56:25.811376 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Jan 23 17:56:25.811454 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Jan 23 17:56:25.811520 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Jan 23 17:56:25.811577 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Jan 23 17:56:25.811636 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Jan 23 17:56:25.811700 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 23 17:56:25.811766 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Jan 23 17:56:25.811826 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Jan 23 17:56:25.811892 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Jan 23 17:56:25.811996 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Jan 23 17:56:25.812055 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Jan 23 17:56:25.812121 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Jan 23 17:56:25.812180 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Jan 23 17:56:25.812235 kernel: pci_bus 0000:1d: resource 2 [mem 0x8003800000-0x80039fffff 64bit pref] Jan 23 17:56:25.812298 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Jan 23 17:56:25.812358 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Jan 23 17:56:25.812414 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 23 17:56:25.812477 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Jan 23 17:56:25.812535 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Jan 23 17:56:25.812591 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 23 17:56:25.812659 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Jan 23 17:56:25.812716 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Jan 23 17:56:25.812774 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 23 17:56:25.812835 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff] Jan 23 17:56:25.812892 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Jan 23 17:56:25.812972 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Jan 23 17:56:25.812983 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 17:56:25.812990 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 17:56:25.812998 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 17:56:25.813008 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 17:56:25.813015 kernel: iommu: Default domain type: Translated Jan 23 17:56:25.813023 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 17:56:25.813030 kernel: efivars: Registered efivars operations Jan 23 17:56:25.813038 kernel: vgaarb: loaded Jan 23 17:56:25.813045 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 17:56:25.813053 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 17:56:25.813060 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 17:56:25.813068 kernel: pnp: PnP ACPI init Jan 23 17:56:25.813138 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 23 17:56:25.813151 kernel: pnp: PnP ACPI: found 1 devices Jan 23 17:56:25.813159 kernel: NET: Registered 
PF_INET protocol family Jan 23 17:56:25.813166 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 17:56:25.813174 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Jan 23 17:56:25.813182 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 17:56:25.813189 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 23 17:56:25.813197 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 23 17:56:25.813205 kernel: TCP: Hash tables configured (established 131072 bind 65536) Jan 23 17:56:25.813214 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 23 17:56:25.813221 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 23 17:56:25.813229 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 17:56:25.813303 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 23 17:56:25.813315 kernel: PCI: CLS 0 bytes, default 64 Jan 23 17:56:25.813323 kernel: kvm [1]: HYP mode not available Jan 23 17:56:25.813330 kernel: Initialise system trusted keyrings Jan 23 17:56:25.813338 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Jan 23 17:56:25.813347 kernel: Key type asymmetric registered Jan 23 17:56:25.813355 kernel: Asymmetric key parser 'x509' registered Jan 23 17:56:25.813362 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 17:56:25.813370 kernel: io scheduler mq-deadline registered Jan 23 17:56:25.813377 kernel: io scheduler kyber registered Jan 23 17:56:25.813384 kernel: io scheduler bfq registered Jan 23 17:56:25.813392 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 17:56:25.813462 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Jan 23 17:56:25.813527 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Jan 23 17:56:25.813591 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.813654 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Jan 23 17:56:25.813715 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Jan 23 17:56:25.813775 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.813837 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Jan 23 17:56:25.813914 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Jan 23 17:56:25.813983 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.814046 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Jan 23 17:56:25.814111 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Jan 23 17:56:25.814172 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.814234 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Jan 23 17:56:25.814294 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Jan 23 17:56:25.814355 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.814417 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Jan 23 17:56:25.814478 kernel: pcieport 0000:00:01.5: AER: 
enabled with IRQ 55 Jan 23 17:56:25.814539 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.814603 kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Jan 23 17:56:25.814664 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Jan 23 17:56:25.814749 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.814813 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Jan 23 17:56:25.814874 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Jan 23 17:56:25.814962 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.814974 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 17:56:25.815036 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Jan 23 17:56:25.815102 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Jan 23 17:56:25.815165 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.815232 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Jan 23 17:56:25.815293 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Jan 23 17:56:25.815355 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.815448 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Jan 23 17:56:25.815524 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Jan 23 17:56:25.815587 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.815655 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Jan 23 17:56:25.815717 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Jan 23 17:56:25.815779 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.815841 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Jan 23 17:56:25.815917 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Jan 23 17:56:25.815988 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.816053 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Jan 23 17:56:25.816118 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Jan 23 17:56:25.816179 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.816243 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Jan 23 17:56:25.816304 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Jan 23 17:56:25.816367 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.816431 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Jan 23 17:56:25.816494 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Jan 23 17:56:25.816572 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Jan 23 17:56:25.816601 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 23 17:56:25.816671 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Jan 23 17:56:25.816734 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Jan 23 17:56:25.816796 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.816858 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Jan 23 17:56:25.816949 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Jan 23 17:56:25.817013 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.817075 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Jan 23 17:56:25.817140 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Jan 23 17:56:25.817203 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.817267 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Jan 23 17:56:25.817327 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Jan 23 17:56:25.817389 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.817451 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Jan 23 17:56:25.817512 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Jan 23 17:56:25.817589 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.817659 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Jan 23 17:56:25.817723 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Jan 23 17:56:25.817783 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.817845 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72 Jan 23 17:56:25.817926 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72 Jan 23 17:56:25.818013 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.818079 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73 Jan 23 17:56:25.818144 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73 Jan 23 17:56:25.818223 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.818234 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 17:56:25.818294 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74 Jan 23 17:56:25.818355 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74 Jan 23 17:56:25.818425 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.818489 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75 Jan 23 17:56:25.818557 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75 Jan 23 17:56:25.818629 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.818693 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76 Jan 23 
17:56:25.818754 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76 Jan 23 17:56:25.818820 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.818885 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77 Jan 23 17:56:25.818972 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77 Jan 23 17:56:25.819042 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.819107 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78 Jan 23 17:56:25.819172 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78 Jan 23 17:56:25.819238 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.819301 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79 Jan 23 17:56:25.819369 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79 Jan 23 17:56:25.819457 kernel: pcieport 0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.819527 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80 Jan 23 17:56:25.819595 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80 Jan 23 17:56:25.819656 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.819733 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81 Jan 23 17:56:25.819795 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81 Jan 23 17:56:25.819855 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.819940 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82 Jan 23 17:56:25.820004 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82 Jan 23 17:56:25.820065 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:56:25.820075 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 17:56:25.820085 kernel: ACPI: button: Power Button [PWRB] Jan 23 17:56:25.820152 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002) Jan 23 17:56:25.820219 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 23 17:56:25.820230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 17:56:25.820238 kernel: thunder_xcv, ver 1.0 Jan 23 17:56:25.820245 kernel: thunder_bgx, ver 1.0 Jan 23 17:56:25.820253 kernel: nicpf, ver 1.0 Jan 23 17:56:25.820260 kernel: nicvf, ver 1.0 Jan 23 17:56:25.820329 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 17:56:25.820391 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:56:25 UTC (1769190985) Jan 23 17:56:25.820401 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 17:56:25.820409 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 23 17:56:25.820416 kernel: watchdog: NMI not fully supported Jan 23 17:56:25.820424 kernel: watchdog: Hard watchdog permanently disabled Jan 23 17:56:25.820431 kernel: NET: Registered PF_INET6 protocol family Jan 23 17:56:25.820439 kernel: Segment Routing with IPv6 Jan 23 17:56:25.820446 kernel: In-situ OAM (IOAM) with 
IPv6 Jan 23 17:56:25.820455 kernel: NET: Registered PF_PACKET protocol family Jan 23 17:56:25.820463 kernel: Key type dns_resolver registered Jan 23 17:56:25.820470 kernel: registered taskstats version 1 Jan 23 17:56:25.820478 kernel: Loading compiled-in X.509 certificates Jan 23 17:56:25.820485 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3b281aa2bfe49764dd224485ec54e6070c82b8fb' Jan 23 17:56:25.820493 kernel: Demotion targets for Node 0: null Jan 23 17:56:25.820501 kernel: Key type .fscrypt registered Jan 23 17:56:25.820508 kernel: Key type fscrypt-provisioning registered Jan 23 17:56:25.820515 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 17:56:25.820524 kernel: ima: Allocated hash algorithm: sha1 Jan 23 17:56:25.820531 kernel: ima: No architecture policies found Jan 23 17:56:25.820539 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 17:56:25.820546 kernel: clk: Disabling unused clocks Jan 23 17:56:25.820554 kernel: PM: genpd: Disabling unused power domains Jan 23 17:56:25.820561 kernel: Warning: unable to open an initial console. Jan 23 17:56:25.820569 kernel: Freeing unused kernel memory: 39552K Jan 23 17:56:25.820577 kernel: Run /init as init process Jan 23 17:56:25.820584 kernel: with arguments: Jan 23 17:56:25.820591 kernel: /init Jan 23 17:56:25.820606 kernel: with environment: Jan 23 17:56:25.820614 kernel: HOME=/ Jan 23 17:56:25.820622 kernel: TERM=linux Jan 23 17:56:25.820634 systemd[1]: Successfully made /usr/ read-only. Jan 23 17:56:25.820645 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:56:25.820654 systemd[1]: Detected virtualization kvm. Jan 23 17:56:25.820662 systemd[1]: Detected architecture arm64. Jan 23 17:56:25.820671 systemd[1]: Running in initrd. Jan 23 17:56:25.820679 systemd[1]: No hostname configured, using default hostname. Jan 23 17:56:25.820688 systemd[1]: Hostname set to . Jan 23 17:56:25.820695 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:56:25.820703 systemd[1]: Queued start job for default target initrd.target. Jan 23 17:56:25.820711 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:56:25.820727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:56:25.820737 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 17:56:25.820745 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:56:25.820754 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 17:56:25.820764 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 17:56:25.820773 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 17:56:25.820782 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 17:56:25.820790 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 17:56:25.820798 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:56:25.820807 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:56:25.820815 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:56:25.820825 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:56:25.820833 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:56:25.820841 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:56:25.820849 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:56:25.820858 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 17:56:25.820866 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 17:56:25.820874 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:56:25.820883 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:56:25.820891 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:56:25.820927 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:56:25.820936 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 17:56:25.820947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:56:25.820955 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 17:56:25.820964 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 17:56:25.820972 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 17:56:25.820980 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:56:25.820990 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:56:25.820999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:56:25.821007 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 17:56:25.821016 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:56:25.821024 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 17:56:25.821034 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:56:25.821068 systemd-journald[312]: Collecting audit messages is disabled. Jan 23 17:56:25.821089 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 17:56:25.821099 kernel: Bridge firewalling registered Jan 23 17:56:25.821108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:56:25.821118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:56:25.821127 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 17:56:25.821135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:56:25.821144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:56:25.821153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:56:25.821161 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 17:56:25.821171 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:56:25.821180 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:56:25.821188 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 17:56:25.821197 systemd-journald[312]: Journal started Jan 23 17:56:25.821215 systemd-journald[312]: Runtime Journal (/run/log/journal/9e125c98fe484f0791911a2d3c5abba8) is 8M, max 319.5M, 311.5M free. Jan 23 17:56:25.761087 systemd-modules-load[313]: Inserted module 'overlay' Jan 23 17:56:25.776344 systemd-modules-load[313]: Inserted module 'br_netfilter' Jan 23 17:56:25.838478 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:56:25.838925 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:56:25.846810 systemd-tmpfiles[349]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 17:56:25.848338 dracut-cmdline[345]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d Jan 23 17:56:25.850439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:56:25.855729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:56:25.889601 systemd-resolved[379]: Positive Trust Anchors: Jan 23 17:56:25.889619 systemd-resolved[379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:56:25.889651 systemd-resolved[379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:56:25.895310 systemd-resolved[379]: Defaulting to hostname 'linux'. Jan 23 17:56:25.896342 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:56:25.898253 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:56:25.922943 kernel: SCSI subsystem initialized Jan 23 17:56:25.927922 kernel: Loading iSCSI transport class v2.0-870. Jan 23 17:56:25.935939 kernel: iscsi: registered transport (tcp) Jan 23 17:56:25.948929 kernel: iscsi: registered transport (qla4xxx) Jan 23 17:56:25.948947 kernel: QLogic iSCSI HBA Driver Jan 23 17:56:25.964584 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:56:25.980354 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:56:25.982787 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:56:26.025602 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 23 17:56:26.027535 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 17:56:26.096951 kernel: raid6: neonx8 gen() 15736 MB/s Jan 23 17:56:26.113920 kernel: raid6: neonx4 gen() 15725 MB/s Jan 23 17:56:26.130943 kernel: raid6: neonx2 gen() 13129 MB/s Jan 23 17:56:26.147920 kernel: raid6: neonx1 gen() 10403 MB/s Jan 23 17:56:26.164917 kernel: raid6: int64x8 gen() 6883 MB/s Jan 23 17:56:26.181934 kernel: raid6: int64x4 gen() 7300 MB/s Jan 23 17:56:26.198942 kernel: raid6: int64x2 gen() 6083 MB/s Jan 23 17:56:26.215928 kernel: raid6: int64x1 gen() 5021 MB/s Jan 23 17:56:26.215944 kernel: raid6: using algorithm neonx8 gen() 15736 MB/s Jan 23 17:56:26.232918 kernel: raid6: .... xor() 11952 MB/s, rmw enabled Jan 23 17:56:26.232934 kernel: raid6: using neon recovery algorithm Jan 23 17:56:26.238399 kernel: xor: measuring software checksum speed Jan 23 17:56:26.238454 kernel: 8regs : 21653 MB/sec Jan 23 17:56:26.238968 kernel: 32regs : 21647 MB/sec Jan 23 17:56:26.240070 kernel: arm64_neon : 28041 MB/sec Jan 23 17:56:26.240085 kernel: xor: using function: arm64_neon (28041 MB/sec) Jan 23 17:56:26.292942 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 17:56:26.299026 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:56:26.301403 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:56:26.337165 systemd-udevd[566]: Using default interface naming scheme 'v255'. Jan 23 17:56:26.341392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:56:26.343295 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 17:56:26.367477 dracut-pre-trigger[572]: rd.md=0: removing MD RAID activation Jan 23 17:56:26.388834 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:56:26.391120 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:56:26.475424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:56:26.479095 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 17:56:26.536935 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 23 17:56:26.537173 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 23 17:56:26.539283 kernel: ACPI: bus type USB registered Jan 23 17:56:26.540354 kernel: usbcore: registered new interface driver usbfs Jan 23 17:56:26.541099 kernel: usbcore: registered new interface driver hub Jan 23 17:56:26.541361 kernel: usbcore: registered new device driver usb Jan 23 17:56:26.549191 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 17:56:26.549238 kernel: GPT:17805311 != 104857599 Jan 23 17:56:26.549250 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 17:56:26.550165 kernel: GPT:17805311 != 104857599 Jan 23 17:56:26.550182 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 23 17:56:26.551182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 17:56:26.561408 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:56:26.561614 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 23 17:56:26.564918 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 17:56:26.569232 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:56:26.569385 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 23 17:56:26.570130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:56:26.572474 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 23 17:56:26.572617 kernel: hub 1-0:1.0: USB hub found Jan 23 17:56:26.572717 kernel: hub 1-0:1.0: 4 ports detected Jan 23 17:56:26.570252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:56:26.576329 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 23 17:56:26.576481 kernel: hub 2-0:1.0: USB hub found Jan 23 17:56:26.576567 kernel: hub 2-0:1.0: 4 ports detected Jan 23 17:56:26.576344 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:56:26.578962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:56:26.604488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:56:26.629185 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 17:56:26.636004 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 17:56:26.643731 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 17:56:26.652284 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 17:56:26.658891 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 23 17:56:26.660030 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 17:56:26.662855 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:56:26.664992 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:56:26.666856 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:56:26.669476 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:56:26.671124 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 17:56:26.691474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:56:26.695762 disk-uuid[666]: Primary Header is updated. Jan 23 17:56:26.695762 disk-uuid[666]: Secondary Entries is updated. Jan 23 17:56:26.695762 disk-uuid[666]: Secondary Header is updated. 
Jan 23 17:56:26.704926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 17:56:26.811938 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 17:56:26.942696 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 23 17:56:26.942759 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 17:56:26.942945 kernel: usbcore: registered new interface driver usbhid Jan 23 17:56:26.943351 kernel: usbhid: USB HID core driver Jan 23 17:56:27.048942 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 23 17:56:27.173971 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 23 17:56:27.225979 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 23 17:56:27.719932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 17:56:27.719987 disk-uuid[674]: The operation has completed successfully. Jan 23 17:56:27.767482 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:56:27.768938 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 17:56:27.796546 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 17:56:27.820612 sh[689]: Success Jan 23 17:56:27.834470 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 17:56:27.834511 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:56:27.834522 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:56:27.840933 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:56:27.893517 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:56:27.896220 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 17:56:27.910424 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 17:56:27.924990 kernel: BTRFS: device fsid 8784b097-3924-47e8-98b3-06e8cbe78a64 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (701) Jan 23 17:56:27.926925 kernel: BTRFS info (device dm-0): first mount of filesystem 8784b097-3924-47e8-98b3-06e8cbe78a64 Jan 23 17:56:27.926954 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:56:27.939100 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 17:56:27.939143 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 17:56:27.941038 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 17:56:27.942209 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:56:27.943831 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 17:56:27.944596 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 17:56:27.946027 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 17:56:27.975927 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (732)
Jan 23 17:56:27.979498 kernel: BTRFS info (device vda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:27.979543 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:56:27.983937 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 17:56:27.983969 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 17:56:27.987922 kernel: BTRFS info (device vda6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:27.989419 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 17:56:27.991289 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 17:56:28.044814 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 17:56:28.049747 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 17:56:28.091502 systemd-networkd[872]: lo: Link UP
Jan 23 17:56:28.091514 systemd-networkd[872]: lo: Gained carrier
Jan 23 17:56:28.092731 systemd-networkd[872]: Enumeration completed
Jan 23 17:56:28.092856 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 17:56:28.093300 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:56:28.093303 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:56:28.094122 systemd-networkd[872]: eth0: Link UP
Jan 23 17:56:28.094207 systemd-networkd[872]: eth0: Gained carrier
Jan 23 17:56:28.094216 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:56:28.094229 systemd[1]: Reached target network.target - Network.
Jan 23 17:56:28.122969 systemd-networkd[872]: eth0: DHCPv4 address 10.0.0.108/25, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 17:56:28.131690 ignition[795]: Ignition 2.22.0
Jan 23 17:56:28.131705 ignition[795]: Stage: fetch-offline
Jan 23 17:56:28.131737 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:28.131745 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 17:56:28.131818 ignition[795]: parsed url from cmdline: ""
Jan 23 17:56:28.135418 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 17:56:28.131821 ignition[795]: no config URL provided
Jan 23 17:56:28.137776 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 17:56:28.131826 ignition[795]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 17:56:28.131832 ignition[795]: no config at "/usr/lib/ignition/user.ign"
Jan 23 17:56:28.131836 ignition[795]: failed to fetch config: resource requires networking
Jan 23 17:56:28.131986 ignition[795]: Ignition finished successfully
Jan 23 17:56:28.167205 ignition[884]: Ignition 2.22.0
Jan 23 17:56:28.167225 ignition[884]: Stage: fetch
Jan 23 17:56:28.167363 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:28.167372 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 17:56:28.167461 ignition[884]: parsed url from cmdline: ""
Jan 23 17:56:28.167464 ignition[884]: no config URL provided
Jan 23 17:56:28.167469 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 17:56:28.167475 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Jan 23 17:56:28.167701 ignition[884]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 23 17:56:28.167973 ignition[884]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 23 17:56:28.167989 ignition[884]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 23 17:56:29.168011 ignition[884]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 23 17:56:29.168136 ignition[884]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 23 17:56:29.440275 systemd-networkd[872]: eth0: Gained IPv6LL
Jan 23 17:56:30.168687 ignition[884]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 23 17:56:30.168747 ignition[884]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 23 17:56:31.169228 ignition[884]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 23 17:56:31.169277 ignition[884]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 23 17:56:31.739330 ignition[884]: GET result: OK
Jan 23 17:56:31.739639 ignition[884]: parsing config with SHA512: 84473758ba68dd8df91caaf84d2fb07b2f793a549831f7d39b14843783ec479d8a1f361ece1be1cd66c24d4c6b8fe1ab946f18900e862d811eab908ef2945152
Jan 23 17:56:31.744930 unknown[884]: fetched base config from "system"
Jan 23 17:56:31.744940 unknown[884]: fetched base config from "system"
Jan 23 17:56:31.745337 ignition[884]: fetch: fetch complete
Jan 23 17:56:31.744945 unknown[884]: fetched user config from "openstack"
Jan 23 17:56:31.745342 ignition[884]: fetch: fetch passed
Jan 23 17:56:31.747927 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 17:56:31.745382 ignition[884]: Ignition finished successfully
Jan 23 17:56:31.750167 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 17:56:31.779636 ignition[892]: Ignition 2.22.0
Jan 23 17:56:31.779654 ignition[892]: Stage: kargs
Jan 23 17:56:31.779785 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:31.779793 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 17:56:31.780519 ignition[892]: kargs: kargs passed
Jan 23 17:56:31.780560 ignition[892]: Ignition finished successfully
Jan 23 17:56:31.783954 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 17:56:31.786610 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
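The fetch stage above shows Ignition's OpenStack fallback: it polls for a config drive under either label and, when none appears, retrieves the user data over HTTP from the metadata service. A minimal Python sketch of that fallback loop, assuming only the labels and URL exactly as logged above; the helper name and retry counts are illustrative and this is not Ignition's actual implementation (which is written in Go):

# Illustrative sketch only: mirrors the sequence logged above
# (poll for a config drive, then GET the user-data URL).
import os
import time
import urllib.request

CONFIG_DRIVE_LABELS = (
    "/dev/disk/by-label/config-2",   # labels probed in the log above
    "/dev/disk/by-label/CONFIG-2",
)
USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"  # endpoint from the log

def fetch_user_data(attempts=5, delay=1.0):
    # Prefer a config drive if one shows up within the polling window.
    for _ in range(attempts):
        for label in CONFIG_DRIVE_LABELS:
            if os.path.exists(label):
                return ("config-drive", label)
        time.sleep(delay)
    # Otherwise fall back to the metadata service, as the log shows happening here.
    with urllib.request.urlopen(USER_DATA_URL, timeout=10) as resp:
        return ("metadata-service", resp.read())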
Jan 23 17:56:31.823863 ignition[900]: Ignition 2.22.0
Jan 23 17:56:31.823885 ignition[900]: Stage: disks
Jan 23 17:56:31.824067 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:31.824075 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 17:56:31.827142 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 17:56:31.824756 ignition[900]: disks: disks passed
Jan 23 17:56:31.829180 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 17:56:31.824796 ignition[900]: Ignition finished successfully
Jan 23 17:56:31.831072 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 17:56:31.832535 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:56:31.834115 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 17:56:31.835492 systemd[1]: Reached target basic.target - Basic System.
Jan 23 17:56:31.837987 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 17:56:31.874516 systemd-fsck[910]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jan 23 17:56:31.878969 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 17:56:31.881765 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 17:56:31.986925 kernel: EXT4-fs (vda9): mounted filesystem 5f1f19a2-81b4-48e9-bfdb-d3843ff70e8e r/w with ordered data mode. Quota mode: none.
Jan 23 17:56:31.987123 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 17:56:31.988265 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:56:31.991046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:56:31.992689 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 17:56:31.993635 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 17:56:31.994229 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 23 17:56:31.996677 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 17:56:31.996720 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 17:56:32.013535 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 17:56:32.016698 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 17:56:32.024931 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919)
Jan 23 17:56:32.026952 kernel: BTRFS info (device vda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:32.026988 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:56:32.031469 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 17:56:32.031521 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 17:56:32.033244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 17:56:32.055935 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 17:56:32.060835 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 17:56:32.065029 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Jan 23 17:56:32.068346 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 17:56:32.071537 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 17:56:32.147045 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 17:56:32.149208 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 17:56:32.152130 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 17:56:32.174444 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 17:56:32.176442 kernel: BTRFS info (device vda6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:32.191999 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 17:56:32.205419 ignition[1037]: INFO : Ignition 2.22.0
Jan 23 17:56:32.205419 ignition[1037]: INFO : Stage: mount
Jan 23 17:56:32.206915 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:32.206915 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 17:56:32.206915 ignition[1037]: INFO : mount: mount passed
Jan 23 17:56:32.206915 ignition[1037]: INFO : Ignition finished successfully
Jan 23 17:56:32.208382 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 17:56:33.090953 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 17:56:35.098941 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 17:56:39.104022 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 17:56:39.107681 coreos-metadata[921]: Jan 23 17:56:39.107 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 17:56:39.124255 coreos-metadata[921]: Jan 23 17:56:39.124 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 23 17:56:40.739934 coreos-metadata[921]: Jan 23 17:56:40.739 INFO Fetch successful
Jan 23 17:56:40.741510 coreos-metadata[921]: Jan 23 17:56:40.741 INFO wrote hostname ci-4459-2-3-a-575e6c418a to /sysroot/etc/hostname
Jan 23 17:56:40.742889 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 23 17:56:40.743970 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 23 17:56:40.746886 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 17:56:40.765557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:56:40.787931 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1056)
Jan 23 17:56:40.790415 kernel: BTRFS info (device vda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:56:40.790445 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:56:40.794498 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 17:56:40.794570 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 17:56:40.796144 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
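The coreos-metadata lines above record the same config-drive fallback for the hostname agent: after the drive fails to appear, it fetches the hostname from the metadata endpoint and writes it into the mounted sysroot. A rough Python equivalent of that fetch-and-write step, assuming only the URL and target path shown in the log (the real agent is a separate tool; this is an illustrative sketch, not its implementation):

import urllib.request

HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"  # endpoint from the log
TARGET = "/sysroot/etc/hostname"                                   # path from the log

def write_hostname_from_metadata(url=HOSTNAME_URL, target=TARGET):
    # Fetch the instance hostname from the metadata service and persist it
    # so the real root sees it after switch-root.
    with urllib.request.urlopen(url, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open(target, "w") as f:
        f.write(hostname + "\n")
    return hostname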
Jan 23 17:56:40.827015 ignition[1074]: INFO : Ignition 2.22.0
Jan 23 17:56:40.827015 ignition[1074]: INFO : Stage: files
Jan 23 17:56:40.828566 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:40.828566 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 17:56:40.828566 ignition[1074]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 17:56:40.831736 ignition[1074]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 17:56:40.831736 ignition[1074]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 17:56:40.831736 ignition[1074]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 17:56:40.831736 ignition[1074]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 17:56:40.836717 ignition[1074]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 17:56:40.836717 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 17:56:40.836717 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 17:56:40.831827 unknown[1074]: wrote ssh authorized keys file for user: core
Jan 23 17:56:40.887290 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 17:56:40.987719 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 17:56:40.987719 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 17:56:40.991287 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 23 17:56:41.202598 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 17:56:41.374253 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 17:56:41.374253 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:56:41.377918 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 17:56:41.431304 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 17:56:42.046179 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:56:42.046179 ignition[1074]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 17:56:42.051989 ignition[1074]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:56:42.051989 ignition[1074]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:56:42.051989 ignition[1074]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 17:56:42.051989 ignition[1074]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 17:56:42.051989 ignition[1074]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 17:56:42.051989 ignition[1074]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:56:42.051989 ignition[1074]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:56:42.051989 ignition[1074]: INFO : files: files passed
Jan 23 17:56:42.051989 ignition[1074]: INFO : Ignition finished successfully
Jan 23 17:56:42.054466 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 17:56:42.057495 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 17:56:42.059042 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 17:56:42.079298 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 17:56:42.079433 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 17:56:42.086743 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 17:56:42.086743 initrd-setup-root-after-ignition[1105]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 17:56:42.089595 initrd-setup-root-after-ignition[1109]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 17:56:42.090309 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 17:56:42.092146 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 17:56:42.094561 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 17:56:42.157066 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 17:56:42.157205 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 17:56:42.159209 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 17:56:42.160745 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 17:56:42.162371 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 17:56:42.163123 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 17:56:42.192925 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 17:56:42.195149 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 17:56:42.230315 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:56:42.231462 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:56:42.233233 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 17:56:42.234790 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 17:56:42.234926 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 17:56:42.237214 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 17:56:42.238880 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 17:56:42.240387 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 17:56:42.241963 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 17:56:42.243793 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 17:56:42.245605 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 17:56:42.247364 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 17:56:42.248937 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 17:56:42.250727 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 17:56:42.252484 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 17:56:42.253979 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 17:56:42.255329 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 17:56:42.255483 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 17:56:42.257559 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:56:42.259235 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:56:42.260951 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 17:56:42.261070 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:56:42.262895 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 17:56:42.263021 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 17:56:42.265620 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 17:56:42.265745 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 17:56:42.267405 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 17:56:42.267519 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 17:56:42.269718 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 17:56:42.271544 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 17:56:42.271678 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:56:42.279766 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 17:56:42.280573 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 17:56:42.280687 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:56:42.282375 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 17:56:42.282499 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 17:56:42.287957 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 17:56:42.288048 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 17:56:42.296152 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 17:56:42.297020 ignition[1129]: INFO : Ignition 2.22.0
Jan 23 17:56:42.297020 ignition[1129]: INFO : Stage: umount
Jan 23 17:56:42.297020 ignition[1129]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:56:42.297020 ignition[1129]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 17:56:42.301747 ignition[1129]: INFO : umount: umount passed
Jan 23 17:56:42.301747 ignition[1129]: INFO : Ignition finished successfully
Jan 23 17:56:42.299564 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 17:56:42.299692 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 17:56:42.300888 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 17:56:42.300976 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 17:56:42.303134 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 17:56:42.303178 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 17:56:42.304513 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 17:56:42.304552 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 17:56:42.305926 systemd[1]: Stopped target network.target - Network.
Jan 23 17:56:42.307688 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 17:56:42.307741 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 17:56:42.309413 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 17:56:42.310736 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 17:56:42.313971 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:56:42.315017 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 17:56:42.316772 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 17:56:42.318362 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 17:56:42.318401 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 17:56:42.320367 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 17:56:42.320395 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 17:56:42.321932 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 17:56:42.321985 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 17:56:42.324341 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 17:56:42.324381 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 17:56:42.325981 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 17:56:42.327990 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 17:56:42.333617 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 17:56:42.333741 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 17:56:42.337527 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 17:56:42.337771 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 17:56:42.337822 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:56:42.340500 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 17:56:42.354164 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 17:56:42.354282 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 17:56:42.359102 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 17:56:42.359186 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 17:56:42.361653 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 17:56:42.361701 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:56:42.364525 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 17:56:42.365576 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 17:56:42.365630 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 17:56:42.367528 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 17:56:42.367574 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:56:42.369911 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 17:56:42.369953 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:56:42.371708 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:56:42.374365 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 17:56:42.383429 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 17:56:42.383563 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 17:56:42.385370 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 17:56:42.385519 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:56:42.387307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 17:56:42.387385 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:56:42.389062 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 17:56:42.389093 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:56:42.389961 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 17:56:42.390005 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 17:56:42.390959 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 17:56:42.391003 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 17:56:42.393919 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 17:56:42.393967 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 17:56:42.398311 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 17:56:42.399275 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 17:56:42.399340 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:56:42.405415 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 17:56:42.405472 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:56:42.408069 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:56:42.408114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:56:42.416043 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 17:56:42.417927 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 17:56:42.510389 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 17:56:42.510503 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 17:56:42.512285 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 17:56:42.513607 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 17:56:42.513665 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 17:56:42.516014 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 17:56:42.534658 systemd[1]: Switching root.
Jan 23 17:56:42.676734 systemd-journald[312]: Journal stopped
Jan 23 17:56:44.004999 systemd-journald[312]: Received SIGTERM from PID 1 (systemd).
Jan 23 17:56:44.005094 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 17:56:44.005107 kernel: SELinux: policy capability open_perms=1
Jan 23 17:56:44.005122 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 17:56:44.005135 kernel: SELinux: policy capability always_check_network=0
Jan 23 17:56:44.005145 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 17:56:44.005157 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 17:56:44.005167 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 17:56:44.005179 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 17:56:44.005188 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 17:56:44.005198 systemd[1]: Successfully loaded SELinux policy in 60.451ms.
Jan 23 17:56:44.005219 kernel: audit: type=1403 audit(1769191003.313:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 17:56:44.005236 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.572ms.
Jan 23 17:56:44.005247 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:56:44.005258 systemd[1]: Detected virtualization kvm.
Jan 23 17:56:44.005268 systemd[1]: Detected architecture arm64.
Jan 23 17:56:44.005278 systemd[1]: Detected first boot.
Jan 23 17:56:44.005288 systemd[1]: Hostname set to .
Jan 23 17:56:44.005298 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 17:56:44.005316 zram_generator::config[1176]: No configuration found.
Jan 23 17:56:44.005328 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 17:56:44.005337 systemd[1]: Populated /etc with preset unit settings.
Jan 23 17:56:44.005348 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 17:56:44.005358 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 17:56:44.005370 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 17:56:44.005379 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 17:56:44.005390 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 17:56:44.005400 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 17:56:44.005411 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 17:56:44.005421 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 17:56:44.005431 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 17:56:44.005444 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 17:56:44.005455 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 17:56:44.005465 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 17:56:44.005475 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:56:44.005485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:56:44.005496 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 17:56:44.005507 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 17:56:44.005518 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 17:56:44.005528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:56:44.005538 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 23 17:56:44.005548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:56:44.005558 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:56:44.005570 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 17:56:44.005580 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 17:56:44.005590 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:56:44.005600 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 17:56:44.005610 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:56:44.005624 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 17:56:44.005634 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:56:44.005648 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:56:44.005658 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 17:56:44.005669 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 17:56:44.005679 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 17:56:44.005689 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:56:44.005699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:56:44.005709 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:56:44.005720 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 17:56:44.005730 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 17:56:44.005740 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 17:56:44.005750 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 17:56:44.005761 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 17:56:44.005772 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 17:56:44.005782 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 17:56:44.005792 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 17:56:44.005803 systemd[1]: Reached target machines.target - Containers.
Jan 23 17:56:44.005813 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 17:56:44.005823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:56:44.005833 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:56:44.005843 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 17:56:44.005855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:56:44.005865 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:56:44.005875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:56:44.005885 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 17:56:44.005895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:56:44.005917 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 17:56:44.005928 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 17:56:44.005939 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 17:56:44.005951 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 17:56:44.005962 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 17:56:44.005973 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:56:44.005983 kernel: fuse: init (API version 7.41)
Jan 23 17:56:44.005994 kernel: loop: module loaded
Jan 23 17:56:44.006004 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:56:44.006014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:56:44.006024 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:56:44.006035 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 17:56:44.006045 kernel: ACPI: bus type drm_connector registered
Jan 23 17:56:44.006054 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 17:56:44.006065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 17:56:44.006075 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 17:56:44.006085 systemd[1]: Stopped verity-setup.service.
Jan 23 17:56:44.006096 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 17:56:44.006107 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 17:56:44.006118 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 17:56:44.006128 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 17:56:44.006166 systemd-journald[1247]: Collecting audit messages is disabled.
Jan 23 17:56:44.006194 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 17:56:44.006204 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 17:56:44.006215 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 17:56:44.006225 systemd-journald[1247]: Journal started
Jan 23 17:56:44.006246 systemd-journald[1247]: Runtime Journal (/run/log/journal/9e125c98fe484f0791911a2d3c5abba8) is 8M, max 319.5M, 311.5M free.
Jan 23 17:56:43.788839 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 17:56:43.807968 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 17:56:43.808378 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 17:56:44.009509 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:56:44.011932 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:56:44.013321 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 17:56:44.013488 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 17:56:44.014783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:56:44.014969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:56:44.016191 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:56:44.016351 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:56:44.017542 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:56:44.017693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:56:44.019075 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 17:56:44.019237 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 17:56:44.020407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:56:44.020568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:56:44.023300 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:56:44.024561 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:56:44.026243 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 17:56:44.027616 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 17:56:44.038947 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 17:56:44.041220 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 17:56:44.043054 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 17:56:44.044145 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 17:56:44.044198 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:56:44.045861 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 17:56:44.055079 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 17:56:44.056108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:56:44.057675 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 17:56:44.059774 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 17:56:44.060929 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:56:44.064059 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 17:56:44.066034 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:56:44.067083 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:56:44.069087 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 17:56:44.079559 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 17:56:44.084243 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 17:56:44.085623 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 17:56:44.090490 systemd-journald[1247]: Time spent on flushing to /var/log/journal/9e125c98fe484f0791911a2d3c5abba8 is 36.697ms for 1693 entries.
Jan 23 17:56:44.090490 systemd-journald[1247]: System Journal (/var/log/journal/9e125c98fe484f0791911a2d3c5abba8) is 8M, max 584.8M, 576.8M free.
Jan 23 17:56:44.145013 systemd-journald[1247]: Received client request to flush runtime journal.
Jan 23 17:56:44.145075 kernel: loop0: detected capacity change from 0 to 100632
Jan 23 17:56:44.145094 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 17:56:44.091935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:56:44.099129 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 17:56:44.100591 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 17:56:44.105207 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 17:56:44.117000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:56:44.143832 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 17:56:44.147541 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 17:56:44.153056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:56:44.156300 kernel: loop1: detected capacity change from 0 to 119840
Jan 23 17:56:44.157293 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 17:56:44.223934 kernel: loop2: detected capacity change from 0 to 207008
Jan 23 17:56:44.232351 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jan 23 17:56:44.232375 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jan 23 17:56:44.236445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:56:44.388939 kernel: loop3: detected capacity change from 0 to 1632
Jan 23 17:56:44.456071 kernel: loop4: detected capacity change from 0 to 100632
Jan 23 17:56:44.538942 kernel: loop5: detected capacity change from 0 to 119840
Jan 23 17:56:44.551217 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 17:56:44.554973 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:56:44.591515 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jan 23 17:56:44.604035 kernel: loop6: detected capacity change from 0 to 207008
Jan 23 17:56:44.722959 kernel: loop7: detected capacity change from 0 to 1632
Jan 23 17:56:44.766475 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-stackit'.
Jan 23 17:56:44.766895 (sd-merge)[1323]: Merged extensions into '/usr'.
Jan 23 17:56:44.780516 systemd[1]: Reload requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 17:56:44.780538 systemd[1]: Reloading...
Jan 23 17:56:44.849944 zram_generator::config[1397]: No configuration found.
Jan 23 17:56:44.956933 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 17:56:45.021371 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 17:56:45.021617 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 23 17:56:45.021979 systemd[1]: Reloading finished in 241 ms.
Jan 23 17:56:45.042468 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:56:45.046458 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 17:56:45.075526 systemd[1]: Starting ensure-sysext.service...
Jan 23 17:56:45.078013 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 17:56:45.087435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:56:45.089838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:56:45.102520 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 17:56:45.103268 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 17:56:45.103299 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 17:56:45.103577 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 17:56:45.103782 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 17:56:45.103817 systemd[1]: Reload requested from client PID 1441 ('systemctl') (unit ensure-sysext.service)...
Jan 23 17:56:45.103827 systemd[1]: Reloading...
Jan 23 17:56:45.105329 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 17:56:45.105690 systemd-tmpfiles[1443]: ACLs are not supported, ignoring.
Jan 23 17:56:45.105805 systemd-tmpfiles[1443]: ACLs are not supported, ignoring.
Jan 23 17:56:45.140087 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:56:45.140873 systemd-tmpfiles[1443]: Skipping /boot
Jan 23 17:56:45.152039 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:56:45.152154 systemd-tmpfiles[1443]: Skipping /boot
Jan 23 17:56:45.169495 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0
Jan 23 17:56:45.169567 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 23 17:56:45.169585 kernel: [drm] features: -context_init
Jan 23 17:56:45.172943 kernel: [drm] number of scanouts: 1
Jan 23 17:56:45.173021 kernel: [drm] number of cap sets: 0
Jan 23 17:56:45.175090 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0
Jan 23 17:56:45.179921 zram_generator::config[1485]: No configuration found.
Jan 23 17:56:45.181948 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 17:56:45.182020 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 23 17:56:45.236984 systemd-networkd[1442]: lo: Link UP
Jan 23 17:56:45.237265 systemd-networkd[1442]: lo: Gained carrier
Jan 23 17:56:45.238324 systemd-networkd[1442]: Enumeration completed
Jan 23 17:56:45.238873 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:56:45.238880 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:56:45.239605 systemd-networkd[1442]: eth0: Link UP
Jan 23 17:56:45.239707 systemd-networkd[1442]: eth0: Gained carrier
Jan 23 17:56:45.239722 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:56:45.257954 systemd-networkd[1442]: eth0: DHCPv4 address 10.0.0.108/25, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 17:56:45.340227 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 17:56:45.341481 systemd[1]: Reloading finished in 237 ms.
Jan 23 17:56:45.356758 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 17:56:45.357990 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 17:56:45.375571 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:56:45.378293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:56:45.409546 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 17:56:45.421790 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 17:56:45.423214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:56:45.424514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:56:45.435116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:56:45.437790 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:56:45.439922 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:56:45.442539 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm...
Jan 23 17:56:45.443909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:56:45.445960 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 17:56:45.448166 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:56:45.450469 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 17:56:45.452690 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 17:56:45.455210 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 17:56:45.459230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:56:45.460202 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 17:56:45.462199 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 17:56:45.463323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:56:45.463487 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:56:45.465263 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:56:45.468257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:56:45.471523 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 17:56:45.473203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:56:45.476159 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 17:56:45.476211 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 17:56:45.479081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:56:45.479937 kernel: PTP clock support registered
Jan 23 17:56:45.480628 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:56:45.480924 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:56:45.482555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:56:45.482686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:56:45.487100 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:56:45.487261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:56:45.488632 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 23 17:56:45.488768 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 23 17:56:45.490612 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 17:56:45.497018 systemd[1]: Finished ensure-sysext.service. Jan 23 17:56:45.510551 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 17:56:45.513015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:56:45.515100 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:56:45.515191 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:56:45.520576 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 17:56:45.545182 augenrules[1578]: No rules Jan 23 17:56:45.546349 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:56:45.546570 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:56:45.563223 systemd-resolved[1553]: Positive Trust Anchors: Jan 23 17:56:45.563243 systemd-resolved[1553]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:56:45.563274 systemd-resolved[1553]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:56:45.567004 systemd-resolved[1553]: Using system hostname 'ci-4459-2-3-a-575e6c418a'. Jan 23 17:56:45.568431 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:56:45.569542 systemd[1]: Reached target network.target - Network. Jan 23 17:56:45.570369 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:56:45.587958 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 17:56:45.798735 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 17:56:45.800264 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 17:56:45.832503 ldconfig[1291]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 17:56:45.837965 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 17:56:45.840447 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 17:56:45.857939 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 17:56:45.859238 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 23 17:56:45.860347 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 17:56:45.861446 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 17:56:45.862714 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 17:56:45.863800 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 17:56:45.864977 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 17:56:45.865997 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 17:56:45.866037 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:56:45.866776 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:56:45.868550 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 17:56:45.870817 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 17:56:45.873513 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 17:56:45.874809 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 17:56:45.876006 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 17:56:45.887003 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 17:56:45.888301 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 17:56:45.889957 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:56:45.890953 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:56:45.891762 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:56:45.892662 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:56:45.892695 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:56:45.895922 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 17:56:45.897549 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:56:45.899625 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:56:45.901434 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:56:45.906918 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:56:45.907036 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:56:45.909000 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:56:45.911039 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:56:45.914315 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:56:45.915533 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:56:45.919016 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:56:45.920465 jq[1601]: false Jan 23 17:56:45.921047 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:56:45.923143 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 23 17:56:45.935200 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:56:45.937027 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:56:45.937145 extend-filesystems[1602]: Found /dev/vda6 Jan 23 17:56:45.937533 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:56:45.941155 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:56:45.945531 extend-filesystems[1602]: Found /dev/vda9 Jan 23 17:56:45.945761 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:56:45.947682 extend-filesystems[1602]: Checking size of /dev/vda9 Jan 23 17:56:45.950937 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 17:56:45.954075 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:56:45.954817 chronyd[1594]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 17:56:45.956754 chronyd[1594]: Loaded seccomp filter (level 2) Jan 23 17:56:45.958247 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:56:45.958546 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 17:56:45.960164 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:56:45.960306 extend-filesystems[1602]: Resized partition /dev/vda9 Jan 23 17:56:45.963038 extend-filesystems[1629]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:56:45.960973 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:56:45.965512 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:56:45.965712 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:56:45.973926 jq[1621]: true Jan 23 17:56:45.982079 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 12499963 blocks Jan 23 17:56:45.982314 (ntainerd)[1632]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 17:56:46.000318 tar[1631]: linux-arm64/LICENSE Jan 23 17:56:46.000609 tar[1631]: linux-arm64/helm Jan 23 17:56:46.002097 jq[1641]: true Jan 23 17:56:46.002573 update_engine[1618]: I20260123 17:56:46.001506 1618 main.cc:92] Flatcar Update Engine starting Jan 23 17:56:46.053031 systemd-logind[1612]: New seat seat0. Jan 23 17:56:46.066478 systemd-logind[1612]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:56:46.066510 systemd-logind[1612]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 23 17:56:46.102713 dbus-daemon[1597]: [system] SELinux support is enabled Jan 23 17:56:46.127834 update_engine[1618]: I20260123 17:56:46.107256 1618 update_check_scheduler.cc:74] Next update check in 5m29s Jan 23 17:56:46.066954 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:56:46.106790 dbus-daemon[1597]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 17:56:46.103091 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 23 17:56:46.106099 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:56:46.106121 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:56:46.107243 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:56:46.107256 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:56:46.109093 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:56:46.111618 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:56:46.192295 locksmithd[1661]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:56:46.374635 containerd[1632]: time="2026-01-23T17:56:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:56:46.375442 containerd[1632]: time="2026-01-23T17:56:46.375390560Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 17:56:46.387463 containerd[1632]: time="2026-01-23T17:56:46.387413280Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.88µs" Jan 23 17:56:46.408510 containerd[1632]: time="2026-01-23T17:56:46.387565680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:56:46.408510 containerd[1632]: time="2026-01-23T17:56:46.387589160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:56:46.410936 bash[1660]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.409723960Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.410365360Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.410399200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.410470920Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.410482560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.410702120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.410718360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper 
type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.410729040Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.410738200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411083 containerd[1632]: time="2026-01-23T17:56:46.410801960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411915 containerd[1632]: time="2026-01-23T17:56:46.411749080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411915 containerd[1632]: time="2026-01-23T17:56:46.411866120Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:56:46.411915 containerd[1632]: time="2026-01-23T17:56:46.411878960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:56:46.411795 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:56:46.412954 containerd[1632]: time="2026-01-23T17:56:46.412104920Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:56:46.413125 containerd[1632]: time="2026-01-23T17:56:46.413092800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:56:46.413438 containerd[1632]: time="2026-01-23T17:56:46.413284600Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:56:46.417200 systemd[1]: Starting sshkeys.service... Jan 23 17:56:46.451560 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:56:46.454175 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 17:56:46.480943 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:56:46.518635 containerd[1632]: time="2026-01-23T17:56:46.518580920Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:56:46.518762 containerd[1632]: time="2026-01-23T17:56:46.518662800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:56:46.518762 containerd[1632]: time="2026-01-23T17:56:46.518688120Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:56:46.518762 containerd[1632]: time="2026-01-23T17:56:46.518701840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:56:46.518762 containerd[1632]: time="2026-01-23T17:56:46.518713720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 17:56:46.518762 containerd[1632]: time="2026-01-23T17:56:46.518723960Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 17:56:46.518762 containerd[1632]: time="2026-01-23T17:56:46.518740160Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:56:46.518860 containerd[1632]: time="2026-01-23T17:56:46.518768200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 17:56:46.518860 containerd[1632]: time="2026-01-23T17:56:46.518779160Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:56:46.518860 containerd[1632]: time="2026-01-23T17:56:46.518789240Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:56:46.518860 containerd[1632]: time="2026-01-23T17:56:46.518800720Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 17:56:46.518860 containerd[1632]: time="2026-01-23T17:56:46.518815280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:56:46.519249 containerd[1632]: time="2026-01-23T17:56:46.519177040Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:56:46.519283 containerd[1632]: time="2026-01-23T17:56:46.519258880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:56:46.519302 containerd[1632]: time="2026-01-23T17:56:46.519283040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:56:46.519302 containerd[1632]: time="2026-01-23T17:56:46.519296640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 17:56:46.519444 containerd[1632]: time="2026-01-23T17:56:46.519307080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:56:46.519444 containerd[1632]: time="2026-01-23T17:56:46.519318440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 17:56:46.519444 containerd[1632]: time="2026-01-23T17:56:46.519329400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:56:46.519444 containerd[1632]: time="2026-01-23T17:56:46.519352560Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 17:56:46.519444 containerd[1632]: time="2026-01-23T17:56:46.519366080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:56:46.519444 containerd[1632]: time="2026-01-23T17:56:46.519376800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:56:46.519444 containerd[1632]: time="2026-01-23T17:56:46.519387760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:56:46.521169 containerd[1632]: time="2026-01-23T17:56:46.520827080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:56:46.521169 containerd[1632]: time="2026-01-23T17:56:46.521012360Z" level=info msg="Start snapshots syncer" Jan 23 17:56:46.521219 containerd[1632]: time="2026-01-23T17:56:46.521174280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:56:46.523979 containerd[1632]: time="2026-01-23T17:56:46.523364640Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:56:46.523979 containerd[1632]: time="2026-01-23T17:56:46.523430400Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523495920Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523624840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:56:46.524124 containerd[1632]: 
time="2026-01-23T17:56:46.523645680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523657400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523668000Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523679760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523690240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523701280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523726600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523738600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523750440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523784880Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523799560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:56:46.524124 containerd[1632]: time="2026-01-23T17:56:46.523808240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:56:46.524379 containerd[1632]: time="2026-01-23T17:56:46.523818840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:56:46.524379 containerd[1632]: time="2026-01-23T17:56:46.523826640Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:56:46.524379 containerd[1632]: time="2026-01-23T17:56:46.523836000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:56:46.524379 containerd[1632]: time="2026-01-23T17:56:46.523846560Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:56:46.525013 containerd[1632]: time="2026-01-23T17:56:46.524981120Z" level=info msg="runtime interface created" Jan 23 17:56:46.525013 containerd[1632]: time="2026-01-23T17:56:46.525008040Z" level=info msg="created NRI interface" Jan 23 17:56:46.525192 containerd[1632]: time="2026-01-23T17:56:46.525023000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:56:46.525192 containerd[1632]: time="2026-01-23T17:56:46.525040920Z" level=info msg="Connect containerd service" Jan 23 17:56:46.525192 containerd[1632]: time="2026-01-23T17:56:46.525078080Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:56:46.525918 containerd[1632]: time="2026-01-23T17:56:46.525877280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:56:46.640242 containerd[1632]: time="2026-01-23T17:56:46.639977800Z" level=info msg="Start subscribing containerd event" Jan 23 17:56:46.640242 containerd[1632]: time="2026-01-23T17:56:46.640067120Z" level=info msg="Start recovering state" Jan 23 17:56:46.640689 containerd[1632]: time="2026-01-23T17:56:46.640255520Z" level=info msg="Start event monitor" Jan 23 17:56:46.640689 containerd[1632]: time="2026-01-23T17:56:46.640277160Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:56:46.640689 containerd[1632]: time="2026-01-23T17:56:46.640286160Z" level=info msg="Start streaming server" Jan 23 17:56:46.640689 containerd[1632]: time="2026-01-23T17:56:46.640294840Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:56:46.640689 containerd[1632]: time="2026-01-23T17:56:46.640302240Z" level=info msg="runtime interface starting up..." Jan 23 17:56:46.640689 containerd[1632]: time="2026-01-23T17:56:46.640307080Z" level=info msg="starting plugins..." Jan 23 17:56:46.640689 containerd[1632]: time="2026-01-23T17:56:46.640321320Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:56:46.640689 containerd[1632]: time="2026-01-23T17:56:46.640262880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:56:46.640689 containerd[1632]: time="2026-01-23T17:56:46.640446080Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:56:46.642728 containerd[1632]: time="2026-01-23T17:56:46.640660760Z" level=info msg="containerd successfully booted in 0.409664s" Jan 23 17:56:46.641088 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 17:56:46.713220 tar[1631]: linux-arm64/README.md Jan 23 17:56:46.729727 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 17:56:46.784970 systemd-networkd[1442]: eth0: Gained IPv6LL Jan 23 17:56:46.787138 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:56:46.788889 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:56:46.791663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:46.794093 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 17:56:46.821959 sshd_keygen[1623]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:56:46.823018 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 17:56:46.847678 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:56:46.850932 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:56:46.866663 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 17:56:46.866920 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:56:46.869885 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:56:46.902489 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:56:46.907027 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 23 17:56:46.909570 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 17:56:46.911150 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:56:46.920925 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:56:47.102037 kernel: EXT4-fs (vda9): resized filesystem to 12499963 Jan 23 17:56:47.287318 extend-filesystems[1629]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 17:56:47.287318 extend-filesystems[1629]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 23 17:56:47.287318 extend-filesystems[1629]: The filesystem on /dev/vda9 is now 12499963 (4k) blocks long. Jan 23 17:56:47.290930 extend-filesystems[1602]: Resized filesystem in /dev/vda9 Jan 23 17:56:47.290092 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:56:47.290961 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:56:47.489948 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:56:47.918205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:47.922024 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:56:48.437715 kubelet[1734]: E0123 17:56:48.437642 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:56:48.440217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:56:48.440355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:56:48.441982 systemd[1]: kubelet.service: Consumed 764ms CPU time, 257.5M memory peak. Jan 23 17:56:48.932999 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:56:49.497994 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:56:52.939927 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:56:52.946094 coreos-metadata[1596]: Jan 23 17:56:52.946 WARN failed to locate config-drive, using the metadata service API instead Jan 23 17:56:52.963122 coreos-metadata[1596]: Jan 23 17:56:52.963 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 23 17:56:53.506930 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 17:56:53.512239 coreos-metadata[1675]: Jan 23 17:56:53.512 WARN failed to locate config-drive, using the metadata service API instead Jan 23 17:56:53.525145 coreos-metadata[1675]: Jan 23 17:56:53.525 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 23 17:56:55.563188 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:56:55.564658 systemd[1]: Started sshd@0-10.0.0.108:22-4.153.228.146:34314.service - OpenSSH per-connection server daemon (4.153.228.146:34314). 
Jan 23 17:56:55.611780 coreos-metadata[1596]: Jan 23 17:56:55.611 INFO Fetch successful Jan 23 17:56:55.612124 coreos-metadata[1596]: Jan 23 17:56:55.612 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 17:56:56.184436 sshd[1753]: Accepted publickey for core from 4.153.228.146 port 34314 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 17:56:56.186386 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:56.197982 systemd-logind[1612]: New session 1 of user core. Jan 23 17:56:56.199591 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:56:56.201169 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:56:56.218726 coreos-metadata[1675]: Jan 23 17:56:56.218 INFO Fetch successful Jan 23 17:56:56.218726 coreos-metadata[1675]: Jan 23 17:56:56.218 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 17:56:56.236990 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:56:56.239859 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:56:56.255739 (systemd)[1758]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 17:56:56.257973 systemd-logind[1612]: New session c1 of user core. Jan 23 17:56:56.379523 systemd[1758]: Queued start job for default target default.target. Jan 23 17:56:56.394192 systemd[1758]: Created slice app.slice - User Application Slice. Jan 23 17:56:56.394223 systemd[1758]: Reached target paths.target - Paths. Jan 23 17:56:56.394259 systemd[1758]: Reached target timers.target - Timers. Jan 23 17:56:56.395469 systemd[1758]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 17:56:56.404998 systemd[1758]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:56:56.405066 systemd[1758]: Reached target sockets.target - Sockets. Jan 23 17:56:56.405108 systemd[1758]: Reached target basic.target - Basic System. Jan 23 17:56:56.405134 systemd[1758]: Reached target default.target - Main User Target. Jan 23 17:56:56.405160 systemd[1758]: Startup finished in 141ms. Jan 23 17:56:56.405638 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:56:56.407397 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:56:56.857779 systemd[1]: Started sshd@1-10.0.0.108:22-4.153.228.146:34330.service - OpenSSH per-connection server daemon (4.153.228.146:34330). Jan 23 17:56:57.485395 sshd[1769]: Accepted publickey for core from 4.153.228.146 port 34330 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 17:56:57.486159 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:57.489809 systemd-logind[1612]: New session 2 of user core. Jan 23 17:56:57.502544 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 17:56:57.920616 sshd[1772]: Connection closed by 4.153.228.146 port 34330 Jan 23 17:56:57.920921 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:57.924186 systemd[1]: sshd@1-10.0.0.108:22-4.153.228.146:34330.service: Deactivated successfully. Jan 23 17:56:57.927333 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 17:56:57.929703 systemd-logind[1612]: Session 2 logged out. Waiting for processes to exit. Jan 23 17:56:57.930996 systemd-logind[1612]: Removed session 2. 
Jan 23 17:56:58.034453 systemd[1]: Started sshd@2-10.0.0.108:22-4.153.228.146:34334.service - OpenSSH per-connection server daemon (4.153.228.146:34334). Jan 23 17:56:58.166746 coreos-metadata[1596]: Jan 23 17:56:58.166 INFO Fetch successful Jan 23 17:56:58.166746 coreos-metadata[1596]: Jan 23 17:56:58.166 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 23 17:56:58.174028 coreos-metadata[1675]: Jan 23 17:56:58.173 INFO Fetch successful Jan 23 17:56:58.177048 unknown[1675]: wrote ssh authorized keys file for user: core Jan 23 17:56:58.206574 update-ssh-keys[1782]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:56:58.207641 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:56:58.211268 systemd[1]: Finished sshkeys.service. Jan 23 17:56:58.546284 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:56:58.547839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:58.651829 sshd[1778]: Accepted publickey for core from 4.153.228.146 port 34334 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 17:56:58.653439 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:58.657951 systemd-logind[1612]: New session 3 of user core. Jan 23 17:56:58.666233 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 17:56:58.698634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:58.702409 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:56:58.734993 kubelet[1794]: E0123 17:56:58.734919 1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:56:58.738187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:56:58.738319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:56:58.738609 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.3M memory peak. Jan 23 17:56:59.089602 sshd[1788]: Connection closed by 4.153.228.146 port 34334 Jan 23 17:56:59.090225 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:59.093451 systemd-logind[1612]: Session 3 logged out. Waiting for processes to exit. Jan 23 17:56:59.093530 systemd[1]: sshd@2-10.0.0.108:22-4.153.228.146:34334.service: Deactivated successfully. Jan 23 17:56:59.094984 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 17:56:59.096836 systemd-logind[1612]: Removed session 3. 
Jan 23 17:57:01.534985 coreos-metadata[1596]: Jan 23 17:57:01.534 INFO Fetch successful Jan 23 17:57:01.534985 coreos-metadata[1596]: Jan 23 17:57:01.534 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 23 17:57:03.444078 coreos-metadata[1596]: Jan 23 17:57:03.443 INFO Fetch successful Jan 23 17:57:03.444078 coreos-metadata[1596]: Jan 23 17:57:03.444 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 23 17:57:04.120763 coreos-metadata[1596]: Jan 23 17:57:04.120 INFO Fetch successful Jan 23 17:57:04.120763 coreos-metadata[1596]: Jan 23 17:57:04.120 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 23 17:57:07.691627 coreos-metadata[1596]: Jan 23 17:57:07.691 INFO Fetch successful Jan 23 17:57:07.738003 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 17:57:07.738500 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 17:57:07.738666 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:57:07.739590 systemd[1]: Startup finished in 2.928s (kernel) + 17.700s (initrd) + 24.486s (userspace) = 45.115s. Jan 23 17:57:08.837836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:57:08.839472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:08.974823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:08.988213 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:57:09.202183 systemd[1]: Started sshd@3-10.0.0.108:22-4.153.228.146:50292.service - OpenSSH per-connection server daemon (4.153.228.146:50292). Jan 23 17:57:09.257263 kubelet[1820]: E0123 17:57:09.257188 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:57:09.259878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:57:09.260163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:57:09.260598 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.4M memory peak. Jan 23 17:57:09.744450 chronyd[1594]: Selected source PHC0 Jan 23 17:57:09.835684 sshd[1828]: Accepted publickey for core from 4.153.228.146 port 50292 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 17:57:09.837170 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:09.842703 systemd-logind[1612]: New session 4 of user core. Jan 23 17:57:09.853430 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:57:10.306339 sshd[1832]: Connection closed by 4.153.228.146 port 50292 Jan 23 17:57:10.306669 sshd-session[1828]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:10.309735 systemd[1]: sshd@3-10.0.0.108:22-4.153.228.146:50292.service: Deactivated successfully. Jan 23 17:57:10.311423 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 17:57:10.312701 systemd-logind[1612]: Session 4 logged out. Waiting for processes to exit. Jan 23 17:57:10.313716 systemd-logind[1612]: Removed session 4. 
Jan 23 17:57:10.423503 systemd[1]: Started sshd@4-10.0.0.108:22-4.153.228.146:50302.service - OpenSSH per-connection server daemon (4.153.228.146:50302). Jan 23 17:57:11.050172 sshd[1838]: Accepted publickey for core from 4.153.228.146 port 50302 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 17:57:11.051461 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:11.056030 systemd-logind[1612]: New session 5 of user core. Jan 23 17:57:11.067216 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 17:57:11.471048 sshd[1841]: Connection closed by 4.153.228.146 port 50302 Jan 23 17:57:11.471400 sshd-session[1838]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:11.474702 systemd[1]: sshd@4-10.0.0.108:22-4.153.228.146:50302.service: Deactivated successfully. Jan 23 17:57:11.476347 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:57:11.479286 systemd-logind[1612]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:57:11.480502 systemd-logind[1612]: Removed session 5. Jan 23 17:57:11.584070 systemd[1]: Started sshd@5-10.0.0.108:22-4.153.228.146:50304.service - OpenSSH per-connection server daemon (4.153.228.146:50304). Jan 23 17:57:12.217108 sshd[1847]: Accepted publickey for core from 4.153.228.146 port 50304 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 17:57:12.218423 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:12.222088 systemd-logind[1612]: New session 6 of user core. Jan 23 17:57:12.228148 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 17:57:12.655259 sshd[1850]: Connection closed by 4.153.228.146 port 50304 Jan 23 17:57:12.654629 sshd-session[1847]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:12.658409 systemd[1]: sshd@5-10.0.0.108:22-4.153.228.146:50304.service: Deactivated successfully. Jan 23 17:57:12.659953 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:57:12.660605 systemd-logind[1612]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:57:12.661540 systemd-logind[1612]: Removed session 6. Jan 23 17:57:12.762090 systemd[1]: Started sshd@6-10.0.0.108:22-4.153.228.146:50310.service - OpenSSH per-connection server daemon (4.153.228.146:50310). Jan 23 17:57:13.394968 sshd[1856]: Accepted publickey for core from 4.153.228.146 port 50310 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 17:57:13.395612 sshd-session[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:13.399268 systemd-logind[1612]: New session 7 of user core. Jan 23 17:57:13.406057 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:57:13.744277 sudo[1860]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 17:57:13.744561 sudo[1860]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:57:13.759945 sudo[1860]: pam_unix(sudo:session): session closed for user root Jan 23 17:57:13.859078 sshd[1859]: Connection closed by 4.153.228.146 port 50310 Jan 23 17:57:13.859618 sshd-session[1856]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:13.863708 systemd[1]: sshd@6-10.0.0.108:22-4.153.228.146:50310.service: Deactivated successfully. Jan 23 17:57:13.865151 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 17:57:13.868171 systemd-logind[1612]: Session 7 logged out. 
Waiting for processes to exit. Jan 23 17:57:13.869263 systemd-logind[1612]: Removed session 7. Jan 23 17:57:13.971700 systemd[1]: Started sshd@7-10.0.0.108:22-4.153.228.146:50326.service - OpenSSH per-connection server daemon (4.153.228.146:50326). Jan 23 17:57:14.590359 sshd[1866]: Accepted publickey for core from 4.153.228.146 port 50326 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 17:57:14.591682 sshd-session[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:14.596162 systemd-logind[1612]: New session 8 of user core. Jan 23 17:57:14.611299 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 17:57:14.928580 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 17:57:14.928838 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:57:14.933239 sudo[1871]: pam_unix(sudo:session): session closed for user root Jan 23 17:57:14.937880 sudo[1870]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 17:57:14.938206 sudo[1870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:57:14.946834 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:57:14.977113 augenrules[1893]: No rules Jan 23 17:57:14.978240 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:57:14.979007 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:57:14.979895 sudo[1870]: pam_unix(sudo:session): session closed for user root Jan 23 17:57:15.078510 sshd[1869]: Connection closed by 4.153.228.146 port 50326 Jan 23 17:57:15.079028 sshd-session[1866]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:15.082723 systemd[1]: sshd@7-10.0.0.108:22-4.153.228.146:50326.service: Deactivated successfully. Jan 23 17:57:15.084213 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:57:15.084981 systemd-logind[1612]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:57:15.086132 systemd-logind[1612]: Removed session 8. Jan 23 17:57:15.189082 systemd[1]: Started sshd@8-10.0.0.108:22-4.153.228.146:55746.service - OpenSSH per-connection server daemon (4.153.228.146:55746). Jan 23 17:57:15.801041 sshd[1902]: Accepted publickey for core from 4.153.228.146 port 55746 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 17:57:15.802301 sshd-session[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:15.805990 systemd-logind[1612]: New session 9 of user core. Jan 23 17:57:15.820326 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 17:57:16.130499 sudo[1906]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:57:16.130752 sudo[1906]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:57:16.444888 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 17:57:16.460275 (dockerd)[1927]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:57:16.684116 dockerd[1927]: time="2026-01-23T17:57:16.684045148Z" level=info msg="Starting up" Jan 23 17:57:16.684954 dockerd[1927]: time="2026-01-23T17:57:16.684930270Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:57:16.695711 dockerd[1927]: time="2026-01-23T17:57:16.695485896Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:57:16.727735 dockerd[1927]: time="2026-01-23T17:57:16.727689375Z" level=info msg="Loading containers: start." Jan 23 17:57:16.738952 kernel: Initializing XFRM netlink socket Jan 23 17:57:16.952965 systemd-networkd[1442]: docker0: Link UP Jan 23 17:57:16.957082 dockerd[1927]: time="2026-01-23T17:57:16.957049097Z" level=info msg="Loading containers: done." Jan 23 17:57:16.969191 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3073136572-merged.mount: Deactivated successfully. Jan 23 17:57:16.971642 dockerd[1927]: time="2026-01-23T17:57:16.971336172Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:57:16.971642 dockerd[1927]: time="2026-01-23T17:57:16.971423172Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:57:16.971642 dockerd[1927]: time="2026-01-23T17:57:16.971497212Z" level=info msg="Initializing buildkit" Jan 23 17:57:16.995564 dockerd[1927]: time="2026-01-23T17:57:16.995528231Z" level=info msg="Completed buildkit initialization" Jan 23 17:57:17.000627 dockerd[1927]: time="2026-01-23T17:57:17.000471684Z" level=info msg="Daemon has completed initialization" Jan 23 17:57:17.000733 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 17:57:17.001028 dockerd[1927]: time="2026-01-23T17:57:17.000878405Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:57:18.051378 containerd[1632]: time="2026-01-23T17:57:18.051022721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 17:57:18.611163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376576066.mount: Deactivated successfully. Jan 23 17:57:19.337410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 17:57:19.339660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:20.205581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:20.209036 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:57:20.346620 kubelet[2209]: E0123 17:57:20.346558 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:57:20.348983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:57:20.349205 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 17:57:20.349585 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.6M memory peak. Jan 23 17:57:20.440128 containerd[1632]: time="2026-01-23T17:57:20.439017351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:20.440128 containerd[1632]: time="2026-01-23T17:57:20.440098073Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26442080" Jan 23 17:57:20.440881 containerd[1632]: time="2026-01-23T17:57:20.440853115Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:20.443658 containerd[1632]: time="2026-01-23T17:57:20.443631842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:20.444631 containerd[1632]: time="2026-01-23T17:57:20.444599884Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.393534523s" Jan 23 17:57:20.444781 containerd[1632]: time="2026-01-23T17:57:20.444718365Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 17:57:20.445387 containerd[1632]: time="2026-01-23T17:57:20.445363486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 17:57:21.656924 containerd[1632]: time="2026-01-23T17:57:21.656165136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:21.656924 containerd[1632]: time="2026-01-23T17:57:21.656801537Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622106" Jan 23 17:57:21.658624 containerd[1632]: time="2026-01-23T17:57:21.658596142Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:21.661750 containerd[1632]: time="2026-01-23T17:57:21.661721109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:21.662818 containerd[1632]: time="2026-01-23T17:57:21.662778232Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.217382786s" Jan 23 17:57:21.662880 containerd[1632]: time="2026-01-23T17:57:21.662821032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image 
reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 17:57:21.663240 containerd[1632]: time="2026-01-23T17:57:21.663216793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 17:57:22.618562 containerd[1632]: time="2026-01-23T17:57:22.618510336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:22.619707 containerd[1632]: time="2026-01-23T17:57:22.619673179Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616767" Jan 23 17:57:22.621931 containerd[1632]: time="2026-01-23T17:57:22.620967582Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:22.624081 containerd[1632]: time="2026-01-23T17:57:22.624037029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:22.625071 containerd[1632]: time="2026-01-23T17:57:22.625042352Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 961.791359ms" Jan 23 17:57:22.625170 containerd[1632]: time="2026-01-23T17:57:22.625155392Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 17:57:22.625621 containerd[1632]: time="2026-01-23T17:57:22.625596993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 17:57:23.527763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927677719.mount: Deactivated successfully. 
Jan 23 17:57:23.748573 containerd[1632]: time="2026-01-23T17:57:23.748495627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:23.749896 containerd[1632]: time="2026-01-23T17:57:23.749871630Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558750" Jan 23 17:57:23.750747 containerd[1632]: time="2026-01-23T17:57:23.750703073Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:23.753809 containerd[1632]: time="2026-01-23T17:57:23.753565280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:23.754217 containerd[1632]: time="2026-01-23T17:57:23.754192321Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.128562488s" Jan 23 17:57:23.754267 containerd[1632]: time="2026-01-23T17:57:23.754220921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 17:57:23.754943 containerd[1632]: time="2026-01-23T17:57:23.754922123Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 17:57:24.300236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010002891.mount: Deactivated successfully. 
Jan 23 17:57:24.945323 containerd[1632]: time="2026-01-23T17:57:24.945274282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:24.946323 containerd[1632]: time="2026-01-23T17:57:24.946180444Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jan 23 17:57:24.947417 containerd[1632]: time="2026-01-23T17:57:24.947386367Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:24.950988 containerd[1632]: time="2026-01-23T17:57:24.950926576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:24.952632 containerd[1632]: time="2026-01-23T17:57:24.952587300Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.197564777s" Jan 23 17:57:24.952699 containerd[1632]: time="2026-01-23T17:57:24.952638900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 17:57:24.953095 containerd[1632]: time="2026-01-23T17:57:24.953046021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 17:57:25.378328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872651188.mount: Deactivated successfully. 
Jan 23 17:57:25.382937 containerd[1632]: time="2026-01-23T17:57:25.382881635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:25.383710 containerd[1632]: time="2026-01-23T17:57:25.383676437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 23 17:57:25.384580 containerd[1632]: time="2026-01-23T17:57:25.384536559Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:25.387230 containerd[1632]: time="2026-01-23T17:57:25.387175446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:57:25.387831 containerd[1632]: time="2026-01-23T17:57:25.387699407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 434.622586ms" Jan 23 17:57:25.387831 containerd[1632]: time="2026-01-23T17:57:25.387733247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 17:57:25.388307 containerd[1632]: time="2026-01-23T17:57:25.388281249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 17:57:25.872640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710234352.mount: Deactivated successfully. 
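The image pulls logged above (kube-apiserver through pause, with etcd next) are performed by containerd in its k8s.io namespace on behalf of the CRI. A hedged sketch of an equivalent pull through the containerd Go client, assuming the default containerd socket path and the pre-2.x import path, might look like this; the image reference is one of the tags pulled above:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket path and namespace are assumptions matching a stock containerd/CRI setup.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}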
Jan 23 17:57:27.379213 containerd[1632]: time="2026-01-23T17:57:27.379142206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:27.380196 containerd[1632]: time="2026-01-23T17:57:27.380159569Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Jan 23 17:57:27.381184 containerd[1632]: time="2026-01-23T17:57:27.381156091Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:27.384443 containerd[1632]: time="2026-01-23T17:57:27.384406779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:27.385638 containerd[1632]: time="2026-01-23T17:57:27.385518982Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.997200133s" Jan 23 17:57:27.385638 containerd[1632]: time="2026-01-23T17:57:27.385553782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 17:57:30.587500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 17:57:30.591071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:30.857538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:30.861226 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:57:30.897829 kubelet[2374]: E0123 17:57:30.897768 2374 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:57:30.900269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:57:30.900490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:57:30.902979 systemd[1]: kubelet.service: Consumed 141ms CPU time, 107.3M memory peak. Jan 23 17:57:31.629049 update_engine[1618]: I20260123 17:57:31.628865 1618 update_attempter.cc:509] Updating boot flags... Jan 23 17:57:32.315779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:32.316368 systemd[1]: kubelet.service: Consumed 141ms CPU time, 107.3M memory peak. Jan 23 17:57:32.318247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:32.337757 systemd[1]: Reload requested from client PID 2406 ('systemctl') (unit session-9.scope)... Jan 23 17:57:32.337777 systemd[1]: Reloading... Jan 23 17:57:32.417939 zram_generator::config[2450]: No configuration found. Jan 23 17:57:32.572569 systemd[1]: Reloading finished in 234 ms. 
Jan 23 17:57:32.630938 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 17:57:32.631020 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 17:57:32.631283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:32.631334 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95M memory peak. Jan 23 17:57:32.632880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:33.208614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:33.214276 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:57:33.247820 kubelet[2496]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:57:33.247820 kubelet[2496]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:57:33.247820 kubelet[2496]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:57:33.248172 kubelet[2496]: I0123 17:57:33.247881 2496 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:57:33.953930 kubelet[2496]: I0123 17:57:33.953877 2496 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 17:57:33.953930 kubelet[2496]: I0123 17:57:33.953924 2496 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:57:33.954228 kubelet[2496]: I0123 17:57:33.954191 2496 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 17:57:34.450557 kubelet[2496]: E0123 17:57:34.450513 2496 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:34.451309 kubelet[2496]: I0123 17:57:34.451279 2496 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:57:34.459097 kubelet[2496]: I0123 17:57:34.459062 2496 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:57:34.462093 kubelet[2496]: I0123 17:57:34.462064 2496 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 17:57:34.463127 kubelet[2496]: I0123 17:57:34.463078 2496 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:57:34.463380 kubelet[2496]: I0123 17:57:34.463200 2496 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-a-575e6c418a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:57:34.463608 kubelet[2496]: I0123 17:57:34.463594 2496 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:57:34.463663 kubelet[2496]: I0123 17:57:34.463655 2496 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 17:57:34.463978 kubelet[2496]: I0123 17:57:34.463961 2496 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:34.468588 kubelet[2496]: I0123 17:57:34.468556 2496 kubelet.go:446] "Attempting to sync node with API server" Jan 23 17:57:34.468743 kubelet[2496]: I0123 17:57:34.468669 2496 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:57:34.468944 kubelet[2496]: I0123 17:57:34.468701 2496 kubelet.go:352] "Adding apiserver pod source" Jan 23 17:57:34.469059 kubelet[2496]: I0123 17:57:34.469006 2496 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:57:34.471029 kubelet[2496]: W0123 17:57:34.470973 2496 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-a-575e6c418a&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 23 17:57:34.471100 kubelet[2496]: E0123 17:57:34.471042 2496 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-a-575e6c418a&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:34.471793 kubelet[2496]: 
W0123 17:57:34.471726 2496 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 23 17:57:34.471793 kubelet[2496]: E0123 17:57:34.471771 2496 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:34.474003 kubelet[2496]: I0123 17:57:34.473980 2496 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:57:34.474662 kubelet[2496]: I0123 17:57:34.474647 2496 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 17:57:34.474797 kubelet[2496]: W0123 17:57:34.474785 2496 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 17:57:34.475882 kubelet[2496]: I0123 17:57:34.475849 2496 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:57:34.475943 kubelet[2496]: I0123 17:57:34.475896 2496 server.go:1287] "Started kubelet" Jan 23 17:57:34.477402 kubelet[2496]: I0123 17:57:34.477300 2496 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:57:34.480074 kubelet[2496]: I0123 17:57:34.480013 2496 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:57:34.481808 kubelet[2496]: E0123 17:57:34.480685 2496 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-3-a-575e6c418a\" not found" Jan 23 17:57:34.481808 kubelet[2496]: I0123 17:57:34.480742 2496 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:57:34.481808 kubelet[2496]: I0123 17:57:34.480978 2496 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:57:34.481808 kubelet[2496]: I0123 17:57:34.480982 2496 server.go:479] "Adding debug handlers to kubelet server" Jan 23 17:57:34.481808 kubelet[2496]: I0123 17:57:34.481023 2496 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:57:34.481808 kubelet[2496]: E0123 17:57:34.481138 2496 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-a-575e6c418a.188d6de3f5d53a03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-a-575e6c418a,UID:ci-4459-2-3-a-575e6c418a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-a-575e6c418a,},FirstTimestamp:2026-01-23 17:57:34.475868675 +0000 UTC m=+1.258450007,LastTimestamp:2026-01-23 17:57:34.475868675 +0000 UTC m=+1.258450007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-a-575e6c418a,}" Jan 23 17:57:34.481808 kubelet[2496]: I0123 17:57:34.481626 2496 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:57:34.481808 kubelet[2496]: I0123 17:57:34.481629 2496 factory.go:221] Registration of the systemd container factory successfully Jan 23 17:57:34.482083 kubelet[2496]: I0123 17:57:34.482025 2496 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:57:34.482248 kubelet[2496]: E0123 17:57:34.482170 2496 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-a-575e6c418a?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Jan 23 17:57:34.482368 kubelet[2496]: I0123 17:57:34.482352 2496 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:57:34.482439 kubelet[2496]: I0123 17:57:34.482407 2496 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:57:34.482473 kubelet[2496]: E0123 17:57:34.482433 2496 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:57:34.484697 kubelet[2496]: I0123 17:57:34.484647 2496 factory.go:221] Registration of the containerd container factory successfully Jan 23 17:57:34.484971 kubelet[2496]: W0123 17:57:34.484926 2496 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 23 17:57:34.485077 kubelet[2496]: E0123 17:57:34.485057 2496 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:34.499860 kubelet[2496]: I0123 17:57:34.499786 2496 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 17:57:34.501896 kubelet[2496]: I0123 17:57:34.501256 2496 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 17:57:34.501896 kubelet[2496]: I0123 17:57:34.501291 2496 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 17:57:34.501896 kubelet[2496]: I0123 17:57:34.501316 2496 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 17:57:34.501896 kubelet[2496]: I0123 17:57:34.501323 2496 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 17:57:34.501896 kubelet[2496]: E0123 17:57:34.501363 2496 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:57:34.502153 kubelet[2496]: W0123 17:57:34.502134 2496 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 23 17:57:34.502202 kubelet[2496]: E0123 17:57:34.502169 2496 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:57:34.503687 kubelet[2496]: I0123 17:57:34.503660 2496 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:57:34.503687 kubelet[2496]: I0123 17:57:34.503680 2496 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:57:34.503808 kubelet[2496]: I0123 17:57:34.503700 2496 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:34.507821 kubelet[2496]: I0123 17:57:34.507761 2496 policy_none.go:49] "None policy: Start" Jan 23 17:57:34.507821 kubelet[2496]: I0123 17:57:34.507800 2496 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:57:34.507821 kubelet[2496]: I0123 17:57:34.507823 2496 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:57:34.512862 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:57:34.529290 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 17:57:34.533006 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 17:57:34.541453 kubelet[2496]: I0123 17:57:34.540950 2496 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 17:57:34.541453 kubelet[2496]: I0123 17:57:34.541171 2496 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:57:34.541453 kubelet[2496]: I0123 17:57:34.541181 2496 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:57:34.541453 kubelet[2496]: I0123 17:57:34.541441 2496 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:57:34.542505 kubelet[2496]: E0123 17:57:34.542479 2496 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:57:34.542599 kubelet[2496]: E0123 17:57:34.542534 2496 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-3-a-575e6c418a\" not found" Jan 23 17:57:34.610699 systemd[1]: Created slice kubepods-burstable-pod84ab83549237a695c4f6b13b0f94860d.slice - libcontainer container kubepods-burstable-pod84ab83549237a695c4f6b13b0f94860d.slice. 
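The kubepods-burstable-pod84ab83549237a695c4f6b13b0f94860d.slice unit created above follows the systemd cgroup driver's naming scheme: the QoS class, the literal "pod", then the pod UID with any dashes turned into underscores (static-pod hashes like the one here contain no dashes). A small Go sketch of that convention, offered as an illustration rather than the kubelet's own helper:

package main

import (
	"fmt"
	"strings"
)

// podSliceName sketches how slice names such as
// kubepods-burstable-pod84ab83549237a695c4f6b13b0f94860d.slice are derived
// for burstable/besteffort pods under the systemd cgroup driver.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "84ab83549237a695c4f6b13b0f94860d"))
}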
Jan 23 17:57:34.621772 kubelet[2496]: E0123 17:57:34.621712 2496 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-a-575e6c418a\" not found" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.623409 systemd[1]: Created slice kubepods-burstable-pod022da0636165b93d75a7f74c0de4041b.slice - libcontainer container kubepods-burstable-pod022da0636165b93d75a7f74c0de4041b.slice. Jan 23 17:57:34.631452 kubelet[2496]: E0123 17:57:34.631406 2496 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-a-575e6c418a\" not found" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.633756 systemd[1]: Created slice kubepods-burstable-pod71321f1ec46c45c21935cacc5cbfd824.slice - libcontainer container kubepods-burstable-pod71321f1ec46c45c21935cacc5cbfd824.slice. Jan 23 17:57:34.635936 kubelet[2496]: E0123 17:57:34.635713 2496 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-a-575e6c418a\" not found" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.643119 kubelet[2496]: I0123 17:57:34.643087 2496 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.643624 kubelet[2496]: E0123 17:57:34.643586 2496 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683464 kubelet[2496]: I0123 17:57:34.683207 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84ab83549237a695c4f6b13b0f94860d-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-a-575e6c418a\" (UID: \"84ab83549237a695c4f6b13b0f94860d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683464 kubelet[2496]: I0123 17:57:34.683254 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84ab83549237a695c4f6b13b0f94860d-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-a-575e6c418a\" (UID: \"84ab83549237a695c4f6b13b0f94860d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683464 kubelet[2496]: I0123 17:57:34.683272 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683464 kubelet[2496]: I0123 17:57:34.683301 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683464 kubelet[2496]: I0123 17:57:34.683319 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71321f1ec46c45c21935cacc5cbfd824-kubeconfig\") pod 
\"kube-scheduler-ci-4459-2-3-a-575e6c418a\" (UID: \"71321f1ec46c45c21935cacc5cbfd824\") " pod="kube-system/kube-scheduler-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683712 kubelet[2496]: I0123 17:57:34.683334 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84ab83549237a695c4f6b13b0f94860d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-a-575e6c418a\" (UID: \"84ab83549237a695c4f6b13b0f94860d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683712 kubelet[2496]: I0123 17:57:34.683348 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683712 kubelet[2496]: I0123 17:57:34.683363 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683712 kubelet[2496]: I0123 17:57:34.683392 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.683712 kubelet[2496]: E0123 17:57:34.683383 2496 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-a-575e6c418a?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Jan 23 17:57:34.845734 kubelet[2496]: I0123 17:57:34.845682 2496 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.846049 kubelet[2496]: E0123 17:57:34.846018 2496 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:34.923974 containerd[1632]: time="2026-01-23T17:57:34.923443812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-a-575e6c418a,Uid:84ab83549237a695c4f6b13b0f94860d,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:34.933260 containerd[1632]: time="2026-01-23T17:57:34.933168316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-a-575e6c418a,Uid:022da0636165b93d75a7f74c0de4041b,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:34.936864 containerd[1632]: time="2026-01-23T17:57:34.936824005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-a-575e6c418a,Uid:71321f1ec46c45c21935cacc5cbfd824,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:34.946590 containerd[1632]: time="2026-01-23T17:57:34.946547349Z" level=info msg="connecting to shim 
bb76bc480409341f7ec20222635dcdc4cad4da1e0f324e85699dd2bfd04295fc" address="unix:///run/containerd/s/cbb48ddbdaf9bcc6cf335c13a0e5a0772ec7f46cc967dd4e3b2713064ee0e68c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:34.967485 systemd[1]: Started cri-containerd-bb76bc480409341f7ec20222635dcdc4cad4da1e0f324e85699dd2bfd04295fc.scope - libcontainer container bb76bc480409341f7ec20222635dcdc4cad4da1e0f324e85699dd2bfd04295fc. Jan 23 17:57:34.975550 containerd[1632]: time="2026-01-23T17:57:34.975486780Z" level=info msg="connecting to shim 6d00336dfbdaa6459835b9dd356fab7b7e086429d1b7fe3a36cc207ef69ad9ce" address="unix:///run/containerd/s/6fe6947255243be60798dff1f7ee60c23a9726892f84da22197ea33410739ef8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:34.978525 containerd[1632]: time="2026-01-23T17:57:34.978473867Z" level=info msg="connecting to shim 2bf052853e6b843a481301343bc7f6de5f205feb73b167d19a83dd78ca200cbd" address="unix:///run/containerd/s/9ba3784b833b13a75bb4781ea567ae0690572369a63ddf2973d36a0d3144b43e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:34.997096 systemd[1]: Started cri-containerd-6d00336dfbdaa6459835b9dd356fab7b7e086429d1b7fe3a36cc207ef69ad9ce.scope - libcontainer container 6d00336dfbdaa6459835b9dd356fab7b7e086429d1b7fe3a36cc207ef69ad9ce. Jan 23 17:57:35.002669 systemd[1]: Started cri-containerd-2bf052853e6b843a481301343bc7f6de5f205feb73b167d19a83dd78ca200cbd.scope - libcontainer container 2bf052853e6b843a481301343bc7f6de5f205feb73b167d19a83dd78ca200cbd. Jan 23 17:57:35.023459 containerd[1632]: time="2026-01-23T17:57:35.023389057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-a-575e6c418a,Uid:84ab83549237a695c4f6b13b0f94860d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb76bc480409341f7ec20222635dcdc4cad4da1e0f324e85699dd2bfd04295fc\"" Jan 23 17:57:35.027046 containerd[1632]: time="2026-01-23T17:57:35.026690426Z" level=info msg="CreateContainer within sandbox \"bb76bc480409341f7ec20222635dcdc4cad4da1e0f324e85699dd2bfd04295fc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:57:35.040272 containerd[1632]: time="2026-01-23T17:57:35.040236579Z" level=info msg="Container fb3a4ea61ffdbd161919a957b84a59c0707f5606a72b8c01383c8a78298baf3c: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:35.044836 containerd[1632]: time="2026-01-23T17:57:35.044787790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-a-575e6c418a,Uid:022da0636165b93d75a7f74c0de4041b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bf052853e6b843a481301343bc7f6de5f205feb73b167d19a83dd78ca200cbd\"" Jan 23 17:57:35.047984 containerd[1632]: time="2026-01-23T17:57:35.047941798Z" level=info msg="CreateContainer within sandbox \"2bf052853e6b843a481301343bc7f6de5f205feb73b167d19a83dd78ca200cbd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:57:35.048985 containerd[1632]: time="2026-01-23T17:57:35.048951400Z" level=info msg="CreateContainer within sandbox \"bb76bc480409341f7ec20222635dcdc4cad4da1e0f324e85699dd2bfd04295fc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb3a4ea61ffdbd161919a957b84a59c0707f5606a72b8c01383c8a78298baf3c\"" Jan 23 17:57:35.049686 containerd[1632]: time="2026-01-23T17:57:35.049644042Z" level=info msg="StartContainer for \"fb3a4ea61ffdbd161919a957b84a59c0707f5606a72b8c01383c8a78298baf3c\"" Jan 23 17:57:35.052039 containerd[1632]: time="2026-01-23T17:57:35.052002928Z" 
level=info msg="connecting to shim fb3a4ea61ffdbd161919a957b84a59c0707f5606a72b8c01383c8a78298baf3c" address="unix:///run/containerd/s/cbb48ddbdaf9bcc6cf335c13a0e5a0772ec7f46cc967dd4e3b2713064ee0e68c" protocol=ttrpc version=3 Jan 23 17:57:35.058468 containerd[1632]: time="2026-01-23T17:57:35.058429063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-a-575e6c418a,Uid:71321f1ec46c45c21935cacc5cbfd824,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d00336dfbdaa6459835b9dd356fab7b7e086429d1b7fe3a36cc207ef69ad9ce\"" Jan 23 17:57:35.061549 containerd[1632]: time="2026-01-23T17:57:35.061511391Z" level=info msg="CreateContainer within sandbox \"6d00336dfbdaa6459835b9dd356fab7b7e086429d1b7fe3a36cc207ef69ad9ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:57:35.062933 containerd[1632]: time="2026-01-23T17:57:35.061962312Z" level=info msg="Container a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:35.075040 containerd[1632]: time="2026-01-23T17:57:35.074992584Z" level=info msg="CreateContainer within sandbox \"2bf052853e6b843a481301343bc7f6de5f205feb73b167d19a83dd78ca200cbd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528\"" Jan 23 17:57:35.075526 containerd[1632]: time="2026-01-23T17:57:35.075479665Z" level=info msg="StartContainer for \"a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528\"" Jan 23 17:57:35.075817 containerd[1632]: time="2026-01-23T17:57:35.075784506Z" level=info msg="Container 97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:35.076095 systemd[1]: Started cri-containerd-fb3a4ea61ffdbd161919a957b84a59c0707f5606a72b8c01383c8a78298baf3c.scope - libcontainer container fb3a4ea61ffdbd161919a957b84a59c0707f5606a72b8c01383c8a78298baf3c. 
Jan 23 17:57:35.076626 containerd[1632]: time="2026-01-23T17:57:35.076596548Z" level=info msg="connecting to shim a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528" address="unix:///run/containerd/s/9ba3784b833b13a75bb4781ea567ae0690572369a63ddf2973d36a0d3144b43e" protocol=ttrpc version=3 Jan 23 17:57:35.084112 kubelet[2496]: E0123 17:57:35.084066 2496 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-a-575e6c418a?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Jan 23 17:57:35.088754 containerd[1632]: time="2026-01-23T17:57:35.088708378Z" level=info msg="CreateContainer within sandbox \"6d00336dfbdaa6459835b9dd356fab7b7e086429d1b7fe3a36cc207ef69ad9ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6\"" Jan 23 17:57:35.089369 containerd[1632]: time="2026-01-23T17:57:35.089203419Z" level=info msg="StartContainer for \"97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6\"" Jan 23 17:57:35.090403 containerd[1632]: time="2026-01-23T17:57:35.090376382Z" level=info msg="connecting to shim 97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6" address="unix:///run/containerd/s/6fe6947255243be60798dff1f7ee60c23a9726892f84da22197ea33410739ef8" protocol=ttrpc version=3 Jan 23 17:57:35.098122 systemd[1]: Started cri-containerd-a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528.scope - libcontainer container a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528. Jan 23 17:57:35.111195 systemd[1]: Started cri-containerd-97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6.scope - libcontainer container 97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6. 
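Each "connecting to shim ... address=unix:///run/containerd/s/<id>" line above is containerd opening a ttrpc connection to the per-sandbox shim socket before starting the container. A hedged sketch that only checks such a socket is accepting connections (the ttrpc framing itself is omitted), using an address copied from the log:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// Address copied from one of the "connecting to shim" log lines above.
	addr := "unix:///run/containerd/s/cbb48ddbdaf9bcc6cf335c13a0e5a0772ec7f46cc967dd4e3b2713064ee0e68c"
	path := strings.TrimPrefix(addr, "unix://")

	conn, err := net.Dial("unix", path)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("shim socket is accepting connections at", path)
}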
Jan 23 17:57:35.137842 containerd[1632]: time="2026-01-23T17:57:35.137590618Z" level=info msg="StartContainer for \"fb3a4ea61ffdbd161919a957b84a59c0707f5606a72b8c01383c8a78298baf3c\" returns successfully" Jan 23 17:57:35.150660 containerd[1632]: time="2026-01-23T17:57:35.150596049Z" level=info msg="StartContainer for \"a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528\" returns successfully" Jan 23 17:57:35.161218 containerd[1632]: time="2026-01-23T17:57:35.160911115Z" level=info msg="StartContainer for \"97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6\" returns successfully" Jan 23 17:57:35.248497 kubelet[2496]: I0123 17:57:35.248462 2496 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:35.511762 kubelet[2496]: E0123 17:57:35.511673 2496 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-a-575e6c418a\" not found" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:35.516573 kubelet[2496]: E0123 17:57:35.516544 2496 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-a-575e6c418a\" not found" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:35.517581 kubelet[2496]: E0123 17:57:35.517563 2496 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-a-575e6c418a\" not found" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:36.519838 kubelet[2496]: E0123 17:57:36.519672 2496 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-a-575e6c418a\" not found" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:36.520162 kubelet[2496]: E0123 17:57:36.519895 2496 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-a-575e6c418a\" not found" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:36.836158 kubelet[2496]: E0123 17:57:36.836056 2496 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-3-a-575e6c418a\" not found" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:36.992588 kubelet[2496]: I0123 17:57:36.992542 2496 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:36.992588 kubelet[2496]: E0123 17:57:36.992591 2496 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-3-a-575e6c418a\": node \"ci-4459-2-3-a-575e6c418a\" not found" Jan 23 17:57:37.082586 kubelet[2496]: I0123 17:57:37.082505 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:37.089848 kubelet[2496]: E0123 17:57:37.089800 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:37.089848 kubelet[2496]: I0123 17:57:37.089840 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:37.091531 kubelet[2496]: E0123 17:57:37.091504 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-a-575e6c418a\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:37.091531 kubelet[2496]: I0123 17:57:37.091532 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:37.093365 kubelet[2496]: E0123 17:57:37.093331 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-a-575e6c418a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:37.471012 kubelet[2496]: I0123 17:57:37.470890 2496 apiserver.go:52] "Watching apiserver" Jan 23 17:57:37.481792 kubelet[2496]: I0123 17:57:37.481753 2496 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:57:37.507047 kubelet[2496]: I0123 17:57:37.506999 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:37.509103 kubelet[2496]: E0123 17:57:37.509064 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:37.519527 kubelet[2496]: I0123 17:57:37.519475 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:37.521895 kubelet[2496]: E0123 17:57:37.521735 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-a-575e6c418a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:39.121963 systemd[1]: Reload requested from client PID 2779 ('systemctl') (unit session-9.scope)... Jan 23 17:57:39.121979 systemd[1]: Reloading... Jan 23 17:57:39.195930 zram_generator::config[2824]: No configuration found. Jan 23 17:57:39.377965 systemd[1]: Reloading finished in 255 ms. Jan 23 17:57:39.400476 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:39.412286 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:57:39.413944 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:39.414009 systemd[1]: kubelet.service: Consumed 1.167s CPU time, 130.9M memory peak. Jan 23 17:57:39.416201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:40.428910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:40.433853 (kubelet)[2867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:57:40.486773 kubelet[2867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:57:40.486773 kubelet[2867]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:57:40.486773 kubelet[2867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 17:57:40.487131 kubelet[2867]: I0123 17:57:40.486809 2867 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:57:40.493873 kubelet[2867]: I0123 17:57:40.493819 2867 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 17:57:40.493873 kubelet[2867]: I0123 17:57:40.493853 2867 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:57:40.494467 kubelet[2867]: I0123 17:57:40.494316 2867 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 17:57:40.496954 kubelet[2867]: I0123 17:57:40.496922 2867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 17:57:40.499602 kubelet[2867]: I0123 17:57:40.499577 2867 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:57:40.505792 kubelet[2867]: I0123 17:57:40.505759 2867 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:57:40.509139 kubelet[2867]: I0123 17:57:40.509111 2867 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 17:57:40.509465 kubelet[2867]: I0123 17:57:40.509414 2867 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:57:40.509635 kubelet[2867]: I0123 17:57:40.509468 2867 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-a-575e6c418a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:57:40.509717 kubelet[2867]: I0123 17:57:40.509647 2867 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:57:40.509717 kubelet[2867]: I0123 17:57:40.509655 2867 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 17:57:40.509717 kubelet[2867]: I0123 17:57:40.509697 2867 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:40.509858 
kubelet[2867]: I0123 17:57:40.509845 2867 kubelet.go:446] "Attempting to sync node with API server" Jan 23 17:57:40.509891 kubelet[2867]: I0123 17:57:40.509860 2867 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:57:40.509891 kubelet[2867]: I0123 17:57:40.509880 2867 kubelet.go:352] "Adding apiserver pod source" Jan 23 17:57:40.509891 kubelet[2867]: I0123 17:57:40.509889 2867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:57:40.511859 kubelet[2867]: I0123 17:57:40.511827 2867 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:57:40.512469 kubelet[2867]: I0123 17:57:40.512457 2867 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 17:57:40.515348 kubelet[2867]: I0123 17:57:40.515288 2867 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:57:40.515348 kubelet[2867]: I0123 17:57:40.515324 2867 server.go:1287] "Started kubelet" Jan 23 17:57:40.515437 kubelet[2867]: I0123 17:57:40.515391 2867 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:57:40.516360 kubelet[2867]: I0123 17:57:40.515779 2867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:57:40.519945 kubelet[2867]: I0123 17:57:40.518826 2867 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:57:40.519945 kubelet[2867]: I0123 17:57:40.516267 2867 server.go:479] "Adding debug handlers to kubelet server" Jan 23 17:57:40.521662 kubelet[2867]: E0123 17:57:40.521614 2867 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:57:40.521783 kubelet[2867]: I0123 17:57:40.521759 2867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:57:40.527143 kubelet[2867]: I0123 17:57:40.526666 2867 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:57:40.529514 kubelet[2867]: I0123 17:57:40.529486 2867 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:57:40.531301 kubelet[2867]: E0123 17:57:40.531196 2867 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-3-a-575e6c418a\" not found" Jan 23 17:57:40.532027 kubelet[2867]: I0123 17:57:40.532007 2867 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:57:40.533402 kubelet[2867]: I0123 17:57:40.533376 2867 factory.go:221] Registration of the systemd container factory successfully Jan 23 17:57:40.536092 kubelet[2867]: I0123 17:57:40.536035 2867 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:57:40.536282 kubelet[2867]: I0123 17:57:40.533480 2867 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:57:40.538173 kubelet[2867]: I0123 17:57:40.538125 2867 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 23 17:57:40.539295 kubelet[2867]: I0123 17:57:40.539254 2867 factory.go:221] Registration of the containerd container factory successfully Jan 23 17:57:40.540618 kubelet[2867]: I0123 17:57:40.540587 2867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 17:57:40.541000 kubelet[2867]: I0123 17:57:40.540984 2867 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 17:57:40.541101 kubelet[2867]: I0123 17:57:40.541088 2867 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:57:40.541152 kubelet[2867]: I0123 17:57:40.541143 2867 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 17:57:40.541252 kubelet[2867]: E0123 17:57:40.541227 2867 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:57:40.570398 kubelet[2867]: I0123 17:57:40.570355 2867 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:57:40.570398 kubelet[2867]: I0123 17:57:40.570381 2867 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:57:40.570398 kubelet[2867]: I0123 17:57:40.570405 2867 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:57:40.571527 kubelet[2867]: I0123 17:57:40.570612 2867 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:57:40.571527 kubelet[2867]: I0123 17:57:40.570630 2867 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:57:40.571527 kubelet[2867]: I0123 17:57:40.570651 2867 policy_none.go:49] "None policy: Start" Jan 23 17:57:40.571527 kubelet[2867]: I0123 17:57:40.570665 2867 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:57:40.571527 kubelet[2867]: I0123 17:57:40.570675 2867 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:57:40.571527 kubelet[2867]: I0123 17:57:40.570792 2867 state_mem.go:75] "Updated machine memory state" Jan 23 17:57:40.575081 kubelet[2867]: I0123 17:57:40.575057 2867 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 17:57:40.575350 kubelet[2867]: I0123 17:57:40.575333 2867 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:57:40.575537 kubelet[2867]: I0123 17:57:40.575505 2867 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:57:40.576023 kubelet[2867]: I0123 17:57:40.576004 2867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:57:40.578028 kubelet[2867]: E0123 17:57:40.578000 2867 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 17:57:40.642712 kubelet[2867]: I0123 17:57:40.642574 2867 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.642957 kubelet[2867]: I0123 17:57:40.642681 2867 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.643094 kubelet[2867]: I0123 17:57:40.642672 2867 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.679063 kubelet[2867]: I0123 17:57:40.678950 2867 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.687474 kubelet[2867]: I0123 17:57:40.687349 2867 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.687474 kubelet[2867]: I0123 17:57:40.687438 2867 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.736957 kubelet[2867]: I0123 17:57:40.736920 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84ab83549237a695c4f6b13b0f94860d-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-a-575e6c418a\" (UID: \"84ab83549237a695c4f6b13b0f94860d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.737312 kubelet[2867]: I0123 17:57:40.737115 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.737312 kubelet[2867]: I0123 17:57:40.737147 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.737312 kubelet[2867]: I0123 17:57:40.737170 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.737312 kubelet[2867]: I0123 17:57:40.737191 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84ab83549237a695c4f6b13b0f94860d-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-a-575e6c418a\" (UID: \"84ab83549237a695c4f6b13b0f94860d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.737312 kubelet[2867]: I0123 17:57:40.737229 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84ab83549237a695c4f6b13b0f94860d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-a-575e6c418a\" (UID: 
\"84ab83549237a695c4f6b13b0f94860d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.737446 kubelet[2867]: I0123 17:57:40.737246 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.737446 kubelet[2867]: I0123 17:57:40.737262 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/022da0636165b93d75a7f74c0de4041b-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-a-575e6c418a\" (UID: \"022da0636165b93d75a7f74c0de4041b\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.737446 kubelet[2867]: I0123 17:57:40.737278 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71321f1ec46c45c21935cacc5cbfd824-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-a-575e6c418a\" (UID: \"71321f1ec46c45c21935cacc5cbfd824\") " pod="kube-system/kube-scheduler-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:40.773106 sudo[2901]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 17:57:40.773378 sudo[2901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 17:57:41.104790 sudo[2901]: pam_unix(sudo:session): session closed for user root Jan 23 17:57:41.512019 kubelet[2867]: I0123 17:57:41.511764 2867 apiserver.go:52] "Watching apiserver" Jan 23 17:57:41.533385 kubelet[2867]: I0123 17:57:41.533327 2867 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:57:41.554357 kubelet[2867]: I0123 17:57:41.554317 2867 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:41.562041 kubelet[2867]: E0123 17:57:41.561977 2867 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-a-575e6c418a\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-3-a-575e6c418a" Jan 23 17:57:41.581651 kubelet[2867]: I0123 17:57:41.581550 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-3-a-575e6c418a" podStartSLOduration=1.581533021 podStartE2EDuration="1.581533021s" podCreationTimestamp="2026-01-23 17:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:41.571291636 +0000 UTC m=+1.133451301" watchObservedRunningTime="2026-01-23 17:57:41.581533021 +0000 UTC m=+1.143692726" Jan 23 17:57:41.581821 kubelet[2867]: I0123 17:57:41.581686 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-3-a-575e6c418a" podStartSLOduration=1.581679941 podStartE2EDuration="1.581679941s" podCreationTimestamp="2026-01-23 17:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:41.580590498 +0000 UTC m=+1.142750203" watchObservedRunningTime="2026-01-23 17:57:41.581679941 +0000 UTC m=+1.143839606" Jan 23 17:57:41.600792 
kubelet[2867]: I0123 17:57:41.600735 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-3-a-575e6c418a" podStartSLOduration=1.600717388 podStartE2EDuration="1.600717388s" podCreationTimestamp="2026-01-23 17:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:41.590582323 +0000 UTC m=+1.152742068" watchObservedRunningTime="2026-01-23 17:57:41.600717388 +0000 UTC m=+1.162877093" Jan 23 17:57:43.223668 sudo[1906]: pam_unix(sudo:session): session closed for user root Jan 23 17:57:43.320521 sshd[1905]: Connection closed by 4.153.228.146 port 55746 Jan 23 17:57:43.320999 sshd-session[1902]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:43.324731 systemd[1]: sshd@8-10.0.0.108:22-4.153.228.146:55746.service: Deactivated successfully. Jan 23 17:57:43.326438 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 17:57:43.326610 systemd[1]: session-9.scope: Consumed 6.863s CPU time, 260.9M memory peak. Jan 23 17:57:43.327542 systemd-logind[1612]: Session 9 logged out. Waiting for processes to exit. Jan 23 17:57:43.328534 systemd-logind[1612]: Removed session 9. Jan 23 17:57:43.787877 kubelet[2867]: I0123 17:57:43.787847 2867 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 17:57:43.788687 containerd[1632]: time="2026-01-23T17:57:43.788545354Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 17:57:43.789015 kubelet[2867]: I0123 17:57:43.788769 2867 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 17:57:44.758327 kubelet[2867]: W0123 17:57:44.758186 2867 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4459-2-3-a-575e6c418a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-2-3-a-575e6c418a' and this object Jan 23 17:57:44.758327 kubelet[2867]: E0123 17:57:44.758228 2867 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459-2-3-a-575e6c418a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-3-a-575e6c418a' and this object" logger="UnhandledError" Jan 23 17:57:44.758327 kubelet[2867]: I0123 17:57:44.758239 2867 status_manager.go:890] "Failed to get status for pod" podUID="a7bbbc73-4101-4c7e-b35c-97159681ecb0" pod="kube-system/kube-proxy-hjp5p" err="pods \"kube-proxy-hjp5p\" is forbidden: User \"system:node:ci-4459-2-3-a-575e6c418a\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-3-a-575e6c418a' and this object" Jan 23 17:57:44.759311 kubelet[2867]: W0123 17:57:44.758288 2867 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4459-2-3-a-575e6c418a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-2-3-a-575e6c418a' and this object Jan 23 17:57:44.759311 
kubelet[2867]: E0123 17:57:44.758391 2867 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4459-2-3-a-575e6c418a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-2-3-a-575e6c418a' and this object" logger="UnhandledError" Jan 23 17:57:44.765068 systemd[1]: Created slice kubepods-besteffort-poda7bbbc73_4101_4c7e_b35c_97159681ecb0.slice - libcontainer container kubepods-besteffort-poda7bbbc73_4101_4c7e_b35c_97159681ecb0.slice. Jan 23 17:57:44.778269 systemd[1]: Created slice kubepods-burstable-pod57180e6e_8cec_4f96_8655_ff94dd6f5fc5.slice - libcontainer container kubepods-burstable-pod57180e6e_8cec_4f96_8655_ff94dd6f5fc5.slice. Jan 23 17:57:44.830203 systemd[1]: Created slice kubepods-besteffort-pod8ca3711e_8d49_4a07_9947_8c219e121534.slice - libcontainer container kubepods-besteffort-pod8ca3711e_8d49_4a07_9947_8c219e121534.slice. Jan 23 17:57:44.865291 kubelet[2867]: I0123 17:57:44.865186 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-hubble-tls\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.865291 kubelet[2867]: I0123 17:57:44.865257 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-bpf-maps\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.865291 kubelet[2867]: I0123 17:57:44.865295 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-xtables-lock\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866020 kubelet[2867]: I0123 17:57:44.865348 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-host-proc-sys-net\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866020 kubelet[2867]: I0123 17:57:44.865397 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b62q4\" (UniqueName: \"kubernetes.io/projected/8ca3711e-8d49-4a07-9947-8c219e121534-kube-api-access-b62q4\") pod \"cilium-operator-6c4d7847fc-ccbrx\" (UID: \"8ca3711e-8d49-4a07-9947-8c219e121534\") " pod="kube-system/cilium-operator-6c4d7847fc-ccbrx" Jan 23 17:57:44.866020 kubelet[2867]: I0123 17:57:44.865447 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-hostproc\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866020 kubelet[2867]: I0123 17:57:44.865490 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cni-path\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866020 kubelet[2867]: I0123 17:57:44.865534 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ca3711e-8d49-4a07-9947-8c219e121534-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ccbrx\" (UID: \"8ca3711e-8d49-4a07-9947-8c219e121534\") " pod="kube-system/cilium-operator-6c4d7847fc-ccbrx" Jan 23 17:57:44.866124 kubelet[2867]: I0123 17:57:44.865577 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhxb\" (UniqueName: \"kubernetes.io/projected/a7bbbc73-4101-4c7e-b35c-97159681ecb0-kube-api-access-wmhxb\") pod \"kube-proxy-hjp5p\" (UID: \"a7bbbc73-4101-4c7e-b35c-97159681ecb0\") " pod="kube-system/kube-proxy-hjp5p" Jan 23 17:57:44.866124 kubelet[2867]: I0123 17:57:44.865594 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-etc-cni-netd\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866124 kubelet[2867]: I0123 17:57:44.865609 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-clustermesh-secrets\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866124 kubelet[2867]: I0123 17:57:44.865623 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-run\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866124 kubelet[2867]: I0123 17:57:44.865635 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7bbbc73-4101-4c7e-b35c-97159681ecb0-kube-proxy\") pod \"kube-proxy-hjp5p\" (UID: \"a7bbbc73-4101-4c7e-b35c-97159681ecb0\") " pod="kube-system/kube-proxy-hjp5p" Jan 23 17:57:44.866216 kubelet[2867]: I0123 17:57:44.865649 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7bbbc73-4101-4c7e-b35c-97159681ecb0-xtables-lock\") pod \"kube-proxy-hjp5p\" (UID: \"a7bbbc73-4101-4c7e-b35c-97159681ecb0\") " pod="kube-system/kube-proxy-hjp5p" Jan 23 17:57:44.866216 kubelet[2867]: I0123 17:57:44.865665 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-config-path\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866216 kubelet[2867]: I0123 17:57:44.865680 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7bbbc73-4101-4c7e-b35c-97159681ecb0-lib-modules\") pod \"kube-proxy-hjp5p\" (UID: 
\"a7bbbc73-4101-4c7e-b35c-97159681ecb0\") " pod="kube-system/kube-proxy-hjp5p" Jan 23 17:57:44.866216 kubelet[2867]: I0123 17:57:44.865698 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-cgroup\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866216 kubelet[2867]: I0123 17:57:44.865717 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-lib-modules\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866310 kubelet[2867]: I0123 17:57:44.865730 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-host-proc-sys-kernel\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:44.866310 kubelet[2867]: I0123 17:57:44.865744 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkmjm\" (UniqueName: \"kubernetes.io/projected/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-kube-api-access-dkmjm\") pod \"cilium-px255\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " pod="kube-system/cilium-px255" Jan 23 17:57:45.735667 containerd[1632]: time="2026-01-23T17:57:45.735583610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ccbrx,Uid:8ca3711e-8d49-4a07-9947-8c219e121534,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:45.755464 containerd[1632]: time="2026-01-23T17:57:45.755395578Z" level=info msg="connecting to shim 3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f" address="unix:///run/containerd/s/26af77344ec7aff9e2ac1da87a8bf229099fbee34ad058e07250966b631b5fa6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:45.779274 systemd[1]: Started cri-containerd-3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f.scope - libcontainer container 3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f. Jan 23 17:57:45.809446 containerd[1632]: time="2026-01-23T17:57:45.809402591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ccbrx,Uid:8ca3711e-8d49-4a07-9947-8c219e121534,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\"" Jan 23 17:57:45.811584 containerd[1632]: time="2026-01-23T17:57:45.811507716Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 17:57:45.967745 kubelet[2867]: E0123 17:57:45.967701 2867 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 23 17:57:45.968144 kubelet[2867]: E0123 17:57:45.967796 2867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7bbbc73-4101-4c7e-b35c-97159681ecb0-kube-proxy podName:a7bbbc73-4101-4c7e-b35c-97159681ecb0 nodeName:}" failed. No retries permitted until 2026-01-23 17:57:46.467770459 +0000 UTC m=+6.029930164 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a7bbbc73-4101-4c7e-b35c-97159681ecb0-kube-proxy") pod "kube-proxy-hjp5p" (UID: "a7bbbc73-4101-4c7e-b35c-97159681ecb0") : failed to sync configmap cache: timed out waiting for the condition Jan 23 17:57:45.983596 containerd[1632]: time="2026-01-23T17:57:45.983544858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-px255,Uid:57180e6e-8cec-4f96-8655-ff94dd6f5fc5,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:46.002266 containerd[1632]: time="2026-01-23T17:57:46.002110224Z" level=info msg="connecting to shim 61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb" address="unix:///run/containerd/s/9d99813d6de82c90604d49dc694204dae258d5ef1dde5b3eeee750440e44d441" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:46.029103 systemd[1]: Started cri-containerd-61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb.scope - libcontainer container 61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb. Jan 23 17:57:46.052522 containerd[1632]: time="2026-01-23T17:57:46.052456587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-px255,Uid:57180e6e-8cec-4f96-8655-ff94dd6f5fc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\"" Jan 23 17:57:46.575614 containerd[1632]: time="2026-01-23T17:57:46.575570310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjp5p,Uid:a7bbbc73-4101-4c7e-b35c-97159681ecb0,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:46.590666 containerd[1632]: time="2026-01-23T17:57:46.590339586Z" level=info msg="connecting to shim 3eaa703b715a94f85b6f756126e360e074315a794113c927f810fcd7eb0be73b" address="unix:///run/containerd/s/08f9c0160a6e813b874104a76f89c5f22962ba4ccc8f5a971c11fa67ef30ea1e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:46.614083 systemd[1]: Started cri-containerd-3eaa703b715a94f85b6f756126e360e074315a794113c927f810fcd7eb0be73b.scope - libcontainer container 3eaa703b715a94f85b6f756126e360e074315a794113c927f810fcd7eb0be73b. 
Jan 23 17:57:46.635231 containerd[1632]: time="2026-01-23T17:57:46.635181976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjp5p,Uid:a7bbbc73-4101-4c7e-b35c-97159681ecb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eaa703b715a94f85b6f756126e360e074315a794113c927f810fcd7eb0be73b\"" Jan 23 17:57:46.637888 containerd[1632]: time="2026-01-23T17:57:46.637855143Z" level=info msg="CreateContainer within sandbox \"3eaa703b715a94f85b6f756126e360e074315a794113c927f810fcd7eb0be73b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 17:57:46.647713 containerd[1632]: time="2026-01-23T17:57:46.646817165Z" level=info msg="Container 50471ba9027d924e351df1df79497d70d4ff0a850048a7247c32f91fd0a92b15: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:46.655624 containerd[1632]: time="2026-01-23T17:57:46.655580826Z" level=info msg="CreateContainer within sandbox \"3eaa703b715a94f85b6f756126e360e074315a794113c927f810fcd7eb0be73b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50471ba9027d924e351df1df79497d70d4ff0a850048a7247c32f91fd0a92b15\"" Jan 23 17:57:46.656334 containerd[1632]: time="2026-01-23T17:57:46.656308068Z" level=info msg="StartContainer for \"50471ba9027d924e351df1df79497d70d4ff0a850048a7247c32f91fd0a92b15\"" Jan 23 17:57:46.657827 containerd[1632]: time="2026-01-23T17:57:46.657794272Z" level=info msg="connecting to shim 50471ba9027d924e351df1df79497d70d4ff0a850048a7247c32f91fd0a92b15" address="unix:///run/containerd/s/08f9c0160a6e813b874104a76f89c5f22962ba4ccc8f5a971c11fa67ef30ea1e" protocol=ttrpc version=3 Jan 23 17:57:46.677328 systemd[1]: Started cri-containerd-50471ba9027d924e351df1df79497d70d4ff0a850048a7247c32f91fd0a92b15.scope - libcontainer container 50471ba9027d924e351df1df79497d70d4ff0a850048a7247c32f91fd0a92b15. Jan 23 17:57:46.738254 containerd[1632]: time="2026-01-23T17:57:46.738132109Z" level=info msg="StartContainer for \"50471ba9027d924e351df1df79497d70d4ff0a850048a7247c32f91fd0a92b15\" returns successfully" Jan 23 17:57:46.999499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039674114.mount: Deactivated successfully. Jan 23 17:57:47.289954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598683103.mount: Deactivated successfully. 
Jan 23 17:57:47.525754 containerd[1632]: time="2026-01-23T17:57:47.525700281Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:47.527401 containerd[1632]: time="2026-01-23T17:57:47.527367045Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 17:57:47.528394 containerd[1632]: time="2026-01-23T17:57:47.528363407Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:47.530242 containerd[1632]: time="2026-01-23T17:57:47.530127092Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.718573256s" Jan 23 17:57:47.530242 containerd[1632]: time="2026-01-23T17:57:47.530159572Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 17:57:47.531175 containerd[1632]: time="2026-01-23T17:57:47.531152494Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 17:57:47.533059 containerd[1632]: time="2026-01-23T17:57:47.532894018Z" level=info msg="CreateContainer within sandbox \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 17:57:47.541254 containerd[1632]: time="2026-01-23T17:57:47.540546957Z" level=info msg="Container a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:47.548362 containerd[1632]: time="2026-01-23T17:57:47.548301336Z" level=info msg="CreateContainer within sandbox \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\"" Jan 23 17:57:47.548919 containerd[1632]: time="2026-01-23T17:57:47.548823377Z" level=info msg="StartContainer for \"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\"" Jan 23 17:57:47.549691 containerd[1632]: time="2026-01-23T17:57:47.549661419Z" level=info msg="connecting to shim a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59" address="unix:///run/containerd/s/26af77344ec7aff9e2ac1da87a8bf229099fbee34ad058e07250966b631b5fa6" protocol=ttrpc version=3 Jan 23 17:57:47.570082 systemd[1]: Started cri-containerd-a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59.scope - libcontainer container a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59. 
Jan 23 17:57:47.581785 kubelet[2867]: I0123 17:57:47.581522 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hjp5p" podStartSLOduration=3.581504698 podStartE2EDuration="3.581504698s" podCreationTimestamp="2026-01-23 17:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:47.580280055 +0000 UTC m=+7.142439760" watchObservedRunningTime="2026-01-23 17:57:47.581504698 +0000 UTC m=+7.143664403" Jan 23 17:57:47.600429 containerd[1632]: time="2026-01-23T17:57:47.600389064Z" level=info msg="StartContainer for \"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\" returns successfully" Jan 23 17:57:51.370469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717823044.mount: Deactivated successfully. Jan 23 17:57:52.354016 containerd[1632]: time="2026-01-23T17:57:52.353709003Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:52.355633 containerd[1632]: time="2026-01-23T17:57:52.355601968Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 23 17:57:52.356964 containerd[1632]: time="2026-01-23T17:57:52.356938491Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:52.358666 containerd[1632]: time="2026-01-23T17:57:52.358263094Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.8270112s" Jan 23 17:57:52.358666 containerd[1632]: time="2026-01-23T17:57:52.358305174Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 23 17:57:52.361544 containerd[1632]: time="2026-01-23T17:57:52.361155221Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 17:57:52.369703 containerd[1632]: time="2026-01-23T17:57:52.369595202Z" level=info msg="Container ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:52.376972 containerd[1632]: time="2026-01-23T17:57:52.376887260Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36\"" Jan 23 17:57:52.378982 containerd[1632]: time="2026-01-23T17:57:52.378892785Z" level=info msg="StartContainer for \"ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36\"" Jan 23 17:57:52.379977 containerd[1632]: time="2026-01-23T17:57:52.379940947Z" level=info msg="connecting to shim 
ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36" address="unix:///run/containerd/s/9d99813d6de82c90604d49dc694204dae258d5ef1dde5b3eeee750440e44d441" protocol=ttrpc version=3 Jan 23 17:57:52.403280 systemd[1]: Started cri-containerd-ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36.scope - libcontainer container ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36. Jan 23 17:57:52.428623 containerd[1632]: time="2026-01-23T17:57:52.428530586Z" level=info msg="StartContainer for \"ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36\" returns successfully" Jan 23 17:57:52.441123 systemd[1]: cri-containerd-ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36.scope: Deactivated successfully. Jan 23 17:57:52.445036 containerd[1632]: time="2026-01-23T17:57:52.444992387Z" level=info msg="received container exit event container_id:\"ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36\" id:\"ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36\" pid:3336 exited_at:{seconds:1769191072 nanos:444660026}" Jan 23 17:57:52.462294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36-rootfs.mount: Deactivated successfully. Jan 23 17:57:52.600592 kubelet[2867]: I0123 17:57:52.600528 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ccbrx" podStartSLOduration=6.880625149 podStartE2EDuration="8.600511408s" podCreationTimestamp="2026-01-23 17:57:44 +0000 UTC" firstStartedPulling="2026-01-23 17:57:45.811148755 +0000 UTC m=+5.373308460" lastFinishedPulling="2026-01-23 17:57:47.531035014 +0000 UTC m=+7.093194719" observedRunningTime="2026-01-23 17:57:48.597262749 +0000 UTC m=+8.159422494" watchObservedRunningTime="2026-01-23 17:57:52.600511408 +0000 UTC m=+12.162671073" Jan 23 17:57:59.603186 containerd[1632]: time="2026-01-23T17:57:59.603143385Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 17:57:59.614749 containerd[1632]: time="2026-01-23T17:57:59.614615253Z" level=info msg="Container fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:59.622814 containerd[1632]: time="2026-01-23T17:57:59.622701713Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd\"" Jan 23 17:57:59.623382 containerd[1632]: time="2026-01-23T17:57:59.623356274Z" level=info msg="StartContainer for \"fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd\"" Jan 23 17:57:59.624324 containerd[1632]: time="2026-01-23T17:57:59.624300556Z" level=info msg="connecting to shim fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd" address="unix:///run/containerd/s/9d99813d6de82c90604d49dc694204dae258d5ef1dde5b3eeee750440e44d441" protocol=ttrpc version=3 Jan 23 17:57:59.653278 systemd[1]: Started cri-containerd-fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd.scope - libcontainer container fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd. 
Jan 23 17:57:59.678939 containerd[1632]: time="2026-01-23T17:57:59.678865970Z" level=info msg="StartContainer for \"fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd\" returns successfully" Jan 23 17:57:59.689988 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:57:59.690569 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:57:59.690936 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:57:59.692188 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:57:59.693606 containerd[1632]: time="2026-01-23T17:57:59.693487726Z" level=info msg="received container exit event container_id:\"fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd\" id:\"fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd\" pid:3383 exited_at:{seconds:1769191079 nanos:693279206}" Jan 23 17:57:59.694043 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 17:57:59.694576 systemd[1]: cri-containerd-fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd.scope: Deactivated successfully. Jan 23 17:57:59.712966 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:58:00.603341 containerd[1632]: time="2026-01-23T17:58:00.603290038Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 17:58:00.614110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd-rootfs.mount: Deactivated successfully. Jan 23 17:58:00.622573 containerd[1632]: time="2026-01-23T17:58:00.622324565Z" level=info msg="Container 510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:00.625611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4150771656.mount: Deactivated successfully. Jan 23 17:58:00.633046 containerd[1632]: time="2026-01-23T17:58:00.632999311Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7\"" Jan 23 17:58:00.633767 containerd[1632]: time="2026-01-23T17:58:00.633727353Z" level=info msg="StartContainer for \"510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7\"" Jan 23 17:58:00.635725 containerd[1632]: time="2026-01-23T17:58:00.635610117Z" level=info msg="connecting to shim 510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7" address="unix:///run/containerd/s/9d99813d6de82c90604d49dc694204dae258d5ef1dde5b3eeee750440e44d441" protocol=ttrpc version=3 Jan 23 17:58:00.656081 systemd[1]: Started cri-containerd-510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7.scope - libcontainer container 510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7. Jan 23 17:58:00.728078 containerd[1632]: time="2026-01-23T17:58:00.728008064Z" level=info msg="StartContainer for \"510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7\" returns successfully" Jan 23 17:58:00.728512 systemd[1]: cri-containerd-510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7.scope: Deactivated successfully. 
Jan 23 17:58:00.731176 containerd[1632]: time="2026-01-23T17:58:00.731146992Z" level=info msg="received container exit event container_id:\"510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7\" id:\"510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7\" pid:3432 exited_at:{seconds:1769191080 nanos:730894271}" Jan 23 17:58:01.609508 containerd[1632]: time="2026-01-23T17:58:01.609448306Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 17:58:01.614525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7-rootfs.mount: Deactivated successfully. Jan 23 17:58:01.622579 containerd[1632]: time="2026-01-23T17:58:01.621987497Z" level=info msg="Container edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:01.623182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075373521.mount: Deactivated successfully. Jan 23 17:58:01.631625 containerd[1632]: time="2026-01-23T17:58:01.631557000Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332\"" Jan 23 17:58:01.632147 containerd[1632]: time="2026-01-23T17:58:01.632113442Z" level=info msg="StartContainer for \"edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332\"" Jan 23 17:58:01.632939 containerd[1632]: time="2026-01-23T17:58:01.632884963Z" level=info msg="connecting to shim edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332" address="unix:///run/containerd/s/9d99813d6de82c90604d49dc694204dae258d5ef1dde5b3eeee750440e44d441" protocol=ttrpc version=3 Jan 23 17:58:01.652091 systemd[1]: Started cri-containerd-edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332.scope - libcontainer container edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332. Jan 23 17:58:01.673861 systemd[1]: cri-containerd-edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332.scope: Deactivated successfully. Jan 23 17:58:01.676115 containerd[1632]: time="2026-01-23T17:58:01.676064229Z" level=info msg="received container exit event container_id:\"edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332\" id:\"edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332\" pid:3469 exited_at:{seconds:1769191081 nanos:674681786}" Jan 23 17:58:01.683531 containerd[1632]: time="2026-01-23T17:58:01.683495168Z" level=info msg="StartContainer for \"edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332\" returns successfully" Jan 23 17:58:01.694953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332-rootfs.mount: Deactivated successfully. 
Jan 23 17:58:02.615961 containerd[1632]: time="2026-01-23T17:58:02.615205213Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 17:58:02.627820 containerd[1632]: time="2026-01-23T17:58:02.627764964Z" level=info msg="Container bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:02.629181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092947605.mount: Deactivated successfully. Jan 23 17:58:02.637082 containerd[1632]: time="2026-01-23T17:58:02.637034387Z" level=info msg="CreateContainer within sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\"" Jan 23 17:58:02.637787 containerd[1632]: time="2026-01-23T17:58:02.637751948Z" level=info msg="StartContainer for \"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\"" Jan 23 17:58:02.638923 containerd[1632]: time="2026-01-23T17:58:02.638859831Z" level=info msg="connecting to shim bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa" address="unix:///run/containerd/s/9d99813d6de82c90604d49dc694204dae258d5ef1dde5b3eeee750440e44d441" protocol=ttrpc version=3 Jan 23 17:58:02.657072 systemd[1]: Started cri-containerd-bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa.scope - libcontainer container bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa. Jan 23 17:58:02.703688 containerd[1632]: time="2026-01-23T17:58:02.703648830Z" level=info msg="StartContainer for \"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\" returns successfully" Jan 23 17:58:02.821001 kubelet[2867]: I0123 17:58:02.820000 2867 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 17:58:02.853224 systemd[1]: Created slice kubepods-burstable-podd0aebcfe_b570_4511_859b_cb26300b46de.slice - libcontainer container kubepods-burstable-podd0aebcfe_b570_4511_859b_cb26300b46de.slice. Jan 23 17:58:02.859646 systemd[1]: Created slice kubepods-burstable-podf3db4a9d_f754_4b77_8ad2_b1eb9faed73a.slice - libcontainer container kubepods-burstable-podf3db4a9d_f754_4b77_8ad2_b1eb9faed73a.slice. 
Jan 23 17:58:02.887995 kubelet[2867]: I0123 17:58:02.887833 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3db4a9d-f754-4b77-8ad2-b1eb9faed73a-config-volume\") pod \"coredns-668d6bf9bc-qp8dr\" (UID: \"f3db4a9d-f754-4b77-8ad2-b1eb9faed73a\") " pod="kube-system/coredns-668d6bf9bc-qp8dr" Jan 23 17:58:02.887995 kubelet[2867]: I0123 17:58:02.887882 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0aebcfe-b570-4511-859b-cb26300b46de-config-volume\") pod \"coredns-668d6bf9bc-6k9xf\" (UID: \"d0aebcfe-b570-4511-859b-cb26300b46de\") " pod="kube-system/coredns-668d6bf9bc-6k9xf" Jan 23 17:58:02.888320 kubelet[2867]: I0123 17:58:02.888219 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbkq\" (UniqueName: \"kubernetes.io/projected/d0aebcfe-b570-4511-859b-cb26300b46de-kube-api-access-7hbkq\") pod \"coredns-668d6bf9bc-6k9xf\" (UID: \"d0aebcfe-b570-4511-859b-cb26300b46de\") " pod="kube-system/coredns-668d6bf9bc-6k9xf" Jan 23 17:58:02.888320 kubelet[2867]: I0123 17:58:02.888276 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqf9h\" (UniqueName: \"kubernetes.io/projected/f3db4a9d-f754-4b77-8ad2-b1eb9faed73a-kube-api-access-tqf9h\") pod \"coredns-668d6bf9bc-qp8dr\" (UID: \"f3db4a9d-f754-4b77-8ad2-b1eb9faed73a\") " pod="kube-system/coredns-668d6bf9bc-qp8dr" Jan 23 17:58:03.158748 containerd[1632]: time="2026-01-23T17:58:03.158106345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6k9xf,Uid:d0aebcfe-b570-4511-859b-cb26300b46de,Namespace:kube-system,Attempt:0,}" Jan 23 17:58:03.162919 containerd[1632]: time="2026-01-23T17:58:03.162863036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qp8dr,Uid:f3db4a9d-f754-4b77-8ad2-b1eb9faed73a,Namespace:kube-system,Attempt:0,}" Jan 23 17:58:03.633468 kubelet[2867]: I0123 17:58:03.633337 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-px255" podStartSLOduration=13.327687404 podStartE2EDuration="19.633319111s" podCreationTimestamp="2026-01-23 17:57:44 +0000 UTC" firstStartedPulling="2026-01-23 17:57:46.053412109 +0000 UTC m=+5.615571774" lastFinishedPulling="2026-01-23 17:57:52.359043776 +0000 UTC m=+11.921203481" observedRunningTime="2026-01-23 17:58:03.63315991 +0000 UTC m=+23.195319615" watchObservedRunningTime="2026-01-23 17:58:03.633319111 +0000 UTC m=+23.195478816" Jan 23 17:58:04.748119 systemd-networkd[1442]: cilium_host: Link UP Jan 23 17:58:04.748233 systemd-networkd[1442]: cilium_net: Link UP Jan 23 17:58:04.748343 systemd-networkd[1442]: cilium_net: Gained carrier Jan 23 17:58:04.748441 systemd-networkd[1442]: cilium_host: Gained carrier Jan 23 17:58:04.832278 systemd-networkd[1442]: cilium_vxlan: Link UP Jan 23 17:58:04.832285 systemd-networkd[1442]: cilium_vxlan: Gained carrier Jan 23 17:58:05.107965 kernel: NET: Registered PF_ALG protocol family Jan 23 17:58:05.313996 systemd-networkd[1442]: cilium_net: Gained IPv6LL Jan 23 17:58:05.504011 systemd-networkd[1442]: cilium_host: Gained IPv6LL Jan 23 17:58:05.680645 systemd-networkd[1442]: lxc_health: Link UP Jan 23 17:58:05.690169 systemd-networkd[1442]: lxc_health: Gained carrier Jan 23 17:58:05.952063 systemd-networkd[1442]: cilium_vxlan: Gained 
IPv6LL Jan 23 17:58:06.208025 kernel: eth0: renamed from tmpc1722 Jan 23 17:58:06.208945 kernel: eth0: renamed from tmpe637a Jan 23 17:58:06.210288 systemd-networkd[1442]: lxc2f49d1c40a51: Link UP Jan 23 17:58:06.216082 systemd-networkd[1442]: lxc984e187e3b05: Link UP Jan 23 17:58:06.216431 systemd-networkd[1442]: lxc2f49d1c40a51: Gained carrier Jan 23 17:58:06.216562 systemd-networkd[1442]: lxc984e187e3b05: Gained carrier Jan 23 17:58:07.106026 systemd-networkd[1442]: lxc_health: Gained IPv6LL Jan 23 17:58:08.064194 systemd-networkd[1442]: lxc2f49d1c40a51: Gained IPv6LL Jan 23 17:58:08.064447 systemd-networkd[1442]: lxc984e187e3b05: Gained IPv6LL Jan 23 17:58:09.732471 containerd[1632]: time="2026-01-23T17:58:09.732413870Z" level=info msg="connecting to shim e637a7e8de2c98a9fdcb73fbce97b1bf1b529a659f466bef7f24cedd5b9c026d" address="unix:///run/containerd/s/fe78efd5173caafbb264cb66a576ffcb3b23845a312408baea4461498744a948" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:09.742925 containerd[1632]: time="2026-01-23T17:58:09.742701015Z" level=info msg="connecting to shim c1722aeb95dd0d6318df1e7d383472beef5f6d0ae94b9668569a3e4237b3d815" address="unix:///run/containerd/s/5d827f732f26323ae0bcde33f0a15bc2d72326d8289f5f4d6f5071bae05c51c1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:09.761083 systemd[1]: Started cri-containerd-e637a7e8de2c98a9fdcb73fbce97b1bf1b529a659f466bef7f24cedd5b9c026d.scope - libcontainer container e637a7e8de2c98a9fdcb73fbce97b1bf1b529a659f466bef7f24cedd5b9c026d. Jan 23 17:58:09.765247 systemd[1]: Started cri-containerd-c1722aeb95dd0d6318df1e7d383472beef5f6d0ae94b9668569a3e4237b3d815.scope - libcontainer container c1722aeb95dd0d6318df1e7d383472beef5f6d0ae94b9668569a3e4237b3d815. Jan 23 17:58:09.795679 containerd[1632]: time="2026-01-23T17:58:09.795635425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qp8dr,Uid:f3db4a9d-f754-4b77-8ad2-b1eb9faed73a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e637a7e8de2c98a9fdcb73fbce97b1bf1b529a659f466bef7f24cedd5b9c026d\"" Jan 23 17:58:09.798532 containerd[1632]: time="2026-01-23T17:58:09.798457272Z" level=info msg="CreateContainer within sandbox \"e637a7e8de2c98a9fdcb73fbce97b1bf1b529a659f466bef7f24cedd5b9c026d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:58:09.805087 containerd[1632]: time="2026-01-23T17:58:09.805042248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6k9xf,Uid:d0aebcfe-b570-4511-859b-cb26300b46de,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1722aeb95dd0d6318df1e7d383472beef5f6d0ae94b9668569a3e4237b3d815\"" Jan 23 17:58:09.810466 containerd[1632]: time="2026-01-23T17:58:09.810338021Z" level=info msg="CreateContainer within sandbox \"c1722aeb95dd0d6318df1e7d383472beef5f6d0ae94b9668569a3e4237b3d815\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:58:09.813047 containerd[1632]: time="2026-01-23T17:58:09.813000588Z" level=info msg="Container a4ed0955de096ca579b62a321e4859535ab675b6f0825b448a8973c01fafe3bd: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:09.821787 containerd[1632]: time="2026-01-23T17:58:09.821734329Z" level=info msg="CreateContainer within sandbox \"e637a7e8de2c98a9fdcb73fbce97b1bf1b529a659f466bef7f24cedd5b9c026d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4ed0955de096ca579b62a321e4859535ab675b6f0825b448a8973c01fafe3bd\"" Jan 23 17:58:09.822370 containerd[1632]: time="2026-01-23T17:58:09.822274531Z" level=info 
msg="StartContainer for \"a4ed0955de096ca579b62a321e4859535ab675b6f0825b448a8973c01fafe3bd\"" Jan 23 17:58:09.824128 containerd[1632]: time="2026-01-23T17:58:09.824097415Z" level=info msg="connecting to shim a4ed0955de096ca579b62a321e4859535ab675b6f0825b448a8973c01fafe3bd" address="unix:///run/containerd/s/fe78efd5173caafbb264cb66a576ffcb3b23845a312408baea4461498744a948" protocol=ttrpc version=3 Jan 23 17:58:09.827399 containerd[1632]: time="2026-01-23T17:58:09.827359143Z" level=info msg="Container d9f6b642ceec70ae58185e6ee8d56f96f6ca563f9ef29d9d8056ba9182b3a88f: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:09.833962 containerd[1632]: time="2026-01-23T17:58:09.833893319Z" level=info msg="CreateContainer within sandbox \"c1722aeb95dd0d6318df1e7d383472beef5f6d0ae94b9668569a3e4237b3d815\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9f6b642ceec70ae58185e6ee8d56f96f6ca563f9ef29d9d8056ba9182b3a88f\"" Jan 23 17:58:09.835124 containerd[1632]: time="2026-01-23T17:58:09.835081082Z" level=info msg="StartContainer for \"d9f6b642ceec70ae58185e6ee8d56f96f6ca563f9ef29d9d8056ba9182b3a88f\"" Jan 23 17:58:09.837722 containerd[1632]: time="2026-01-23T17:58:09.837561128Z" level=info msg="connecting to shim d9f6b642ceec70ae58185e6ee8d56f96f6ca563f9ef29d9d8056ba9182b3a88f" address="unix:///run/containerd/s/5d827f732f26323ae0bcde33f0a15bc2d72326d8289f5f4d6f5071bae05c51c1" protocol=ttrpc version=3 Jan 23 17:58:09.848288 systemd[1]: Started cri-containerd-a4ed0955de096ca579b62a321e4859535ab675b6f0825b448a8973c01fafe3bd.scope - libcontainer container a4ed0955de096ca579b62a321e4859535ab675b6f0825b448a8973c01fafe3bd. Jan 23 17:58:09.861262 systemd[1]: Started cri-containerd-d9f6b642ceec70ae58185e6ee8d56f96f6ca563f9ef29d9d8056ba9182b3a88f.scope - libcontainer container d9f6b642ceec70ae58185e6ee8d56f96f6ca563f9ef29d9d8056ba9182b3a88f. 
Jan 23 17:58:09.882690 containerd[1632]: time="2026-01-23T17:58:09.882397798Z" level=info msg="StartContainer for \"a4ed0955de096ca579b62a321e4859535ab675b6f0825b448a8973c01fafe3bd\" returns successfully" Jan 23 17:58:09.895299 containerd[1632]: time="2026-01-23T17:58:09.895196989Z" level=info msg="StartContainer for \"d9f6b642ceec70ae58185e6ee8d56f96f6ca563f9ef29d9d8056ba9182b3a88f\" returns successfully" Jan 23 17:58:10.649924 kubelet[2867]: I0123 17:58:10.649563 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6k9xf" podStartSLOduration=26.64954884 podStartE2EDuration="26.64954884s" podCreationTimestamp="2026-01-23 17:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:10.648625637 +0000 UTC m=+30.210785342" watchObservedRunningTime="2026-01-23 17:58:10.64954884 +0000 UTC m=+30.211708545" Jan 23 17:58:10.675170 kubelet[2867]: I0123 17:58:10.675080 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qp8dr" podStartSLOduration=26.675064182 podStartE2EDuration="26.675064182s" podCreationTimestamp="2026-01-23 17:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:10.675040502 +0000 UTC m=+30.237200207" watchObservedRunningTime="2026-01-23 17:58:10.675064182 +0000 UTC m=+30.237223887" Jan 23 18:00:07.842542 systemd[1]: Started sshd@9-10.0.0.108:22-4.153.228.146:49686.service - OpenSSH per-connection server daemon (4.153.228.146:49686). Jan 23 18:00:08.447952 sshd[4208]: Accepted publickey for core from 4.153.228.146 port 49686 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:08.449318 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:08.453727 systemd-logind[1612]: New session 10 of user core. Jan 23 18:00:08.461227 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:00:08.946245 sshd[4211]: Connection closed by 4.153.228.146 port 49686 Jan 23 18:00:08.946602 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:08.950062 systemd[1]: sshd@9-10.0.0.108:22-4.153.228.146:49686.service: Deactivated successfully. Jan 23 18:00:08.951728 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:00:08.953835 systemd-logind[1612]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:00:08.955259 systemd-logind[1612]: Removed session 10. Jan 23 18:00:14.056696 systemd[1]: Started sshd@10-10.0.0.108:22-4.153.228.146:49696.service - OpenSSH per-connection server daemon (4.153.228.146:49696). Jan 23 18:00:14.674613 sshd[4228]: Accepted publickey for core from 4.153.228.146 port 49696 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:14.676060 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:14.679978 systemd-logind[1612]: New session 11 of user core. Jan 23 18:00:14.690248 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:00:15.164967 sshd[4231]: Connection closed by 4.153.228.146 port 49696 Jan 23 18:00:15.164980 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:15.169123 systemd[1]: sshd@10-10.0.0.108:22-4.153.228.146:49696.service: Deactivated successfully. 
Jan 23 18:00:15.171561 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:00:15.173986 systemd-logind[1612]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:00:15.175177 systemd-logind[1612]: Removed session 11. Jan 23 18:00:20.274579 systemd[1]: Started sshd@11-10.0.0.108:22-4.153.228.146:60914.service - OpenSSH per-connection server daemon (4.153.228.146:60914). Jan 23 18:00:20.887976 sshd[4247]: Accepted publickey for core from 4.153.228.146 port 60914 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:20.889215 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:20.893855 systemd-logind[1612]: New session 12 of user core. Jan 23 18:00:20.899058 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:00:21.372675 sshd[4250]: Connection closed by 4.153.228.146 port 60914 Jan 23 18:00:21.372029 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:21.375934 systemd[1]: sshd@11-10.0.0.108:22-4.153.228.146:60914.service: Deactivated successfully. Jan 23 18:00:21.377547 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:00:21.380234 systemd-logind[1612]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:00:21.381564 systemd-logind[1612]: Removed session 12. Jan 23 18:00:26.481606 systemd[1]: Started sshd@12-10.0.0.108:22-4.153.228.146:49568.service - OpenSSH per-connection server daemon (4.153.228.146:49568). Jan 23 18:00:27.094991 sshd[4264]: Accepted publickey for core from 4.153.228.146 port 49568 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:27.096352 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:27.100621 systemd-logind[1612]: New session 13 of user core. Jan 23 18:00:27.116107 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 18:00:27.575306 sshd[4267]: Connection closed by 4.153.228.146 port 49568 Jan 23 18:00:27.575613 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:27.579712 systemd[1]: sshd@12-10.0.0.108:22-4.153.228.146:49568.service: Deactivated successfully. Jan 23 18:00:27.581621 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:00:27.582482 systemd-logind[1612]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:00:27.584025 systemd-logind[1612]: Removed session 13. Jan 23 18:00:32.685590 systemd[1]: Started sshd@13-10.0.0.108:22-4.153.228.146:49580.service - OpenSSH per-connection server daemon (4.153.228.146:49580). Jan 23 18:00:33.284338 sshd[4282]: Accepted publickey for core from 4.153.228.146 port 49580 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:33.285680 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:33.289963 systemd-logind[1612]: New session 14 of user core. Jan 23 18:00:33.296260 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:00:33.766425 sshd[4285]: Connection closed by 4.153.228.146 port 49580 Jan 23 18:00:33.766780 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:33.771291 systemd[1]: sshd@13-10.0.0.108:22-4.153.228.146:49580.service: Deactivated successfully. Jan 23 18:00:33.772948 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:00:33.773718 systemd-logind[1612]: Session 14 logged out. Waiting for processes to exit. 
Jan 23 18:00:33.774688 systemd-logind[1612]: Removed session 14. Jan 23 18:00:38.882378 systemd[1]: Started sshd@14-10.0.0.108:22-4.153.228.146:38896.service - OpenSSH per-connection server daemon (4.153.228.146:38896). Jan 23 18:00:39.500053 sshd[4299]: Accepted publickey for core from 4.153.228.146 port 38896 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:39.501353 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:39.505205 systemd-logind[1612]: New session 15 of user core. Jan 23 18:00:39.517236 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:00:39.995381 sshd[4302]: Connection closed by 4.153.228.146 port 38896 Jan 23 18:00:39.996151 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:39.999943 systemd[1]: sshd@14-10.0.0.108:22-4.153.228.146:38896.service: Deactivated successfully. Jan 23 18:00:40.001610 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:00:40.002398 systemd-logind[1612]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:00:40.003455 systemd-logind[1612]: Removed session 15. Jan 23 18:00:40.101355 systemd[1]: Started sshd@15-10.0.0.108:22-4.153.228.146:38912.service - OpenSSH per-connection server daemon (4.153.228.146:38912). Jan 23 18:00:40.714600 sshd[4317]: Accepted publickey for core from 4.153.228.146 port 38912 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:40.715805 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:40.719677 systemd-logind[1612]: New session 16 of user core. Jan 23 18:00:40.728193 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 18:00:41.234144 sshd[4322]: Connection closed by 4.153.228.146 port 38912 Jan 23 18:00:41.234527 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:41.238132 systemd[1]: sshd@15-10.0.0.108:22-4.153.228.146:38912.service: Deactivated successfully. Jan 23 18:00:41.240043 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:00:41.240921 systemd-logind[1612]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:00:41.242082 systemd-logind[1612]: Removed session 16. Jan 23 18:00:41.346343 systemd[1]: Started sshd@16-10.0.0.108:22-4.153.228.146:38918.service - OpenSSH per-connection server daemon (4.153.228.146:38918). Jan 23 18:00:41.955661 sshd[4333]: Accepted publickey for core from 4.153.228.146 port 38918 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:41.957008 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:41.960753 systemd-logind[1612]: New session 17 of user core. Jan 23 18:00:41.967124 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 18:00:42.436679 sshd[4336]: Connection closed by 4.153.228.146 port 38918 Jan 23 18:00:42.437138 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:42.440816 systemd[1]: sshd@16-10.0.0.108:22-4.153.228.146:38918.service: Deactivated successfully. Jan 23 18:00:42.442541 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:00:42.443472 systemd-logind[1612]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:00:42.444775 systemd-logind[1612]: Removed session 17. 
Jan 23 18:00:47.546366 systemd[1]: Started sshd@17-10.0.0.108:22-4.153.228.146:48912.service - OpenSSH per-connection server daemon (4.153.228.146:48912). Jan 23 18:00:48.160260 sshd[4353]: Accepted publickey for core from 4.153.228.146 port 48912 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:48.161774 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:48.165969 systemd-logind[1612]: New session 18 of user core. Jan 23 18:00:48.172134 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 18:00:48.643887 sshd[4356]: Connection closed by 4.153.228.146 port 48912 Jan 23 18:00:48.644363 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:48.647742 systemd[1]: sshd@17-10.0.0.108:22-4.153.228.146:48912.service: Deactivated successfully. Jan 23 18:00:48.649407 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:00:48.650097 systemd-logind[1612]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:00:48.651679 systemd-logind[1612]: Removed session 18. Jan 23 18:00:48.750454 systemd[1]: Started sshd@18-10.0.0.108:22-4.153.228.146:48926.service - OpenSSH per-connection server daemon (4.153.228.146:48926). Jan 23 18:00:49.356226 sshd[4369]: Accepted publickey for core from 4.153.228.146 port 48926 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:49.357619 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:49.361649 systemd-logind[1612]: New session 19 of user core. Jan 23 18:00:49.368112 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 18:00:50.007459 sshd[4372]: Connection closed by 4.153.228.146 port 48926 Jan 23 18:00:50.007988 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:50.011460 systemd[1]: sshd@18-10.0.0.108:22-4.153.228.146:48926.service: Deactivated successfully. Jan 23 18:00:50.014259 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:00:50.015034 systemd-logind[1612]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:00:50.016236 systemd-logind[1612]: Removed session 19. Jan 23 18:00:50.120545 systemd[1]: Started sshd@19-10.0.0.108:22-4.153.228.146:48932.service - OpenSSH per-connection server daemon (4.153.228.146:48932). Jan 23 18:00:50.742016 sshd[4383]: Accepted publickey for core from 4.153.228.146 port 48932 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:50.743496 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:50.747512 systemd-logind[1612]: New session 20 of user core. Jan 23 18:00:50.758053 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 18:00:51.730070 sshd[4386]: Connection closed by 4.153.228.146 port 48932 Jan 23 18:00:51.730784 sshd-session[4383]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:51.735403 systemd[1]: sshd@19-10.0.0.108:22-4.153.228.146:48932.service: Deactivated successfully. Jan 23 18:00:51.737091 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 18:00:51.737881 systemd-logind[1612]: Session 20 logged out. Waiting for processes to exit. Jan 23 18:00:51.738873 systemd-logind[1612]: Removed session 20. Jan 23 18:00:51.843317 systemd[1]: Started sshd@20-10.0.0.108:22-4.153.228.146:48946.service - OpenSSH per-connection server daemon (4.153.228.146:48946). 
Jan 23 18:00:52.462154 sshd[4406]: Accepted publickey for core from 4.153.228.146 port 48946 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:52.463475 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:52.467996 systemd-logind[1612]: New session 21 of user core. Jan 23 18:00:52.480106 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 18:00:53.059758 sshd[4409]: Connection closed by 4.153.228.146 port 48946 Jan 23 18:00:53.060424 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:53.064059 systemd[1]: sshd@20-10.0.0.108:22-4.153.228.146:48946.service: Deactivated successfully. Jan 23 18:00:53.067483 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 18:00:53.068538 systemd-logind[1612]: Session 21 logged out. Waiting for processes to exit. Jan 23 18:00:53.070768 systemd-logind[1612]: Removed session 21. Jan 23 18:00:53.167002 systemd[1]: Started sshd@21-10.0.0.108:22-4.153.228.146:48952.service - OpenSSH per-connection server daemon (4.153.228.146:48952). Jan 23 18:00:53.781649 sshd[4421]: Accepted publickey for core from 4.153.228.146 port 48952 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:53.783006 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:53.787937 systemd-logind[1612]: New session 22 of user core. Jan 23 18:00:53.797357 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 18:00:54.266806 sshd[4424]: Connection closed by 4.153.228.146 port 48952 Jan 23 18:00:54.267334 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:54.270246 systemd[1]: sshd@21-10.0.0.108:22-4.153.228.146:48952.service: Deactivated successfully. Jan 23 18:00:54.272226 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 18:00:54.274878 systemd-logind[1612]: Session 22 logged out. Waiting for processes to exit. Jan 23 18:00:54.276458 systemd-logind[1612]: Removed session 22. Jan 23 18:00:59.376518 systemd[1]: Started sshd@22-10.0.0.108:22-4.153.228.146:58358.service - OpenSSH per-connection server daemon (4.153.228.146:58358). Jan 23 18:00:59.976006 sshd[4440]: Accepted publickey for core from 4.153.228.146 port 58358 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:00:59.977988 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:59.982224 systemd-logind[1612]: New session 23 of user core. Jan 23 18:00:59.993290 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 18:01:00.465023 sshd[4443]: Connection closed by 4.153.228.146 port 58358 Jan 23 18:01:00.465402 sshd-session[4440]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:00.469115 systemd[1]: sshd@22-10.0.0.108:22-4.153.228.146:58358.service: Deactivated successfully. Jan 23 18:01:00.472381 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 18:01:00.474162 systemd-logind[1612]: Session 23 logged out. Waiting for processes to exit. Jan 23 18:01:00.475822 systemd-logind[1612]: Removed session 23. Jan 23 18:01:05.578082 systemd[1]: Started sshd@23-10.0.0.108:22-4.153.228.146:41308.service - OpenSSH per-connection server daemon (4.153.228.146:41308). 
Jan 23 18:01:06.210056 sshd[4457]: Accepted publickey for core from 4.153.228.146 port 41308 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:01:06.211501 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:06.215446 systemd-logind[1612]: New session 24 of user core. Jan 23 18:01:06.226218 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 18:01:06.704586 sshd[4460]: Connection closed by 4.153.228.146 port 41308 Jan 23 18:01:06.705165 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:06.708715 systemd[1]: sshd@23-10.0.0.108:22-4.153.228.146:41308.service: Deactivated successfully. Jan 23 18:01:06.710328 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 18:01:06.711101 systemd-logind[1612]: Session 24 logged out. Waiting for processes to exit. Jan 23 18:01:06.712364 systemd-logind[1612]: Removed session 24. Jan 23 18:01:11.817413 systemd[1]: Started sshd@24-10.0.0.108:22-4.153.228.146:41314.service - OpenSSH per-connection server daemon (4.153.228.146:41314). Jan 23 18:01:12.459715 sshd[4473]: Accepted publickey for core from 4.153.228.146 port 41314 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:01:12.461087 sshd-session[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:12.464833 systemd-logind[1612]: New session 25 of user core. Jan 23 18:01:12.472231 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 18:01:12.954200 sshd[4476]: Connection closed by 4.153.228.146 port 41314 Jan 23 18:01:12.954657 sshd-session[4473]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:12.958363 systemd[1]: sshd@24-10.0.0.108:22-4.153.228.146:41314.service: Deactivated successfully. Jan 23 18:01:12.959965 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 18:01:12.960690 systemd-logind[1612]: Session 25 logged out. Waiting for processes to exit. Jan 23 18:01:12.961766 systemd-logind[1612]: Removed session 25. Jan 23 18:01:13.059519 systemd[1]: Started sshd@25-10.0.0.108:22-4.153.228.146:41320.service - OpenSSH per-connection server daemon (4.153.228.146:41320). Jan 23 18:01:13.665829 sshd[4489]: Accepted publickey for core from 4.153.228.146 port 41320 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:01:13.666629 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:13.670232 systemd-logind[1612]: New session 26 of user core. Jan 23 18:01:13.680065 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 18:01:15.849657 containerd[1632]: time="2026-01-23T18:01:15.849577816Z" level=info msg="StopContainer for \"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\" with timeout 30 (s)" Jan 23 18:01:15.851232 containerd[1632]: time="2026-01-23T18:01:15.851174540Z" level=info msg="Stop container \"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\" with signal terminated" Jan 23 18:01:15.861318 systemd[1]: cri-containerd-a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59.scope: Deactivated successfully. 
Jan 23 18:01:15.862827 containerd[1632]: time="2026-01-23T18:01:15.862792449Z" level=info msg="received container exit event container_id:\"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\" id:\"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\" pid:3272 exited_at:{seconds:1769191275 nanos:862493408}" Jan 23 18:01:15.874139 containerd[1632]: time="2026-01-23T18:01:15.874096158Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:01:15.881138 containerd[1632]: time="2026-01-23T18:01:15.881092935Z" level=info msg="StopContainer for \"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\" with timeout 2 (s)" Jan 23 18:01:15.881514 containerd[1632]: time="2026-01-23T18:01:15.881460616Z" level=info msg="Stop container \"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\" with signal terminated" Jan 23 18:01:15.887515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59-rootfs.mount: Deactivated successfully. Jan 23 18:01:15.890816 systemd-networkd[1442]: lxc_health: Link DOWN Jan 23 18:01:15.890824 systemd-networkd[1442]: lxc_health: Lost carrier Jan 23 18:01:15.920152 systemd[1]: cri-containerd-bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa.scope: Deactivated successfully. Jan 23 18:01:15.920869 systemd[1]: cri-containerd-bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa.scope: Consumed 6.647s CPU time, 132.6M memory peak, 120K read from disk, 12.9M written to disk. Jan 23 18:01:15.923184 containerd[1632]: time="2026-01-23T18:01:15.923143401Z" level=info msg="received container exit event container_id:\"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\" id:\"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\" pid:3506 exited_at:{seconds:1769191275 nanos:922711640}" Jan 23 18:01:15.938087 containerd[1632]: time="2026-01-23T18:01:15.938044879Z" level=info msg="StopContainer for \"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\" returns successfully" Jan 23 18:01:15.938887 containerd[1632]: time="2026-01-23T18:01:15.938852041Z" level=info msg="StopPodSandbox for \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\"" Jan 23 18:01:15.938985 containerd[1632]: time="2026-01-23T18:01:15.938946561Z" level=info msg="Container to stop \"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:01:15.945075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa-rootfs.mount: Deactivated successfully. Jan 23 18:01:15.950274 systemd[1]: cri-containerd-3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f.scope: Deactivated successfully. 
Jan 23 18:01:15.955615 containerd[1632]: time="2026-01-23T18:01:15.955562523Z" level=info msg="StopContainer for \"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\" returns successfully" Jan 23 18:01:15.956238 containerd[1632]: time="2026-01-23T18:01:15.956216285Z" level=info msg="StopPodSandbox for \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\"" Jan 23 18:01:15.956644 containerd[1632]: time="2026-01-23T18:01:15.956424285Z" level=info msg="Container to stop \"ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:01:15.956644 containerd[1632]: time="2026-01-23T18:01:15.956443485Z" level=info msg="Container to stop \"510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:01:15.956644 containerd[1632]: time="2026-01-23T18:01:15.956453165Z" level=info msg="Container to stop \"edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:01:15.956644 containerd[1632]: time="2026-01-23T18:01:15.956461725Z" level=info msg="Container to stop \"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:01:15.956644 containerd[1632]: time="2026-01-23T18:01:15.956469325Z" level=info msg="Container to stop \"fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 18:01:15.958417 containerd[1632]: time="2026-01-23T18:01:15.958377850Z" level=info msg="received sandbox exit event container_id:\"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" id:\"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" exit_status:137 exited_at:{seconds:1769191275 nanos:957186087}" monitor_name=podsandbox Jan 23 18:01:15.962750 systemd[1]: cri-containerd-61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb.scope: Deactivated successfully. Jan 23 18:01:15.969849 containerd[1632]: time="2026-01-23T18:01:15.969731279Z" level=info msg="received sandbox exit event container_id:\"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" id:\"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" exit_status:137 exited_at:{seconds:1769191275 nanos:969504398}" monitor_name=podsandbox Jan 23 18:01:15.979041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f-rootfs.mount: Deactivated successfully. Jan 23 18:01:15.985250 containerd[1632]: time="2026-01-23T18:01:15.985208878Z" level=info msg="shim disconnected" id=3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f namespace=k8s.io Jan 23 18:01:15.985416 containerd[1632]: time="2026-01-23T18:01:15.985255318Z" level=warning msg="cleaning up after shim disconnected" id=3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f namespace=k8s.io Jan 23 18:01:15.985416 containerd[1632]: time="2026-01-23T18:01:15.985285158Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 18:01:15.998233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb-rootfs.mount: Deactivated successfully. 
Jan 23 18:01:16.003044 containerd[1632]: time="2026-01-23T18:01:16.003001243Z" level=info msg="received sandbox container exit event sandbox_id:\"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" exit_status:137 exited_at:{seconds:1769191275 nanos:957186087}" monitor_name=criService Jan 23 18:01:16.005579 containerd[1632]: time="2026-01-23T18:01:16.003257123Z" level=info msg="TearDown network for sandbox \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" successfully" Jan 23 18:01:16.005579 containerd[1632]: time="2026-01-23T18:01:16.003277163Z" level=info msg="StopPodSandbox for \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" returns successfully" Jan 23 18:01:16.005278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f-shm.mount: Deactivated successfully. Jan 23 18:01:16.006275 containerd[1632]: time="2026-01-23T18:01:16.006242971Z" level=info msg="shim disconnected" id=61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb namespace=k8s.io Jan 23 18:01:16.006435 containerd[1632]: time="2026-01-23T18:01:16.006397971Z" level=warning msg="cleaning up after shim disconnected" id=61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb namespace=k8s.io Jan 23 18:01:16.006489 containerd[1632]: time="2026-01-23T18:01:16.006478011Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 18:01:16.018366 containerd[1632]: time="2026-01-23T18:01:16.018284681Z" level=info msg="received sandbox container exit event sandbox_id:\"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" exit_status:137 exited_at:{seconds:1769191275 nanos:969504398}" monitor_name=criService Jan 23 18:01:16.018656 containerd[1632]: time="2026-01-23T18:01:16.018632482Z" level=info msg="TearDown network for sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" successfully" Jan 23 18:01:16.018695 containerd[1632]: time="2026-01-23T18:01:16.018656522Z" level=info msg="StopPodSandbox for \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" returns successfully" Jan 23 18:01:16.127068 kubelet[2867]: I0123 18:01:16.126942 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-host-proc-sys-net\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127068 kubelet[2867]: I0123 18:01:16.126988 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-etc-cni-netd\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127068 kubelet[2867]: I0123 18:01:16.127020 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cni-path\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127068 kubelet[2867]: I0123 18:01:16.127036 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-lib-modules\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") 
" Jan 23 18:01:16.127068 kubelet[2867]: I0123 18:01:16.127052 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-bpf-maps\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127494 kubelet[2867]: I0123 18:01:16.127086 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-xtables-lock\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127494 kubelet[2867]: I0123 18:01:16.127098 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.127494 kubelet[2867]: I0123 18:01:16.127125 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.127494 kubelet[2867]: I0123 18:01:16.127105 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.127494 kubelet[2867]: I0123 18:01:16.127106 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b62q4\" (UniqueName: \"kubernetes.io/projected/8ca3711e-8d49-4a07-9947-8c219e121534-kube-api-access-b62q4\") pod \"8ca3711e-8d49-4a07-9947-8c219e121534\" (UID: \"8ca3711e-8d49-4a07-9947-8c219e121534\") " Jan 23 18:01:16.127601 kubelet[2867]: I0123 18:01:16.127163 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cni-path" (OuterVolumeSpecName: "cni-path") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.127601 kubelet[2867]: I0123 18:01:16.127170 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-cgroup\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127601 kubelet[2867]: I0123 18:01:16.127180 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.127601 kubelet[2867]: I0123 18:01:16.127189 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-run\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127601 kubelet[2867]: I0123 18:01:16.127195 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.127698 kubelet[2867]: I0123 18:01:16.127208 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.127698 kubelet[2867]: I0123 18:01:16.127208 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-config-path\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127698 kubelet[2867]: I0123 18:01:16.127222 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.127698 kubelet[2867]: I0123 18:01:16.127231 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkmjm\" (UniqueName: \"kubernetes.io/projected/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-kube-api-access-dkmjm\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127698 kubelet[2867]: I0123 18:01:16.127272 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-hubble-tls\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127796 kubelet[2867]: I0123 18:01:16.127290 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-hostproc\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127796 kubelet[2867]: I0123 18:01:16.127306 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-host-proc-sys-kernel\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127796 kubelet[2867]: I0123 18:01:16.127338 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ca3711e-8d49-4a07-9947-8c219e121534-cilium-config-path\") pod \"8ca3711e-8d49-4a07-9947-8c219e121534\" (UID: \"8ca3711e-8d49-4a07-9947-8c219e121534\") " Jan 23 18:01:16.127796 kubelet[2867]: I0123 18:01:16.127355 2867 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-clustermesh-secrets\") pod \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\" (UID: \"57180e6e-8cec-4f96-8655-ff94dd6f5fc5\") " Jan 23 18:01:16.127796 kubelet[2867]: I0123 18:01:16.127401 2867 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-etc-cni-netd\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.127796 kubelet[2867]: I0123 18:01:16.127413 2867 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cni-path\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.127796 kubelet[2867]: I0123 18:01:16.127425 2867 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-lib-modules\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.129445 kubelet[2867]: I0123 18:01:16.127434 2867 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-bpf-maps\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.129445 kubelet[2867]: I0123 18:01:16.127441 2867 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-xtables-lock\") on node 
\"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.129445 kubelet[2867]: I0123 18:01:16.127450 2867 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-cgroup\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.129445 kubelet[2867]: I0123 18:01:16.127457 2867 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-run\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.129445 kubelet[2867]: I0123 18:01:16.127473 2867 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-host-proc-sys-net\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.129445 kubelet[2867]: I0123 18:01:16.128990 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-hostproc" (OuterVolumeSpecName: "hostproc") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.130096 kubelet[2867]: I0123 18:01:16.129644 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 18:01:16.130297 kubelet[2867]: I0123 18:01:16.130268 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca3711e-8d49-4a07-9947-8c219e121534-kube-api-access-b62q4" (OuterVolumeSpecName: "kube-api-access-b62q4") pod "8ca3711e-8d49-4a07-9947-8c219e121534" (UID: "8ca3711e-8d49-4a07-9947-8c219e121534"). InnerVolumeSpecName "kube-api-access-b62q4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:01:16.130841 kubelet[2867]: I0123 18:01:16.130789 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:01:16.131406 kubelet[2867]: I0123 18:01:16.131365 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-kube-api-access-dkmjm" (OuterVolumeSpecName: "kube-api-access-dkmjm") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "kube-api-access-dkmjm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:01:16.131823 kubelet[2867]: I0123 18:01:16.131788 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:01:16.132018 kubelet[2867]: I0123 18:01:16.131990 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "57180e6e-8cec-4f96-8655-ff94dd6f5fc5" (UID: "57180e6e-8cec-4f96-8655-ff94dd6f5fc5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:01:16.132871 kubelet[2867]: I0123 18:01:16.132831 2867 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ca3711e-8d49-4a07-9947-8c219e121534-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ca3711e-8d49-4a07-9947-8c219e121534" (UID: "8ca3711e-8d49-4a07-9947-8c219e121534"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:01:16.228435 kubelet[2867]: I0123 18:01:16.228358 2867 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-host-proc-sys-kernel\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.228435 kubelet[2867]: I0123 18:01:16.228406 2867 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ca3711e-8d49-4a07-9947-8c219e121534-cilium-config-path\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.228435 kubelet[2867]: I0123 18:01:16.228419 2867 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-clustermesh-secrets\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.228435 kubelet[2867]: I0123 18:01:16.228429 2867 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b62q4\" (UniqueName: \"kubernetes.io/projected/8ca3711e-8d49-4a07-9947-8c219e121534-kube-api-access-b62q4\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.228435 kubelet[2867]: I0123 18:01:16.228439 2867 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dkmjm\" (UniqueName: \"kubernetes.io/projected/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-kube-api-access-dkmjm\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.228435 kubelet[2867]: I0123 18:01:16.228447 2867 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-hubble-tls\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.228435 kubelet[2867]: I0123 18:01:16.228456 2867 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-hostproc\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.228694 kubelet[2867]: I0123 18:01:16.228464 2867 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57180e6e-8cec-4f96-8655-ff94dd6f5fc5-cilium-config-path\") on node \"ci-4459-2-3-a-575e6c418a\" DevicePath \"\"" Jan 23 18:01:16.549670 systemd[1]: Removed slice kubepods-burstable-pod57180e6e_8cec_4f96_8655_ff94dd6f5fc5.slice - libcontainer container kubepods-burstable-pod57180e6e_8cec_4f96_8655_ff94dd6f5fc5.slice. 
Jan 23 18:01:16.549760 systemd[1]: kubepods-burstable-pod57180e6e_8cec_4f96_8655_ff94dd6f5fc5.slice: Consumed 6.734s CPU time, 133.1M memory peak, 120K read from disk, 12.9M written to disk. Jan 23 18:01:16.550969 systemd[1]: Removed slice kubepods-besteffort-pod8ca3711e_8d49_4a07_9947_8c219e121534.slice - libcontainer container kubepods-besteffort-pod8ca3711e_8d49_4a07_9947_8c219e121534.slice. Jan 23 18:01:16.887108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb-shm.mount: Deactivated successfully. Jan 23 18:01:16.887214 systemd[1]: var-lib-kubelet-pods-8ca3711e\x2d8d49\x2d4a07\x2d9947\x2d8c219e121534-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db62q4.mount: Deactivated successfully. Jan 23 18:01:16.887275 systemd[1]: var-lib-kubelet-pods-57180e6e\x2d8cec\x2d4f96\x2d8655\x2dff94dd6f5fc5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddkmjm.mount: Deactivated successfully. Jan 23 18:01:16.887326 systemd[1]: var-lib-kubelet-pods-57180e6e\x2d8cec\x2d4f96\x2d8655\x2dff94dd6f5fc5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 18:01:16.887372 systemd[1]: var-lib-kubelet-pods-57180e6e\x2d8cec\x2d4f96\x2d8655\x2dff94dd6f5fc5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 18:01:16.996977 kubelet[2867]: I0123 18:01:16.996866 2867 scope.go:117] "RemoveContainer" containerID="bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa" Jan 23 18:01:16.998867 containerd[1632]: time="2026-01-23T18:01:16.998835432Z" level=info msg="RemoveContainer for \"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\"" Jan 23 18:01:17.006793 containerd[1632]: time="2026-01-23T18:01:17.006612932Z" level=info msg="RemoveContainer for \"bb0caf245acf86b4cee10d4fb8ca37b4cfc717f59a6a0f92ef6fea22aa0587fa\" returns successfully" Jan 23 18:01:17.007182 kubelet[2867]: I0123 18:01:17.007154 2867 scope.go:117] "RemoveContainer" containerID="edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332" Jan 23 18:01:17.009203 containerd[1632]: time="2026-01-23T18:01:17.009134058Z" level=info msg="RemoveContainer for \"edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332\"" Jan 23 18:01:17.015190 containerd[1632]: time="2026-01-23T18:01:17.015140993Z" level=info msg="RemoveContainer for \"edf42aadfea0ca201f6146657306d333b0d34ba0c8bd29d83e50fe58b9e34332\" returns successfully" Jan 23 18:01:17.015431 kubelet[2867]: I0123 18:01:17.015407 2867 scope.go:117] "RemoveContainer" containerID="510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7" Jan 23 18:01:17.019909 containerd[1632]: time="2026-01-23T18:01:17.019857605Z" level=info msg="RemoveContainer for \"510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7\"" Jan 23 18:01:17.025545 containerd[1632]: time="2026-01-23T18:01:17.025468339Z" level=info msg="RemoveContainer for \"510378c1d3a4a13dc16fcf8f4ec0713c938edf8c07bee3b4e0d627cd02d25fc7\" returns successfully" Jan 23 18:01:17.025869 kubelet[2867]: I0123 18:01:17.025842 2867 scope.go:117] "RemoveContainer" containerID="fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd" Jan 23 18:01:17.028139 containerd[1632]: time="2026-01-23T18:01:17.028100506Z" level=info msg="RemoveContainer for \"fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd\"" Jan 23 18:01:17.033049 containerd[1632]: time="2026-01-23T18:01:17.032987838Z" level=info msg="RemoveContainer for 
\"fab94da84e3656a3706249825a703d0ae908c75370c0185d3a35cd9aae8be9dd\" returns successfully" Jan 23 18:01:17.033248 kubelet[2867]: I0123 18:01:17.033220 2867 scope.go:117] "RemoveContainer" containerID="ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36" Jan 23 18:01:17.035167 containerd[1632]: time="2026-01-23T18:01:17.035061283Z" level=info msg="RemoveContainer for \"ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36\"" Jan 23 18:01:17.039685 containerd[1632]: time="2026-01-23T18:01:17.039625975Z" level=info msg="RemoveContainer for \"ecd0661d7b9bd69f21b9b2df803f0d1ed71dbb402980674dec0e5291abb29a36\" returns successfully" Jan 23 18:01:17.039929 kubelet[2867]: I0123 18:01:17.039879 2867 scope.go:117] "RemoveContainer" containerID="a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59" Jan 23 18:01:17.041581 containerd[1632]: time="2026-01-23T18:01:17.041550740Z" level=info msg="RemoveContainer for \"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\"" Jan 23 18:01:17.045205 containerd[1632]: time="2026-01-23T18:01:17.045168149Z" level=info msg="RemoveContainer for \"a2aa041aef2062642998da428e0da2a98aaf24af23e5017af822a151baabfe59\" returns successfully" Jan 23 18:01:17.892542 sshd[4492]: Connection closed by 4.153.228.146 port 41320 Jan 23 18:01:17.893153 sshd-session[4489]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:17.896550 systemd[1]: sshd@25-10.0.0.108:22-4.153.228.146:41320.service: Deactivated successfully. Jan 23 18:01:17.898157 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 18:01:17.898353 systemd[1]: session-26.scope: Consumed 1.238s CPU time, 25.8M memory peak. Jan 23 18:01:17.898989 systemd-logind[1612]: Session 26 logged out. Waiting for processes to exit. Jan 23 18:01:17.900210 systemd-logind[1612]: Removed session 26. Jan 23 18:01:17.998354 systemd[1]: Started sshd@26-10.0.0.108:22-4.153.228.146:59892.service - OpenSSH per-connection server daemon (4.153.228.146:59892). Jan 23 18:01:18.545624 kubelet[2867]: I0123 18:01:18.544856 2867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57180e6e-8cec-4f96-8655-ff94dd6f5fc5" path="/var/lib/kubelet/pods/57180e6e-8cec-4f96-8655-ff94dd6f5fc5/volumes" Jan 23 18:01:18.545624 kubelet[2867]: I0123 18:01:18.545395 2867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca3711e-8d49-4a07-9947-8c219e121534" path="/var/lib/kubelet/pods/8ca3711e-8d49-4a07-9947-8c219e121534/volumes" Jan 23 18:01:18.618923 sshd[4644]: Accepted publickey for core from 4.153.228.146 port 59892 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc Jan 23 18:01:18.620615 sshd-session[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:18.625166 systemd-logind[1612]: New session 27 of user core. Jan 23 18:01:18.639094 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 18:01:19.682186 kubelet[2867]: I0123 18:01:19.682133 2867 memory_manager.go:355] "RemoveStaleState removing state" podUID="8ca3711e-8d49-4a07-9947-8c219e121534" containerName="cilium-operator" Jan 23 18:01:19.682186 kubelet[2867]: I0123 18:01:19.682175 2867 memory_manager.go:355] "RemoveStaleState removing state" podUID="57180e6e-8cec-4f96-8655-ff94dd6f5fc5" containerName="cilium-agent" Jan 23 18:01:19.689617 systemd[1]: Created slice kubepods-burstable-poda348ca6e_647b_48f0_b380_a8fb5ed57842.slice - libcontainer container kubepods-burstable-poda348ca6e_647b_48f0_b380_a8fb5ed57842.slice. 
Jan 23 18:01:19.775464 sshd[4649]: Connection closed by 4.153.228.146 port 59892 Jan 23 18:01:19.776136 sshd-session[4644]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:19.780425 systemd[1]: sshd@26-10.0.0.108:22-4.153.228.146:59892.service: Deactivated successfully. Jan 23 18:01:19.782532 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 18:01:19.783665 systemd-logind[1612]: Session 27 logged out. Waiting for processes to exit. Jan 23 18:01:19.785340 systemd-logind[1612]: Removed session 27. Jan 23 18:01:19.848022 kubelet[2867]: I0123 18:01:19.847893 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-cilium-run\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848181 kubelet[2867]: I0123 18:01:19.848044 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-etc-cni-netd\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848181 kubelet[2867]: I0123 18:01:19.848096 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-cilium-cgroup\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848181 kubelet[2867]: I0123 18:01:19.848124 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-lib-modules\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848181 kubelet[2867]: I0123 18:01:19.848140 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a348ca6e-647b-48f0-b380-a8fb5ed57842-clustermesh-secrets\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848181 kubelet[2867]: I0123 18:01:19.848158 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-hostproc\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848292 kubelet[2867]: I0123 18:01:19.848173 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a348ca6e-647b-48f0-b380-a8fb5ed57842-cilium-ipsec-secrets\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848316 kubelet[2867]: I0123 18:01:19.848275 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-xtables-lock\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848397 kubelet[2867]: 
I0123 18:01:19.848354 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-bpf-maps\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848426 kubelet[2867]: I0123 18:01:19.848408 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-cni-path\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848451 kubelet[2867]: I0123 18:01:19.848426 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a348ca6e-647b-48f0-b380-a8fb5ed57842-cilium-config-path\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848451 kubelet[2867]: I0123 18:01:19.848442 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-host-proc-sys-net\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848539 kubelet[2867]: I0123 18:01:19.848457 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a348ca6e-647b-48f0-b380-a8fb5ed57842-host-proc-sys-kernel\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848539 kubelet[2867]: I0123 18:01:19.848472 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a348ca6e-647b-48f0-b380-a8fb5ed57842-hubble-tls\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.848539 kubelet[2867]: I0123 18:01:19.848486 2867 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7slsm\" (UniqueName: \"kubernetes.io/projected/a348ca6e-647b-48f0-b380-a8fb5ed57842-kube-api-access-7slsm\") pod \"cilium-wrpr7\" (UID: \"a348ca6e-647b-48f0-b380-a8fb5ed57842\") " pod="kube-system/cilium-wrpr7" Jan 23 18:01:19.899452 systemd[1]: Started sshd@27-10.0.0.108:22-4.153.228.146:59898.service - OpenSSH per-connection server daemon (4.153.228.146:59898). Jan 23 18:01:19.993564 containerd[1632]: time="2026-01-23T18:01:19.993446459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wrpr7,Uid:a348ca6e-647b-48f0-b380-a8fb5ed57842,Namespace:kube-system,Attempt:0,}" Jan 23 18:01:20.011379 containerd[1632]: time="2026-01-23T18:01:20.011320304Z" level=info msg="connecting to shim cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13" address="unix:///run/containerd/s/64a1355580144cc63c264de2320de1dc42411cb09f6d409c1910be293acf7c94" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:01:20.034223 systemd[1]: Started cri-containerd-cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13.scope - libcontainer container cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13. 
Jan 23 18:01:20.056380 containerd[1632]: time="2026-01-23T18:01:20.056343137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wrpr7,Uid:a348ca6e-647b-48f0-b380-a8fb5ed57842,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\""
Jan 23 18:01:20.059299 containerd[1632]: time="2026-01-23T18:01:20.059262665Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 18:01:20.069498 containerd[1632]: time="2026-01-23T18:01:20.069437290Z" level=info msg="Container 7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:01:20.080509 containerd[1632]: time="2026-01-23T18:01:20.080433198Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c\""
Jan 23 18:01:20.081301 containerd[1632]: time="2026-01-23T18:01:20.081264680Z" level=info msg="StartContainer for \"7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c\""
Jan 23 18:01:20.082519 containerd[1632]: time="2026-01-23T18:01:20.082481483Z" level=info msg="connecting to shim 7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c" address="unix:///run/containerd/s/64a1355580144cc63c264de2320de1dc42411cb09f6d409c1910be293acf7c94" protocol=ttrpc version=3
Jan 23 18:01:20.100064 systemd[1]: Started cri-containerd-7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c.scope - libcontainer container 7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c.
Jan 23 18:01:20.129158 containerd[1632]: time="2026-01-23T18:01:20.129097241Z" level=info msg="StartContainer for \"7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c\" returns successfully"
Jan 23 18:01:20.133699 systemd[1]: cri-containerd-7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c.scope: Deactivated successfully.
Jan 23 18:01:20.136299 containerd[1632]: time="2026-01-23T18:01:20.136253379Z" level=info msg="received container exit event container_id:\"7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c\" id:\"7f219f2deaed992c2486fca5ac0c56bf4a7f73b4e1da46f392c0ba1c0d10ff3c\" pid:4727 exited_at:{seconds:1769191280 nanos:136008418}"
Jan 23 18:01:20.511983 sshd[4661]: Accepted publickey for core from 4.153.228.146 port 59898 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc
Jan 23 18:01:20.513436 sshd-session[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:01:20.517764 systemd-logind[1612]: New session 28 of user core.
Jan 23 18:01:20.529111 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 23 18:01:20.633865 kubelet[2867]: E0123 18:01:20.633801 2867 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 18:01:20.932857 sshd[4758]: Connection closed by 4.153.228.146 port 59898
Jan 23 18:01:20.933236 sshd-session[4661]: pam_unix(sshd:session): session closed for user core
Jan 23 18:01:20.936714 systemd[1]: sshd@27-10.0.0.108:22-4.153.228.146:59898.service: Deactivated successfully.
Jan 23 18:01:20.939008 systemd[1]: session-28.scope: Deactivated successfully.
Jan 23 18:01:20.941053 systemd-logind[1612]: Session 28 logged out. Waiting for processes to exit.
Jan 23 18:01:20.942547 systemd-logind[1612]: Removed session 28.
Jan 23 18:01:21.015340 containerd[1632]: time="2026-01-23T18:01:21.015246714Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 18:01:21.040560 systemd[1]: Started sshd@28-10.0.0.108:22-4.153.228.146:59900.service - OpenSSH per-connection server daemon (4.153.228.146:59900).
Jan 23 18:01:21.426491 containerd[1632]: time="2026-01-23T18:01:21.425826148Z" level=info msg="Container fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:01:21.431870 containerd[1632]: time="2026-01-23T18:01:21.431829203Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e\""
Jan 23 18:01:21.432597 containerd[1632]: time="2026-01-23T18:01:21.432542925Z" level=info msg="StartContainer for \"fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e\""
Jan 23 18:01:21.433637 containerd[1632]: time="2026-01-23T18:01:21.433598328Z" level=info msg="connecting to shim fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e" address="unix:///run/containerd/s/64a1355580144cc63c264de2320de1dc42411cb09f6d409c1910be293acf7c94" protocol=ttrpc version=3
Jan 23 18:01:21.453161 systemd[1]: Started cri-containerd-fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e.scope - libcontainer container fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e.
Jan 23 18:01:21.479636 containerd[1632]: time="2026-01-23T18:01:21.479588164Z" level=info msg="StartContainer for \"fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e\" returns successfully"
Jan 23 18:01:21.485710 systemd[1]: cri-containerd-fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e.scope: Deactivated successfully.
Jan 23 18:01:21.486427 containerd[1632]: time="2026-01-23T18:01:21.486378261Z" level=info msg="received container exit event container_id:\"fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e\" id:\"fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e\" pid:4782 exited_at:{seconds:1769191281 nanos:486154420}"
Jan 23 18:01:21.505099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdd4962bea234fee80cb2ab461ef4a51574d43527022ad038923d39b3a806d4e-rootfs.mount: Deactivated successfully.
Jan 23 18:01:21.638511 sshd[4765]: Accepted publickey for core from 4.153.228.146 port 59900 ssh2: RSA SHA256:DtPMPPiDXNr6dTB3hNLy7Bfdxxt4oEJO4d6yweNHNGc
Jan 23 18:01:21.639887 sshd-session[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:01:21.643624 systemd-logind[1612]: New session 29 of user core.
Jan 23 18:01:21.653075 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 23 18:01:22.025462 containerd[1632]: time="2026-01-23T18:01:22.025158299Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 18:01:22.042940 containerd[1632]: time="2026-01-23T18:01:22.040139496Z" level=info msg="Container 4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:01:22.051843 containerd[1632]: time="2026-01-23T18:01:22.051775806Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0\""
Jan 23 18:01:22.052804 containerd[1632]: time="2026-01-23T18:01:22.052756168Z" level=info msg="StartContainer for \"4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0\""
Jan 23 18:01:22.054625 containerd[1632]: time="2026-01-23T18:01:22.054508813Z" level=info msg="connecting to shim 4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0" address="unix:///run/containerd/s/64a1355580144cc63c264de2320de1dc42411cb09f6d409c1910be293acf7c94" protocol=ttrpc version=3
Jan 23 18:01:22.077110 systemd[1]: Started cri-containerd-4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0.scope - libcontainer container 4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0.
Jan 23 18:01:22.161373 containerd[1632]: time="2026-01-23T18:01:22.161333602Z" level=info msg="StartContainer for \"4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0\" returns successfully"
Jan 23 18:01:22.164002 systemd[1]: cri-containerd-4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0.scope: Deactivated successfully.
Jan 23 18:01:22.166082 containerd[1632]: time="2026-01-23T18:01:22.166037774Z" level=info msg="received container exit event container_id:\"4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0\" id:\"4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0\" pid:4835 exited_at:{seconds:1769191282 nanos:165779173}"
Jan 23 18:01:22.185189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dc4b9d1031ae021a57eba73ae9af8a52d5f7b2439c995f41583658d173988f0-rootfs.mount: Deactivated successfully.
Jan 23 18:01:23.026630 containerd[1632]: time="2026-01-23T18:01:23.026585302Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 18:01:23.038163 containerd[1632]: time="2026-01-23T18:01:23.037875851Z" level=info msg="Container 71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:01:23.041300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224250409.mount: Deactivated successfully.
Jan 23 18:01:23.048314 containerd[1632]: time="2026-01-23T18:01:23.048210997Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9\""
Jan 23 18:01:23.048630 containerd[1632]: time="2026-01-23T18:01:23.048607118Z" level=info msg="StartContainer for \"71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9\""
Jan 23 18:01:23.049796 containerd[1632]: time="2026-01-23T18:01:23.049750041Z" level=info msg="connecting to shim 71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9" address="unix:///run/containerd/s/64a1355580144cc63c264de2320de1dc42411cb09f6d409c1910be293acf7c94" protocol=ttrpc version=3
Jan 23 18:01:23.073155 systemd[1]: Started cri-containerd-71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9.scope - libcontainer container 71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9.
Jan 23 18:01:23.095475 systemd[1]: cri-containerd-71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9.scope: Deactivated successfully.
Jan 23 18:01:23.098068 containerd[1632]: time="2026-01-23T18:01:23.097302841Z" level=info msg="received container exit event container_id:\"71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9\" id:\"71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9\" pid:4874 exited_at:{seconds:1769191283 nanos:96021677}"
Jan 23 18:01:23.099167 containerd[1632]: time="2026-01-23T18:01:23.099134925Z" level=info msg="StartContainer for \"71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9\" returns successfully"
Jan 23 18:01:23.117009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71a27676d92e55a07384f97e2bc5aa3644658356d96a67bd21f2f41b088210b9-rootfs.mount: Deactivated successfully.
Jan 23 18:01:24.032409 containerd[1632]: time="2026-01-23T18:01:24.032368637Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 18:01:24.045873 containerd[1632]: time="2026-01-23T18:01:24.045830151Z" level=info msg="Container 5f105b1ce5e8e19f061df07c779ae32fa3793719ab098f906e997c65951270db: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:01:24.056061 containerd[1632]: time="2026-01-23T18:01:24.056023177Z" level=info msg="CreateContainer within sandbox \"cfc501e1b664f15958efb14c53a72a4a80d0d3253ab6252e26b4df7f420cac13\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f105b1ce5e8e19f061df07c779ae32fa3793719ab098f906e997c65951270db\""
Jan 23 18:01:24.056491 containerd[1632]: time="2026-01-23T18:01:24.056465618Z" level=info msg="StartContainer for \"5f105b1ce5e8e19f061df07c779ae32fa3793719ab098f906e997c65951270db\""
Jan 23 18:01:24.057830 containerd[1632]: time="2026-01-23T18:01:24.057757301Z" level=info msg="connecting to shim 5f105b1ce5e8e19f061df07c779ae32fa3793719ab098f906e997c65951270db" address="unix:///run/containerd/s/64a1355580144cc63c264de2320de1dc42411cb09f6d409c1910be293acf7c94" protocol=ttrpc version=3
Jan 23 18:01:24.082164 systemd[1]: Started cri-containerd-5f105b1ce5e8e19f061df07c779ae32fa3793719ab098f906e997c65951270db.scope - libcontainer container 5f105b1ce5e8e19f061df07c779ae32fa3793719ab098f906e997c65951270db.
Jan 23 18:01:24.123075 containerd[1632]: time="2026-01-23T18:01:24.123038705Z" level=info msg="StartContainer for \"5f105b1ce5e8e19f061df07c779ae32fa3793719ab098f906e997c65951270db\" returns successfully"
Jan 23 18:01:24.379954 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 23 18:01:25.051644 kubelet[2867]: I0123 18:01:25.051570 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wrpr7" podStartSLOduration=6.051551725 podStartE2EDuration="6.051551725s" podCreationTimestamp="2026-01-23 18:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:01:25.051405765 +0000 UTC m=+224.613565430" watchObservedRunningTime="2026-01-23 18:01:25.051551725 +0000 UTC m=+224.613711430"
Jan 23 18:01:25.064607 kubelet[2867]: I0123 18:01:25.064555 2867 setters.go:602] "Node became not ready" node="ci-4459-2-3-a-575e6c418a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:01:25Z","lastTransitionTime":"2026-01-23T18:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 18:01:27.221801 systemd-networkd[1442]: lxc_health: Link UP
Jan 23 18:01:27.222042 systemd-networkd[1442]: lxc_health: Gained carrier
Jan 23 18:01:28.833002 systemd-networkd[1442]: lxc_health: Gained IPv6LL
Jan 23 18:01:34.735544 sshd[4813]: Connection closed by 4.153.228.146 port 59900
Jan 23 18:01:34.736290 sshd-session[4765]: pam_unix(sshd:session): session closed for user core
Jan 23 18:01:34.739930 systemd[1]: sshd@28-10.0.0.108:22-4.153.228.146:59900.service: Deactivated successfully.
Jan 23 18:01:34.741523 systemd[1]: session-29.scope: Deactivated successfully.
Jan 23 18:01:34.742342 systemd-logind[1612]: Session 29 logged out. Waiting for processes to exit.
Jan 23 18:01:34.743543 systemd-logind[1612]: Removed session 29.
Jan 23 18:01:40.531963 containerd[1632]: time="2026-01-23T18:01:40.531666616Z" level=info msg="StopPodSandbox for \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\""
Jan 23 18:01:40.531963 containerd[1632]: time="2026-01-23T18:01:40.531831177Z" level=info msg="TearDown network for sandbox \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" successfully"
Jan 23 18:01:40.531963 containerd[1632]: time="2026-01-23T18:01:40.531846577Z" level=info msg="StopPodSandbox for \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" returns successfully"
Jan 23 18:01:40.532877 containerd[1632]: time="2026-01-23T18:01:40.532846859Z" level=info msg="RemovePodSandbox for \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\""
Jan 23 18:01:40.532934 containerd[1632]: time="2026-01-23T18:01:40.532884179Z" level=info msg="Forcibly stopping sandbox \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\""
Jan 23 18:01:40.532990 containerd[1632]: time="2026-01-23T18:01:40.532972620Z" level=info msg="TearDown network for sandbox \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" successfully"
Jan 23 18:01:40.534011 containerd[1632]: time="2026-01-23T18:01:40.533987182Z" level=info msg="Ensure that sandbox 3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f in task-service has been cleanup successfully"
Jan 23 18:01:40.538231 containerd[1632]: time="2026-01-23T18:01:40.538201193Z" level=info msg="RemovePodSandbox \"3d3cf529c9a54a0481f8861a973c639136b872fe5897f62ae7ec7195d4b8cb9f\" returns successfully"
Jan 23 18:01:40.538693 containerd[1632]: time="2026-01-23T18:01:40.538665554Z" level=info msg="StopPodSandbox for \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\""
Jan 23 18:01:40.538789 containerd[1632]: time="2026-01-23T18:01:40.538762794Z" level=info msg="TearDown network for sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" successfully"
Jan 23 18:01:40.538789 containerd[1632]: time="2026-01-23T18:01:40.538784354Z" level=info msg="StopPodSandbox for \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" returns successfully"
Jan 23 18:01:40.539310 containerd[1632]: time="2026-01-23T18:01:40.539284355Z" level=info msg="RemovePodSandbox for \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\""
Jan 23 18:01:40.539361 containerd[1632]: time="2026-01-23T18:01:40.539316395Z" level=info msg="Forcibly stopping sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\""
Jan 23 18:01:40.539403 containerd[1632]: time="2026-01-23T18:01:40.539386796Z" level=info msg="TearDown network for sandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" successfully"
Jan 23 18:01:40.540531 containerd[1632]: time="2026-01-23T18:01:40.540483878Z" level=info msg="Ensure that sandbox 61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb in task-service has been cleanup successfully"
Jan 23 18:01:40.545603 containerd[1632]: time="2026-01-23T18:01:40.545550291Z" level=info msg="RemovePodSandbox \"61ad79b7a635528bd8118df235b72a9eeaee60450764a05a35d4b5c541b704cb\" returns successfully"
Jan 23 18:02:03.752325 systemd[1]: cri-containerd-a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528.scope: Deactivated successfully.
Jan 23 18:02:03.752691 systemd[1]: cri-containerd-a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528.scope: Consumed 4.349s CPU time, 58.7M memory peak.
Jan 23 18:02:03.755357 containerd[1632]: time="2026-01-23T18:02:03.754151778Z" level=info msg="received container exit event container_id:\"a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528\" id:\"a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528\" pid:2718 exit_status:1 exited_at:{seconds:1769191323 nanos:753270776}"
Jan 23 18:02:03.773275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528-rootfs.mount: Deactivated successfully.
Jan 23 18:02:03.970444 kubelet[2867]: E0123 18:02:03.970329 2867 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.108:54020->10.0.0.74:2379: read: connection timed out"
Jan 23 18:02:03.973336 systemd[1]: cri-containerd-97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6.scope: Deactivated successfully.
Jan 23 18:02:03.973617 systemd[1]: cri-containerd-97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6.scope: Consumed 3.769s CPU time, 24.8M memory peak.
Jan 23 18:02:03.975379 containerd[1632]: time="2026-01-23T18:02:03.975334696Z" level=info msg="received container exit event container_id:\"97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6\" id:\"97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6\" pid:2725 exit_status:1 exited_at:{seconds:1769191323 nanos:975043535}"
Jan 23 18:02:03.994358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6-rootfs.mount: Deactivated successfully.
Jan 23 18:02:04.109036 kubelet[2867]: I0123 18:02:04.108985 2867 scope.go:117] "RemoveContainer" containerID="97a629a00acece9056be33d0a682768bdc0215d7b90dc06ac0587a077049e6b6"
Jan 23 18:02:04.111114 kubelet[2867]: I0123 18:02:04.111060 2867 scope.go:117] "RemoveContainer" containerID="a31e67ed0f77e44880ada2b0936d8a081b4551844534cbd82f05aeda8a597528"
Jan 23 18:02:04.111602 containerd[1632]: time="2026-01-23T18:02:04.111536559Z" level=info msg="CreateContainer within sandbox \"6d00336dfbdaa6459835b9dd356fab7b7e086429d1b7fe3a36cc207ef69ad9ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 18:02:04.112720 containerd[1632]: time="2026-01-23T18:02:04.112690522Z" level=info msg="CreateContainer within sandbox \"2bf052853e6b843a481301343bc7f6de5f205feb73b167d19a83dd78ca200cbd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 18:02:04.121469 containerd[1632]: time="2026-01-23T18:02:04.120674222Z" level=info msg="Container d119eee16b58ddd9583327d9fc33a0c539d695ae0afb3822f592464a9c865af4: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:02:04.123608 containerd[1632]: time="2026-01-23T18:02:04.123568709Z" level=info msg="Container 243f9a308fbd76492a537a60e24cfae086d24269cdf898e164385debdaac9eea: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:02:04.131183 containerd[1632]: time="2026-01-23T18:02:04.131123288Z" level=info msg="CreateContainer within sandbox \"6d00336dfbdaa6459835b9dd356fab7b7e086429d1b7fe3a36cc207ef69ad9ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d119eee16b58ddd9583327d9fc33a0c539d695ae0afb3822f592464a9c865af4\""
Jan 23 18:02:04.131969 containerd[1632]: time="2026-01-23T18:02:04.131784130Z" level=info msg="StartContainer for \"d119eee16b58ddd9583327d9fc33a0c539d695ae0afb3822f592464a9c865af4\""
Jan 23 18:02:04.132872 containerd[1632]: time="2026-01-23T18:02:04.132828053Z" level=info msg="connecting to shim d119eee16b58ddd9583327d9fc33a0c539d695ae0afb3822f592464a9c865af4" address="unix:///run/containerd/s/6fe6947255243be60798dff1f7ee60c23a9726892f84da22197ea33410739ef8" protocol=ttrpc version=3
Jan 23 18:02:04.134159 containerd[1632]: time="2026-01-23T18:02:04.134128936Z" level=info msg="CreateContainer within sandbox \"2bf052853e6b843a481301343bc7f6de5f205feb73b167d19a83dd78ca200cbd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"243f9a308fbd76492a537a60e24cfae086d24269cdf898e164385debdaac9eea\""
Jan 23 18:02:04.134750 containerd[1632]: time="2026-01-23T18:02:04.134679137Z" level=info msg="StartContainer for \"243f9a308fbd76492a537a60e24cfae086d24269cdf898e164385debdaac9eea\""
Jan 23 18:02:04.136080 containerd[1632]: time="2026-01-23T18:02:04.136040101Z" level=info msg="connecting to shim 243f9a308fbd76492a537a60e24cfae086d24269cdf898e164385debdaac9eea" address="unix:///run/containerd/s/9ba3784b833b13a75bb4781ea567ae0690572369a63ddf2973d36a0d3144b43e" protocol=ttrpc version=3
Jan 23 18:02:04.149082 systemd[1]: Started cri-containerd-d119eee16b58ddd9583327d9fc33a0c539d695ae0afb3822f592464a9c865af4.scope - libcontainer container d119eee16b58ddd9583327d9fc33a0c539d695ae0afb3822f592464a9c865af4.
Jan 23 18:02:04.152656 systemd[1]: Started cri-containerd-243f9a308fbd76492a537a60e24cfae086d24269cdf898e164385debdaac9eea.scope - libcontainer container 243f9a308fbd76492a537a60e24cfae086d24269cdf898e164385debdaac9eea.
Jan 23 18:02:04.196169 containerd[1632]: time="2026-01-23T18:02:04.196124092Z" level=info msg="StartContainer for \"243f9a308fbd76492a537a60e24cfae086d24269cdf898e164385debdaac9eea\" returns successfully"
Jan 23 18:02:04.197475 containerd[1632]: time="2026-01-23T18:02:04.197323615Z" level=info msg="StartContainer for \"d119eee16b58ddd9583327d9fc33a0c539d695ae0afb3822f592464a9c865af4\" returns successfully"
Jan 23 18:02:06.402324 kubelet[2867]: E0123 18:02:06.402167 2867 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.108:53870->10.0.0.74:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-3-a-575e6c418a.188d6e20d57f62d4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-3-a-575e6c418a,UID:84ab83549237a695c4f6b13b0f94860d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-a-575e6c418a,},FirstTimestamp:2026-01-23 18:01:55.926377172 +0000 UTC m=+255.488536837,LastTimestamp:2026-01-23 18:01:55.926377172 +0000 UTC m=+255.488536837,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-a-575e6c418a,}"
Jan 23 18:02:13.971230 kubelet[2867]: E0123 18:02:13.970739 2867 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-a-575e6c418a?timeout=10s\": context deadline exceeded"
Jan 23 18:02:14.531544 kubelet[2867]: I0123 18:02:14.531485 2867 status_manager.go:890] "Failed to get status for pod" podUID="71321f1ec46c45c21935cacc5cbfd824" pod="kube-system/kube-scheduler-ci-4459-2-3-a-575e6c418a" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.108:53932->10.0.0.74:2379: read: connection timed out"