Mar 13 00:40:17.808289 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:40:17.808316 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:40:17.808326 kernel: BIOS-provided physical RAM map:
Mar 13 00:40:17.808332 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:40:17.808338 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 13 00:40:17.808344 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 13 00:40:17.808352 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 13 00:40:17.808359 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 13 00:40:17.808364 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 13 00:40:17.808371 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 13 00:40:17.808377 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000007e93efff] usable
Mar 13 00:40:17.808382 kernel: BIOS-e820: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Mar 13 00:40:17.808388 kernel: BIOS-e820: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Mar 13 00:40:17.808394 kernel: BIOS-e820: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Mar 13 00:40:17.808403 kernel: BIOS-e820: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Mar 13 00:40:17.808410 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Mar 13 00:40:17.808416 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Mar 13 00:40:17.808422 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Mar 13 00:40:17.808428 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007feaefff] usable
Mar 13 00:40:17.808434 kernel: BIOS-e820: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Mar 13 00:40:17.808440 kernel: BIOS-e820: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Mar 13 00:40:17.808448 kernel: BIOS-e820: [mem 0x000000007feb5000-0x000000007feebfff] usable
Mar 13 00:40:17.808454 kernel: BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Mar 13 00:40:17.808460 kernel: BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Mar 13 00:40:17.808466 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 13 00:40:17.808472 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:40:17.808478 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 13 00:40:17.808484 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 13 00:40:17.808490 kernel: NX (Execute Disable) protection: active
Mar 13 00:40:17.808496 kernel: APIC: Static calls initialized
Mar 13 00:40:17.808502 kernel: e820: update [mem 0x7df7f018-0x7df88a57] usable ==> usable
Mar 13 00:40:17.808509 kernel: e820: update [mem 0x7df57018-0x7df7e457] usable ==> usable
Mar 13 00:40:17.808515 kernel: extended physical RAM map:
Mar 13 00:40:17.808523 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 13 00:40:17.808529 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 13 00:40:17.808535 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 13 00:40:17.808541 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 13 00:40:17.808547 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 13 00:40:17.808553 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 13 00:40:17.808560 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 13 00:40:17.808569 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000007df57017] usable
Mar 13 00:40:17.808577 kernel: reserve setup_data: [mem 0x000000007df57018-0x000000007df7e457] usable
Mar 13 00:40:17.808584 kernel: reserve setup_data: [mem 0x000000007df7e458-0x000000007df7f017] usable
Mar 13 00:40:17.808590 kernel: reserve setup_data: [mem 0x000000007df7f018-0x000000007df88a57] usable
Mar 13 00:40:17.808597 kernel: reserve setup_data: [mem 0x000000007df88a58-0x000000007e93efff] usable
Mar 13 00:40:17.808603 kernel: reserve setup_data: [mem 0x000000007e93f000-0x000000007e9fffff] reserved
Mar 13 00:40:17.808609 kernel: reserve setup_data: [mem 0x000000007ea00000-0x000000007ec70fff] usable
Mar 13 00:40:17.808616 kernel: reserve setup_data: [mem 0x000000007ec71000-0x000000007ed84fff] reserved
Mar 13 00:40:17.808624 kernel: reserve setup_data: [mem 0x000000007ed85000-0x000000007f8ecfff] usable
Mar 13 00:40:17.808630 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Mar 13 00:40:17.808637 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Mar 13 00:40:17.808643 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Mar 13 00:40:17.808650 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007feaefff] usable
Mar 13 00:40:17.808656 kernel: reserve setup_data: [mem 0x000000007feaf000-0x000000007feb2fff] reserved
Mar 13 00:40:17.808662 kernel: reserve setup_data: [mem 0x000000007feb3000-0x000000007feb4fff] ACPI NVS
Mar 13 00:40:17.808669 kernel: reserve setup_data: [mem 0x000000007feb5000-0x000000007feebfff] usable
Mar 13 00:40:17.808675 kernel: reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
Mar 13 00:40:17.808682 kernel: reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
Mar 13 00:40:17.808688 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 13 00:40:17.808697 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:40:17.808703 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 13 00:40:17.808710 kernel: reserve setup_data: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 13 00:40:17.808716 kernel: efi: EFI v2.7 by EDK II
Mar 13 00:40:17.808723 kernel: efi: SMBIOS=0x7f972000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7dfd8018 RNG=0x7fb72018
Mar 13 00:40:17.808729 kernel: random: crng init done
Mar 13 00:40:17.808736 kernel: efi: Remove mem139: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 13 00:40:17.808742 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 13 00:40:17.808749 kernel: secureboot: Secure boot disabled
Mar 13 00:40:17.808755 kernel: SMBIOS 2.8 present.
Mar 13 00:40:17.808762 kernel: DMI: STACKIT Cloud OpenStack Nova/Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 13 00:40:17.808768 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:40:17.808776 kernel: Hypervisor detected: KVM
Mar 13 00:40:17.808783 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Mar 13 00:40:17.808789 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:40:17.808795 kernel: kvm-clock: using sched offset of 5915567790 cycles
Mar 13 00:40:17.808802 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:40:17.808809 kernel: tsc: Detected 2294.590 MHz processor
Mar 13 00:40:17.808816 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:40:17.808823 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:40:17.808830 kernel: last_pfn = 0x180000 max_arch_pfn = 0x10000000000
Mar 13 00:40:17.808836 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 13 00:40:17.808845 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:40:17.808851 kernel: last_pfn = 0x7feec max_arch_pfn = 0x10000000000
Mar 13 00:40:17.808858 kernel: Using GB pages for direct mapping
Mar 13 00:40:17.808865 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:40:17.808871 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Mar 13 00:40:17.808878 kernel: ACPI: XSDT 0x000000007FB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Mar 13 00:40:17.808885 kernel: ACPI: FACP 0x000000007FB77000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:17.808892 kernel: ACPI: DSDT 0x000000007FB78000 00424E (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:17.808898 kernel: ACPI: FACS 0x000000007FBDD000 000040
Mar 13 00:40:17.808907 kernel: ACPI: APIC 0x000000007FB76000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:17.808913 kernel: ACPI: MCFG 0x000000007FB75000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:17.808920 kernel: ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:40:17.808926 kernel: ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 13 00:40:17.808933 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb77000-0x7fb770f3]
Mar 13 00:40:17.808940 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb78000-0x7fb7c24d]
Mar 13 00:40:17.808947 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Mar 13 00:40:17.808953 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb76000-0x7fb7607f]
Mar 13 00:40:17.808960 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb75000-0x7fb7503b]
Mar 13 00:40:17.808968 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027]
Mar 13 00:40:17.808975 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037]
Mar 13 00:40:17.808981 kernel: No NUMA configuration found
Mar 13 00:40:17.808988 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 13 00:40:17.808995 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Mar 13 00:40:17.809001 kernel: Zone ranges:
Mar 13 00:40:17.809008 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:40:17.809015 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 13 00:40:17.809021 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 13 00:40:17.809029 kernel: Device empty
Mar 13 00:40:17.809036 kernel: Movable zone start for each node
Mar 13 00:40:17.809042 kernel: Early memory node ranges
Mar 13 00:40:17.809049 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 13 00:40:17.809056 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 13 00:40:17.809062 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 13 00:40:17.809069 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 13 00:40:17.809075 kernel: node 0: [mem 0x0000000000900000-0x000000007e93efff]
Mar 13 00:40:17.809082 kernel: node 0: [mem 0x000000007ea00000-0x000000007ec70fff]
Mar 13 00:40:17.809089 kernel: node 0: [mem 0x000000007ed85000-0x000000007f8ecfff]
Mar 13 00:40:17.809103 kernel: node 0: [mem 0x000000007fbff000-0x000000007feaefff]
Mar 13 00:40:17.809110 kernel: node 0: [mem 0x000000007feb5000-0x000000007feebfff]
Mar 13 00:40:17.809117 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 13 00:40:17.809125 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 13 00:40:17.809140 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:40:17.809148 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 13 00:40:17.809155 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 13 00:40:17.809163 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:40:17.809172 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 13 00:40:17.809180 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 13 00:40:17.809187 kernel: On node 0, zone DMA32: 276 pages in unavailable ranges
Mar 13 00:40:17.809194 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 13 00:40:17.809201 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 13 00:40:17.809209 kernel: On node 0, zone Normal: 276 pages in unavailable ranges
Mar 13 00:40:17.809216 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:40:17.809224 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:40:17.809231 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:40:17.809240 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:40:17.809247 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:40:17.809254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:40:17.809262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:40:17.809269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:40:17.809276 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:40:17.809284 kernel: TSC deadline timer available
Mar 13 00:40:17.809291 kernel: CPU topo: Max. logical packages: 2
Mar 13 00:40:17.809299 kernel: CPU topo: Max. logical dies: 2
Mar 13 00:40:17.809308 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:40:17.809315 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:40:17.809322 kernel: CPU topo: Num. cores per package: 1
Mar 13 00:40:17.809329 kernel: CPU topo: Num. threads per package: 1
Mar 13 00:40:17.809337 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 13 00:40:17.809344 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:40:17.809351 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 13 00:40:17.809359 kernel: kvm-guest: setup PV sched yield
Mar 13 00:40:17.809366 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Mar 13 00:40:17.809374 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:40:17.809381 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:40:17.809388 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 13 00:40:17.809395 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 13 00:40:17.809403 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 13 00:40:17.809410 kernel: pcpu-alloc: [0] 0 1
Mar 13 00:40:17.809416 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:40:17.809423 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:40:17.809432 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:40:17.809441 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:40:17.809448 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:40:17.809455 kernel: Fallback order for Node 0: 0
Mar 13 00:40:17.809462 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1046694
Mar 13 00:40:17.809469 kernel: Policy zone: Normal
Mar 13 00:40:17.809476 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:40:17.809483 kernel: software IO TLB: area num 2.
Mar 13 00:40:17.809490 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:40:17.809499 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:40:17.809506 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:40:17.809514 kernel: Dynamic Preempt: voluntary
Mar 13 00:40:17.809521 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:40:17.809529 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:40:17.809536 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:40:17.809544 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:40:17.809551 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:40:17.809558 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:40:17.809566 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:40:17.809575 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:40:17.809582 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:40:17.809590 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:40:17.809597 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:40:17.809604 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 13 00:40:17.809612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:40:17.809619 kernel: Console: colour dummy device 80x25
Mar 13 00:40:17.809626 kernel: printk: legacy console [tty0] enabled
Mar 13 00:40:17.809635 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:40:17.809643 kernel: ACPI: Core revision 20240827
Mar 13 00:40:17.809650 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:40:17.809658 kernel: x2apic enabled
Mar 13 00:40:17.809665 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:40:17.809673 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 13 00:40:17.809680 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 13 00:40:17.809687 kernel: kvm-guest: setup PV IPIs
Mar 13 00:40:17.809695 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21133e85697, max_idle_ns: 440795250946 ns
Mar 13 00:40:17.809702 kernel: Calibrating delay loop (skipped) preset value.. 4589.18 BogoMIPS (lpj=2294590)
Mar 13 00:40:17.809711 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:40:17.809718 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 13 00:40:17.809726 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 13 00:40:17.809733 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:40:17.809740 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Mar 13 00:40:17.809747 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Mar 13 00:40:17.809754 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Mar 13 00:40:17.809761 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 13 00:40:17.809768 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 13 00:40:17.809775 kernel: TAA: Mitigation: Clear CPU buffers
Mar 13 00:40:17.809784 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Mar 13 00:40:17.809791 kernel: active return thunk: its_return_thunk
Mar 13 00:40:17.809798 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 13 00:40:17.809805 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:40:17.809812 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:40:17.809819 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:40:17.809826 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 13 00:40:17.809833 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 13 00:40:17.809840 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 13 00:40:17.809847 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 13 00:40:17.809854 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:40:17.809863 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 13 00:40:17.809870 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 13 00:40:17.809877 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 13 00:40:17.809884 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Mar 13 00:40:17.809891 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Mar 13 00:40:17.809898 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:40:17.809905 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:40:17.809913 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:40:17.809920 kernel: landlock: Up and running.
Mar 13 00:40:17.809927 kernel: SELinux: Initializing.
Mar 13 00:40:17.809934 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:40:17.809942 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:40:17.809950 kernel: smpboot: CPU0: Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Mar 13 00:40:17.809957 kernel: Performance Events: PEBS fmt0-, Icelake events, full-width counters, Intel PMU driver.
Mar 13 00:40:17.809964 kernel: ... version: 2
Mar 13 00:40:17.809971 kernel: ... bit width: 48
Mar 13 00:40:17.809979 kernel: ... generic registers: 8
Mar 13 00:40:17.809986 kernel: ... value mask: 0000ffffffffffff
Mar 13 00:40:17.809993 kernel: ... max period: 00007fffffffffff
Mar 13 00:40:17.810001 kernel: ... fixed-purpose events: 3
Mar 13 00:40:17.810008 kernel: ... event mask: 00000007000000ff
Mar 13 00:40:17.810017 kernel: signal: max sigframe size: 3632
Mar 13 00:40:17.810024 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:40:17.810031 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:40:17.810039 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:40:17.810046 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:40:17.810053 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:40:17.810060 kernel: .... node #0, CPUs: #1
Mar 13 00:40:17.810068 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:40:17.810075 kernel: smpboot: Total of 2 processors activated (9178.36 BogoMIPS)
Mar 13 00:40:17.810084 kernel: Memory: 3945188K/4186776K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 236712K reserved, 0K cma-reserved)
Mar 13 00:40:17.810091 kernel: devtmpfs: initialized
Mar 13 00:40:17.810099 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:40:17.810106 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 13 00:40:17.810113 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 13 00:40:17.810121 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 13 00:40:17.810128 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Mar 13 00:40:17.814022 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feb3000-0x7feb4fff] (8192 bytes)
Mar 13 00:40:17.814038 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes)
Mar 13 00:40:17.814052 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:40:17.814060 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:40:17.814068 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:40:17.814075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:40:17.814083 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:40:17.814090 kernel: audit: type=2000 audit(1773362414.820:1): state=initialized audit_enabled=0 res=1
Mar 13 00:40:17.814097 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:40:17.814104 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:40:17.814112 kernel: cpuidle: using governor menu
Mar 13 00:40:17.814121 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:40:17.814128 kernel: dca service started, version 1.12.1
Mar 13 00:40:17.814145 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Mar 13 00:40:17.814152 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:40:17.814159 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:40:17.814166 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:40:17.814173 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:40:17.814181 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:40:17.814188 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:40:17.814198 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:40:17.814205 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:40:17.814212 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:40:17.814219 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:40:17.814226 kernel: ACPI: Interpreter enabled
Mar 13 00:40:17.814233 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:40:17.814240 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:40:17.814247 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:40:17.814254 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:40:17.814263 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:40:17.814271 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:40:17.814416 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:40:17.814492 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:40:17.814558 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:40:17.814568 kernel: PCI host bridge to bus 0000:00
Mar 13 00:40:17.814640 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:40:17.814706 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:40:17.814779 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:40:17.814840 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Mar 13 00:40:17.814899 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 13 00:40:17.814958 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x38e800003fff window]
Mar 13 00:40:17.815022 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:40:17.815108 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:40:17.815199 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:40:17.815269 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Mar 13 00:40:17.815347 kernel: pci 0000:00:01.0: BAR 2 [mem 0x38e800000000-0x38e800003fff 64bit pref]
Mar 13 00:40:17.815414 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8439e000-0x8439efff]
Mar 13 00:40:17.815483 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Mar 13 00:40:17.815552 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:40:17.815631 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.815698 kernel: pci 0000:00:02.0: BAR 0 [mem 0x8439d000-0x8439dfff]
Mar 13 00:40:17.815765 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 13 00:40:17.815833 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff]
Mar 13 00:40:17.815899 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff]
Mar 13 00:40:17.815965 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref]
Mar 13 00:40:17.816031 kernel: pci 0000:00:02.0: enabling Extended Tags
Mar 13 00:40:17.816105 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.817254 kernel: pci 0000:00:02.1: BAR 0 [mem 0x8439c000-0x8439cfff]
Mar 13 00:40:17.817341 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 13 00:40:17.817411 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff]
Mar 13 00:40:17.817480 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref]
Mar 13 00:40:17.817546 kernel: pci 0000:00:02.1: enabling Extended Tags
Mar 13 00:40:17.817620 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.817693 kernel: pci 0000:00:02.2: BAR 0 [mem 0x8439b000-0x8439bfff]
Mar 13 00:40:17.817759 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 13 00:40:17.817825 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff]
Mar 13 00:40:17.817891 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref]
Mar 13 00:40:17.817957 kernel: pci 0000:00:02.2: enabling Extended Tags
Mar 13 00:40:17.818028 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.818098 kernel: pci 0000:00:02.3: BAR 0 [mem 0x8439a000-0x8439afff]
Mar 13 00:40:17.818175 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 13 00:40:17.818241 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff]
Mar 13 00:40:17.818308 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref]
Mar 13 00:40:17.818373 kernel: pci 0000:00:02.3: enabling Extended Tags
Mar 13 00:40:17.818446 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.818512 kernel: pci 0000:00:02.4: BAR 0 [mem 0x84399000-0x84399fff]
Mar 13 00:40:17.818581 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 13 00:40:17.818646 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff]
Mar 13 00:40:17.818713 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref]
Mar 13 00:40:17.818793 kernel: pci 0000:00:02.4: enabling Extended Tags
Mar 13 00:40:17.818869 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.818937 kernel: pci 0000:00:02.5: BAR 0 [mem 0x84398000-0x84398fff]
Mar 13 00:40:17.819005 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 13 00:40:17.819072 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff]
Mar 13 00:40:17.823182 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref]
Mar 13 00:40:17.823294 kernel: pci 0000:00:02.5: enabling Extended Tags
Mar 13 00:40:17.823374 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.823446 kernel: pci 0000:00:02.6: BAR 0 [mem 0x84397000-0x84397fff]
Mar 13 00:40:17.823514 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 13 00:40:17.823586 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff]
Mar 13 00:40:17.823652 kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref]
Mar 13 00:40:17.823717 kernel: pci 0000:00:02.6: enabling Extended Tags
Mar 13 00:40:17.823791 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.823857 kernel: pci 0000:00:02.7: BAR 0 [mem 0x84396000-0x84396fff]
Mar 13 00:40:17.823922 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 13 00:40:17.823986 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff]
Mar 13 00:40:17.824068 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref]
Mar 13 00:40:17.824145 kernel: pci 0000:00:02.7: enabling Extended Tags
Mar 13 00:40:17.824219 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.824288 kernel: pci 0000:00:03.0: BAR 0 [mem 0x84395000-0x84395fff]
Mar 13 00:40:17.824354 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Mar 13 00:40:17.824421 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff]
Mar 13 00:40:17.824487 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref]
Mar 13 00:40:17.824555 kernel: pci 0000:00:03.0: enabling Extended Tags
Mar 13 00:40:17.824631 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.824698 kernel: pci 0000:00:03.1: BAR 0 [mem 0x84394000-0x84394fff]
Mar 13 00:40:17.824764 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Mar 13 00:40:17.824831 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff]
Mar 13 00:40:17.824909 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref]
Mar 13 00:40:17.824975 kernel: pci 0000:00:03.1: enabling Extended Tags
Mar 13 00:40:17.825049 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.825115 kernel: pci 0000:00:03.2: BAR 0 [mem 0x84393000-0x84393fff]
Mar 13 00:40:17.829241 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Mar 13 00:40:17.829329 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff]
Mar 13 00:40:17.829401 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref]
Mar 13 00:40:17.829474 kernel: pci 0000:00:03.2: enabling Extended Tags
Mar 13 00:40:17.829550 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.829624 kernel: pci 0000:00:03.3: BAR 0 [mem 0x84392000-0x84392fff]
Mar 13 00:40:17.829694 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Mar 13 00:40:17.829761 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff]
Mar 13 00:40:17.829828 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref]
Mar 13 00:40:17.829895 kernel: pci 0000:00:03.3: enabling Extended Tags
Mar 13 00:40:17.829970 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 00:40:17.830039 kernel: pci 0000:00:03.4: BAR 0 [mem 0x84391000-0x84391fff]
Mar 13 00:40:17.830107 kernel:
pci 0000:00:03.4: PCI bridge to [bus 0e] Mar 13 00:40:17.830701 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Mar 13 00:40:17.830791 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Mar 13 00:40:17.830860 kernel: pci 0000:00:03.4: enabling Extended Tags Mar 13 00:40:17.830934 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.831006 kernel: pci 0000:00:03.5: BAR 0 [mem 0x84390000-0x84390fff] Mar 13 00:40:17.831073 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Mar 13 00:40:17.833152 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Mar 13 00:40:17.833248 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] Mar 13 00:40:17.833317 kernel: pci 0000:00:03.5: enabling Extended Tags Mar 13 00:40:17.833392 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.833465 kernel: pci 0000:00:03.6: BAR 0 [mem 0x8438f000-0x8438ffff] Mar 13 00:40:17.833532 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Mar 13 00:40:17.833602 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Mar 13 00:40:17.833668 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Mar 13 00:40:17.833734 kernel: pci 0000:00:03.6: enabling Extended Tags Mar 13 00:40:17.833807 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.833877 kernel: pci 0000:00:03.7: BAR 0 [mem 0x8438e000-0x8438efff] Mar 13 00:40:17.833945 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Mar 13 00:40:17.834013 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Mar 13 00:40:17.834083 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Mar 13 00:40:17.835194 kernel: pci 0000:00:03.7: enabling Extended Tags Mar 13 00:40:17.835285 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 
13 00:40:17.835356 kernel: pci 0000:00:04.0: BAR 0 [mem 0x8438d000-0x8438dfff] Mar 13 00:40:17.835423 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Mar 13 00:40:17.835495 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Mar 13 00:40:17.835562 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Mar 13 00:40:17.835628 kernel: pci 0000:00:04.0: enabling Extended Tags Mar 13 00:40:17.835703 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.835770 kernel: pci 0000:00:04.1: BAR 0 [mem 0x8438c000-0x8438cfff] Mar 13 00:40:17.835836 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Mar 13 00:40:17.835902 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Mar 13 00:40:17.835971 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Mar 13 00:40:17.836037 kernel: pci 0000:00:04.1: enabling Extended Tags Mar 13 00:40:17.836110 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.836217 kernel: pci 0000:00:04.2: BAR 0 [mem 0x8438b000-0x8438bfff] Mar 13 00:40:17.836289 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Mar 13 00:40:17.836356 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Mar 13 00:40:17.836423 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Mar 13 00:40:17.836488 kernel: pci 0000:00:04.2: enabling Extended Tags Mar 13 00:40:17.836560 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.836627 kernel: pci 0000:00:04.3: BAR 0 [mem 0x8438a000-0x8438afff] Mar 13 00:40:17.836693 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Mar 13 00:40:17.836761 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Mar 13 00:40:17.836827 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Mar 13 00:40:17.836893 kernel: pci 0000:00:04.3: enabling Extended 
Tags Mar 13 00:40:17.836965 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.837032 kernel: pci 0000:00:04.4: BAR 0 [mem 0x84389000-0x84389fff] Mar 13 00:40:17.837100 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Mar 13 00:40:17.839117 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Mar 13 00:40:17.839232 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Mar 13 00:40:17.839300 kernel: pci 0000:00:04.4: enabling Extended Tags Mar 13 00:40:17.839375 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.839443 kernel: pci 0000:00:04.5: BAR 0 [mem 0x84388000-0x84388fff] Mar 13 00:40:17.839509 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Mar 13 00:40:17.839575 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Mar 13 00:40:17.839641 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Mar 13 00:40:17.839710 kernel: pci 0000:00:04.5: enabling Extended Tags Mar 13 00:40:17.839782 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.839849 kernel: pci 0000:00:04.6: BAR 0 [mem 0x84387000-0x84387fff] Mar 13 00:40:17.839931 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Mar 13 00:40:17.839997 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Mar 13 00:40:17.840063 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Mar 13 00:40:17.840129 kernel: pci 0000:00:04.6: enabling Extended Tags Mar 13 00:40:17.840223 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.840290 kernel: pci 0000:00:04.7: BAR 0 [mem 0x84386000-0x84386fff] Mar 13 00:40:17.840358 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Mar 13 00:40:17.840424 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Mar 13 00:40:17.840490 kernel: pci 0000:00:04.7: bridge window [mem 
0x38b800000000-0x38bfffffffff 64bit pref] Mar 13 00:40:17.840556 kernel: pci 0000:00:04.7: enabling Extended Tags Mar 13 00:40:17.840627 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.840698 kernel: pci 0000:00:05.0: BAR 0 [mem 0x84385000-0x84385fff] Mar 13 00:40:17.840765 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Mar 13 00:40:17.840831 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Mar 13 00:40:17.840897 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Mar 13 00:40:17.840963 kernel: pci 0000:00:05.0: enabling Extended Tags Mar 13 00:40:17.841037 kernel: pci 0000:00:05.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.841104 kernel: pci 0000:00:05.1: BAR 0 [mem 0x84384000-0x84384fff] Mar 13 00:40:17.841187 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Mar 13 00:40:17.841254 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Mar 13 00:40:17.841323 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Mar 13 00:40:17.841389 kernel: pci 0000:00:05.1: enabling Extended Tags Mar 13 00:40:17.841461 kernel: pci 0000:00:05.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.841527 kernel: pci 0000:00:05.2: BAR 0 [mem 0x84383000-0x84383fff] Mar 13 00:40:17.841593 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Mar 13 00:40:17.841661 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Mar 13 00:40:17.841728 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Mar 13 00:40:17.841795 kernel: pci 0000:00:05.2: enabling Extended Tags Mar 13 00:40:17.841866 kernel: pci 0000:00:05.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.841933 kernel: pci 0000:00:05.3: BAR 0 [mem 0x84382000-0x84382fff] Mar 13 00:40:17.841999 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Mar 13 00:40:17.842065 kernel: pci 0000:00:05.3: 
bridge window [mem 0x80a00000-0x80bfffff] Mar 13 00:40:17.844159 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Mar 13 00:40:17.844279 kernel: pci 0000:00:05.3: enabling Extended Tags Mar 13 00:40:17.844360 kernel: pci 0000:00:05.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Mar 13 00:40:17.844431 kernel: pci 0000:00:05.4: BAR 0 [mem 0x84381000-0x84381fff] Mar 13 00:40:17.844499 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Mar 13 00:40:17.844566 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Mar 13 00:40:17.844633 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Mar 13 00:40:17.844705 kernel: pci 0000:00:05.4: enabling Extended Tags Mar 13 00:40:17.844782 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Mar 13 00:40:17.844851 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 13 00:40:17.844922 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Mar 13 00:40:17.844989 kernel: pci 0000:00:1f.2: BAR 4 [io 0x7040-0x705f] Mar 13 00:40:17.845055 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x84380000-0x84380fff] Mar 13 00:40:17.845146 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Mar 13 00:40:17.845216 kernel: pci 0000:00:1f.3: BAR 4 [io 0x7000-0x703f] Mar 13 00:40:17.845291 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Mar 13 00:40:17.845363 kernel: pci 0000:01:00.0: BAR 0 [mem 0x84200000-0x842000ff 64bit] Mar 13 00:40:17.845432 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Mar 13 00:40:17.845501 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Mar 13 00:40:17.845569 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Mar 13 00:40:17.846172 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Mar 13 00:40:17.846252 
kernel: pci 0000:01:00.0: enabling Extended Tags Mar 13 00:40:17.846322 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Mar 13 00:40:17.846405 kernel: pci_bus 0000:02: extended config space not accessible Mar 13 00:40:17.846417 kernel: acpiphp: Slot [1] registered Mar 13 00:40:17.846425 kernel: acpiphp: Slot [0] registered Mar 13 00:40:17.846432 kernel: acpiphp: Slot [2] registered Mar 13 00:40:17.846440 kernel: acpiphp: Slot [3] registered Mar 13 00:40:17.846450 kernel: acpiphp: Slot [4] registered Mar 13 00:40:17.846458 kernel: acpiphp: Slot [5] registered Mar 13 00:40:17.846465 kernel: acpiphp: Slot [6] registered Mar 13 00:40:17.846473 kernel: acpiphp: Slot [7] registered Mar 13 00:40:17.846480 kernel: acpiphp: Slot [8] registered Mar 13 00:40:17.846488 kernel: acpiphp: Slot [9] registered Mar 13 00:40:17.846495 kernel: acpiphp: Slot [10] registered Mar 13 00:40:17.846503 kernel: acpiphp: Slot [11] registered Mar 13 00:40:17.846510 kernel: acpiphp: Slot [12] registered Mar 13 00:40:17.846518 kernel: acpiphp: Slot [13] registered Mar 13 00:40:17.846527 kernel: acpiphp: Slot [14] registered Mar 13 00:40:17.846534 kernel: acpiphp: Slot [15] registered Mar 13 00:40:17.846542 kernel: acpiphp: Slot [16] registered Mar 13 00:40:17.846549 kernel: acpiphp: Slot [17] registered Mar 13 00:40:17.846556 kernel: acpiphp: Slot [18] registered Mar 13 00:40:17.846564 kernel: acpiphp: Slot [19] registered Mar 13 00:40:17.846571 kernel: acpiphp: Slot [20] registered Mar 13 00:40:17.846579 kernel: acpiphp: Slot [21] registered Mar 13 00:40:17.846586 kernel: acpiphp: Slot [22] registered Mar 13 00:40:17.846597 kernel: acpiphp: Slot [23] registered Mar 13 00:40:17.846605 kernel: acpiphp: Slot [24] registered Mar 13 00:40:17.846612 kernel: acpiphp: Slot [25] registered Mar 13 00:40:17.846619 kernel: acpiphp: Slot [26] registered Mar 13 00:40:17.846627 kernel: acpiphp: Slot [27] registered Mar 13 00:40:17.846634 kernel: acpiphp: Slot [28] registered Mar 13 00:40:17.846641 kernel: 
acpiphp: Slot [29] registered Mar 13 00:40:17.846648 kernel: acpiphp: Slot [30] registered Mar 13 00:40:17.846656 kernel: acpiphp: Slot [31] registered Mar 13 00:40:17.846747 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Mar 13 00:40:17.846826 kernel: pci 0000:02:01.0: BAR 4 [io 0x6000-0x601f] Mar 13 00:40:17.846910 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Mar 13 00:40:17.846933 kernel: acpiphp: Slot [0-2] registered Mar 13 00:40:17.847044 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Mar 13 00:40:17.847115 kernel: pci 0000:03:00.0: BAR 1 [mem 0x83e00000-0x83e00fff] Mar 13 00:40:17.848201 kernel: pci 0000:03:00.0: BAR 4 [mem 0x380800000000-0x380800003fff 64bit pref] Mar 13 00:40:17.848282 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref] Mar 13 00:40:17.848354 kernel: pci 0000:03:00.0: enabling Extended Tags Mar 13 00:40:17.848422 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Mar 13 00:40:17.848432 kernel: acpiphp: Slot [0-3] registered Mar 13 00:40:17.848506 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Mar 13 00:40:17.848573 kernel: pci 0000:04:00.0: BAR 1 [mem 0x83c00000-0x83c00fff] Mar 13 00:40:17.848640 kernel: pci 0000:04:00.0: BAR 4 [mem 0x381000000000-0x381000003fff 64bit pref] Mar 13 00:40:17.848707 kernel: pci 0000:04:00.0: enabling Extended Tags Mar 13 00:40:17.848773 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Mar 13 00:40:17.848785 kernel: acpiphp: Slot [0-4] registered Mar 13 00:40:17.848861 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Mar 13 00:40:17.848929 kernel: pci 0000:05:00.0: BAR 4 [mem 0x381800000000-0x381800003fff 64bit pref] Mar 13 00:40:17.848996 kernel: pci 0000:05:00.0: enabling Extended Tags Mar 13 00:40:17.849061 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Mar 13 00:40:17.850422 kernel: acpiphp: Slot [0-5] registered Mar 13 00:40:17.850539 kernel: pci 0000:06:00.0: 
[1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Mar 13 00:40:17.850624 kernel: pci 0000:06:00.0: BAR 1 [mem 0x83800000-0x83800fff] Mar 13 00:40:17.850697 kernel: pci 0000:06:00.0: BAR 4 [mem 0x382000000000-0x382000003fff 64bit pref] Mar 13 00:40:17.850781 kernel: pci 0000:06:00.0: enabling Extended Tags Mar 13 00:40:17.850853 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Mar 13 00:40:17.850863 kernel: acpiphp: Slot [0-6] registered Mar 13 00:40:17.850931 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Mar 13 00:40:17.850942 kernel: acpiphp: Slot [0-7] registered Mar 13 00:40:17.851012 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Mar 13 00:40:17.851022 kernel: acpiphp: Slot [0-8] registered Mar 13 00:40:17.851094 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Mar 13 00:40:17.851105 kernel: acpiphp: Slot [0-9] registered Mar 13 00:40:17.851191 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Mar 13 00:40:17.851202 kernel: acpiphp: Slot [0-10] registered Mar 13 00:40:17.851271 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Mar 13 00:40:17.851281 kernel: acpiphp: Slot [0-11] registered Mar 13 00:40:17.851350 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Mar 13 00:40:17.851360 kernel: acpiphp: Slot [0-12] registered Mar 13 00:40:17.851430 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Mar 13 00:40:17.851441 kernel: acpiphp: Slot [0-13] registered Mar 13 00:40:17.851510 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Mar 13 00:40:17.851520 kernel: acpiphp: Slot [0-14] registered Mar 13 00:40:17.851588 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Mar 13 00:40:17.851597 kernel: acpiphp: Slot [0-15] registered Mar 13 00:40:17.851662 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Mar 13 00:40:17.851672 kernel: acpiphp: Slot [0-16] registered Mar 13 00:40:17.851739 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Mar 13 00:40:17.851748 kernel: acpiphp: Slot [0-17] registered Mar 13 00:40:17.851813 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Mar 
13 00:40:17.851825 kernel: acpiphp: Slot [0-18] registered Mar 13 00:40:17.851890 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Mar 13 00:40:17.851900 kernel: acpiphp: Slot [0-19] registered Mar 13 00:40:17.851964 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Mar 13 00:40:17.851974 kernel: acpiphp: Slot [0-20] registered Mar 13 00:40:17.852039 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Mar 13 00:40:17.852049 kernel: acpiphp: Slot [0-21] registered Mar 13 00:40:17.852112 kernel: pci 0000:00:04.4: PCI bridge to [bus 16] Mar 13 00:40:17.852122 kernel: acpiphp: Slot [0-22] registered Mar 13 00:40:17.853119 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Mar 13 00:40:17.853155 kernel: acpiphp: Slot [0-23] registered Mar 13 00:40:17.853229 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Mar 13 00:40:17.853240 kernel: acpiphp: Slot [0-24] registered Mar 13 00:40:17.853308 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Mar 13 00:40:17.853318 kernel: acpiphp: Slot [0-25] registered Mar 13 00:40:17.853386 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Mar 13 00:40:17.853396 kernel: acpiphp: Slot [0-26] registered Mar 13 00:40:17.853465 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Mar 13 00:40:17.853475 kernel: acpiphp: Slot [0-27] registered Mar 13 00:40:17.853547 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Mar 13 00:40:17.853557 kernel: acpiphp: Slot [0-28] registered Mar 13 00:40:17.853626 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Mar 13 00:40:17.853636 kernel: acpiphp: Slot [0-29] registered Mar 13 00:40:17.853756 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Mar 13 00:40:17.853780 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 13 00:40:17.853797 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 13 00:40:17.853815 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 13 00:40:17.853832 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 13 00:40:17.853853 kernel: ACPI: PCI: 
Interrupt link LNKE configured for IRQ 10 Mar 13 00:40:17.853870 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 13 00:40:17.853887 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 13 00:40:17.853904 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 13 00:40:17.853922 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 13 00:40:17.853937 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 13 00:40:17.853954 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 13 00:40:17.853971 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 13 00:40:17.853989 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 13 00:40:17.854009 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 13 00:40:17.854027 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 13 00:40:17.854042 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 13 00:40:17.854058 kernel: iommu: Default domain type: Translated Mar 13 00:40:17.854075 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 13 00:40:17.854093 kernel: efivars: Registered efivars operations Mar 13 00:40:17.854107 kernel: PCI: Using ACPI for IRQ routing Mar 13 00:40:17.854125 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 13 00:40:17.854153 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 13 00:40:17.854162 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Mar 13 00:40:17.854169 kernel: e820: reserve RAM buffer [mem 0x7df57018-0x7fffffff] Mar 13 00:40:17.854176 kernel: e820: reserve RAM buffer [mem 0x7df7f018-0x7fffffff] Mar 13 00:40:17.854184 kernel: e820: reserve RAM buffer [mem 0x7e93f000-0x7fffffff] Mar 13 00:40:17.854191 kernel: e820: reserve RAM buffer [mem 0x7ec71000-0x7fffffff] Mar 13 00:40:17.854198 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff] Mar 13 00:40:17.854205 kernel: e820: reserve RAM buffer [mem 0x7feaf000-0x7fffffff] 
Mar 13 00:40:17.854213 kernel: e820: reserve RAM buffer [mem 0x7feec000-0x7fffffff] Mar 13 00:40:17.854283 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 13 00:40:17.854352 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 13 00:40:17.854420 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 13 00:40:17.854430 kernel: vgaarb: loaded Mar 13 00:40:17.854438 kernel: clocksource: Switched to clocksource kvm-clock Mar 13 00:40:17.854445 kernel: VFS: Disk quotas dquot_6.6.0 Mar 13 00:40:17.854453 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 13 00:40:17.854461 kernel: pnp: PnP ACPI init Mar 13 00:40:17.854535 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved Mar 13 00:40:17.855197 kernel: pnp: PnP ACPI: found 5 devices Mar 13 00:40:17.855211 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 13 00:40:17.855219 kernel: NET: Registered PF_INET protocol family Mar 13 00:40:17.855227 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 13 00:40:17.855235 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 13 00:40:17.855243 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 13 00:40:17.855251 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 13 00:40:17.855258 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 13 00:40:17.855266 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 13 00:40:17.855277 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 00:40:17.855284 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 00:40:17.855292 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 13 00:40:17.855300 kernel: NET: Registered PF_XDP protocol family Mar 13 
00:40:17.855391 kernel: pci 0000:03:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window Mar 13 00:40:17.855462 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Mar 13 00:40:17.855531 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Mar 13 00:40:17.855599 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Mar 13 00:40:17.855670 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 13 00:40:17.855738 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 13 00:40:17.855805 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 13 00:40:17.855874 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 13 00:40:17.855942 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Mar 13 00:40:17.856008 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Mar 13 00:40:17.856073 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Mar 13 00:40:17.856149 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Mar 13 00:40:17.856219 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Mar 13 00:40:17.856286 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Mar 13 00:40:17.856353 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Mar 13 00:40:17.856421 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Mar 13 00:40:17.856488 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Mar 13 00:40:17.856555 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Mar 13 00:40:17.856623 kernel: pci 
0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Mar 13 00:40:17.856692 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Mar 13 00:40:17.856760 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Mar 13 00:40:17.856829 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Mar 13 00:40:17.856899 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Mar 13 00:40:17.856969 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Mar 13 00:40:17.857040 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Mar 13 00:40:17.857108 kernel: pci 0000:00:05.1: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Mar 13 00:40:17.857195 kernel: pci 0000:00:05.2: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Mar 13 00:40:17.857270 kernel: pci 0000:00:05.3: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Mar 13 00:40:17.857346 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Mar 13 00:40:17.857411 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Mar 13 00:40:17.857475 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Mar 13 00:40:17.857538 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Mar 13 00:40:17.857601 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Mar 13 00:40:17.857665 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Mar 13 00:40:17.857728 kernel: pci 0000:00:02.6: bridge window [io 0x8000-0x8fff]: assigned Mar 13 00:40:17.857794 kernel: pci 0000:00:02.7: bridge window [io 0x9000-0x9fff]: assigned Mar 13 00:40:17.857858 kernel: pci 0000:00:03.0: bridge window [io 0xa000-0xafff]: assigned Mar 13 00:40:17.857922 kernel: pci 0000:00:03.1: bridge window [io 0xb000-0xbfff]: assigned Mar 13 
00:40:17.857987 kernel: pci 0000:00:03.2: bridge window [io 0xc000-0xcfff]: assigned Mar 13 00:40:17.858053 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]: assigned Mar 13 00:40:17.858119 kernel: pci 0000:00:03.4: bridge window [io 0xe000-0xefff]: assigned Mar 13 00:40:17.859159 kernel: pci 0000:00:03.5: bridge window [io 0xf000-0xffff]: assigned Mar 13 00:40:17.859240 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.859315 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.859389 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.859458 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.859528 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.859599 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.860284 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.860372 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.860444 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.860517 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.860587 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.860656 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.860725 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.860792 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.860861 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.860928 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.860996 
kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.861068 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.861226 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.861314 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.862247 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.862325 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.862395 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.862466 kernel: pci 0000:00:05.1: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.862540 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.862610 kernel: pci 0000:00:05.2: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.862680 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.862758 kernel: pci 0000:00:05.3: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.862829 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.862896 kernel: pci 0000:00:05.4: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.862962 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff]: assigned Mar 13 00:40:17.863028 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff]: assigned Mar 13 00:40:17.863097 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff]: assigned Mar 13 00:40:17.865537 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff]: assigned Mar 13 00:40:17.865618 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff]: assigned Mar 13 00:40:17.865689 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff]: assigned Mar 13 00:40:17.865758 kernel: pci 0000:00:04.6: bridge 
window [io 0x9000-0x9fff]: assigned Mar 13 00:40:17.865827 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff]: assigned Mar 13 00:40:17.865896 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff]: assigned Mar 13 00:40:17.865966 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff]: assigned Mar 13 00:40:17.866039 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff]: assigned Mar 13 00:40:17.866107 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff]: assigned Mar 13 00:40:17.867373 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff]: assigned Mar 13 00:40:17.867452 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.867523 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.867594 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.867662 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.867730 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.867798 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.867870 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.867937 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.868005 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.868072 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.868161 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.868230 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.868299 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.868366 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to 
assign Mar 13 00:40:17.868439 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.868506 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.868575 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.868642 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.868710 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.868776 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.868843 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.868912 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.868980 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.869046 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.869114 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.869193 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.869263 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.869331 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.869399 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Mar 13 00:40:17.869466 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Mar 13 00:40:17.869538 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Mar 13 00:40:17.869607 kernel: pci 0000:01:00.0: bridge window [io 0x6000-0x6fff] Mar 13 00:40:17.869675 kernel: pci 0000:01:00.0: bridge window [mem 0x84000000-0x841fffff] Mar 13 00:40:17.869743 kernel: pci 0000:01:00.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Mar 13 
00:40:17.869811 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Mar 13 00:40:17.869880 kernel: pci 0000:00:02.0: bridge window [io 0x6000-0x6fff] Mar 13 00:40:17.869946 kernel: pci 0000:00:02.0: bridge window [mem 0x84000000-0x842fffff] Mar 13 00:40:17.870012 kernel: pci 0000:00:02.0: bridge window [mem 0x380000000000-0x3807ffffffff 64bit pref] Mar 13 00:40:17.870083 kernel: pci 0000:03:00.0: ROM [mem 0x83e80000-0x83efffff pref]: assigned Mar 13 00:40:17.871628 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Mar 13 00:40:17.871710 kernel: pci 0000:00:02.1: bridge window [mem 0x83e00000-0x83ffffff] Mar 13 00:40:17.871778 kernel: pci 0000:00:02.1: bridge window [mem 0x380800000000-0x380fffffffff 64bit pref] Mar 13 00:40:17.871846 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Mar 13 00:40:17.871914 kernel: pci 0000:00:02.2: bridge window [mem 0x83c00000-0x83dfffff] Mar 13 00:40:17.871989 kernel: pci 0000:00:02.2: bridge window [mem 0x381000000000-0x3817ffffffff 64bit pref] Mar 13 00:40:17.872059 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Mar 13 00:40:17.872126 kernel: pci 0000:00:02.3: bridge window [mem 0x83a00000-0x83bfffff] Mar 13 00:40:17.872740 kernel: pci 0000:00:02.3: bridge window [mem 0x381800000000-0x381fffffffff 64bit pref] Mar 13 00:40:17.872813 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Mar 13 00:40:17.872880 kernel: pci 0000:00:02.4: bridge window [mem 0x83800000-0x839fffff] Mar 13 00:40:17.872949 kernel: pci 0000:00:02.4: bridge window [mem 0x382000000000-0x3827ffffffff 64bit pref] Mar 13 00:40:17.873024 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Mar 13 00:40:17.873093 kernel: pci 0000:00:02.5: bridge window [mem 0x83600000-0x837fffff] Mar 13 00:40:17.873197 kernel: pci 0000:00:02.5: bridge window [mem 0x382800000000-0x382fffffffff 64bit pref] Mar 13 00:40:17.873268 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Mar 13 00:40:17.873337 kernel: pci 0000:00:02.6: bridge window [mem 0x83400000-0x835fffff] Mar 13 00:40:17.873406 
kernel: pci 0000:00:02.6: bridge window [mem 0x383000000000-0x3837ffffffff 64bit pref] Mar 13 00:40:17.873475 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Mar 13 00:40:17.873548 kernel: pci 0000:00:02.7: bridge window [mem 0x83200000-0x833fffff] Mar 13 00:40:17.873616 kernel: pci 0000:00:02.7: bridge window [mem 0x383800000000-0x383fffffffff 64bit pref] Mar 13 00:40:17.873686 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a] Mar 13 00:40:17.873754 kernel: pci 0000:00:03.0: bridge window [mem 0x83000000-0x831fffff] Mar 13 00:40:17.873822 kernel: pci 0000:00:03.0: bridge window [mem 0x384000000000-0x3847ffffffff 64bit pref] Mar 13 00:40:17.873891 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b] Mar 13 00:40:17.873960 kernel: pci 0000:00:03.1: bridge window [mem 0x82e00000-0x82ffffff] Mar 13 00:40:17.874028 kernel: pci 0000:00:03.1: bridge window [mem 0x384800000000-0x384fffffffff 64bit pref] Mar 13 00:40:17.874101 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c] Mar 13 00:40:17.874759 kernel: pci 0000:00:03.2: bridge window [mem 0x82c00000-0x82dfffff] Mar 13 00:40:17.874871 kernel: pci 0000:00:03.2: bridge window [mem 0x385000000000-0x3857ffffffff 64bit pref] Mar 13 00:40:17.874951 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d] Mar 13 00:40:17.875022 kernel: pci 0000:00:03.3: bridge window [mem 0x82a00000-0x82bfffff] Mar 13 00:40:17.875090 kernel: pci 0000:00:03.3: bridge window [mem 0x385800000000-0x385fffffffff 64bit pref] Mar 13 00:40:17.875180 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e] Mar 13 00:40:17.875248 kernel: pci 0000:00:03.4: bridge window [mem 0x82800000-0x829fffff] Mar 13 00:40:17.875314 kernel: pci 0000:00:03.4: bridge window [mem 0x386000000000-0x3867ffffffff 64bit pref] Mar 13 00:40:17.875382 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f] Mar 13 00:40:17.875449 kernel: pci 0000:00:03.5: bridge window [mem 0x82600000-0x827fffff] Mar 13 00:40:17.875515 kernel: pci 0000:00:03.5: bridge window [mem 0x386800000000-0x386fffffffff 64bit pref] 
Mar 13 00:40:17.875584 kernel: pci 0000:00:03.6: PCI bridge to [bus 10] Mar 13 00:40:17.875652 kernel: pci 0000:00:03.6: bridge window [mem 0x82400000-0x825fffff] Mar 13 00:40:17.875723 kernel: pci 0000:00:03.6: bridge window [mem 0x387000000000-0x3877ffffffff 64bit pref] Mar 13 00:40:17.875796 kernel: pci 0000:00:03.7: PCI bridge to [bus 11] Mar 13 00:40:17.875864 kernel: pci 0000:00:03.7: bridge window [mem 0x82200000-0x823fffff] Mar 13 00:40:17.875932 kernel: pci 0000:00:03.7: bridge window [mem 0x387800000000-0x387fffffffff 64bit pref] Mar 13 00:40:17.876551 kernel: pci 0000:00:04.0: PCI bridge to [bus 12] Mar 13 00:40:17.876636 kernel: pci 0000:00:04.0: bridge window [io 0xf000-0xffff] Mar 13 00:40:17.876706 kernel: pci 0000:00:04.0: bridge window [mem 0x82000000-0x821fffff] Mar 13 00:40:17.876780 kernel: pci 0000:00:04.0: bridge window [mem 0x388000000000-0x3887ffffffff 64bit pref] Mar 13 00:40:17.876855 kernel: pci 0000:00:04.1: PCI bridge to [bus 13] Mar 13 00:40:17.876939 kernel: pci 0000:00:04.1: bridge window [io 0xe000-0xefff] Mar 13 00:40:17.877010 kernel: pci 0000:00:04.1: bridge window [mem 0x81e00000-0x81ffffff] Mar 13 00:40:17.877080 kernel: pci 0000:00:04.1: bridge window [mem 0x388800000000-0x388fffffffff 64bit pref] Mar 13 00:40:17.877164 kernel: pci 0000:00:04.2: PCI bridge to [bus 14] Mar 13 00:40:17.877234 kernel: pci 0000:00:04.2: bridge window [io 0xd000-0xdfff] Mar 13 00:40:17.877305 kernel: pci 0000:00:04.2: bridge window [mem 0x81c00000-0x81dfffff] Mar 13 00:40:17.877375 kernel: pci 0000:00:04.2: bridge window [mem 0x389000000000-0x3897ffffffff 64bit pref] Mar 13 00:40:17.877449 kernel: pci 0000:00:04.3: PCI bridge to [bus 15] Mar 13 00:40:17.877520 kernel: pci 0000:00:04.3: bridge window [io 0xc000-0xcfff] Mar 13 00:40:17.877589 kernel: pci 0000:00:04.3: bridge window [mem 0x81a00000-0x81bfffff] Mar 13 00:40:17.877658 kernel: pci 0000:00:04.3: bridge window [mem 0x389800000000-0x389fffffffff 64bit pref] Mar 13 00:40:17.877731 kernel: 
pci 0000:00:04.4: PCI bridge to [bus 16] Mar 13 00:40:17.877800 kernel: pci 0000:00:04.4: bridge window [io 0xb000-0xbfff] Mar 13 00:40:17.877873 kernel: pci 0000:00:04.4: bridge window [mem 0x81800000-0x819fffff] Mar 13 00:40:17.877942 kernel: pci 0000:00:04.4: bridge window [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Mar 13 00:40:17.878014 kernel: pci 0000:00:04.5: PCI bridge to [bus 17] Mar 13 00:40:17.878085 kernel: pci 0000:00:04.5: bridge window [io 0xa000-0xafff] Mar 13 00:40:17.878166 kernel: pci 0000:00:04.5: bridge window [mem 0x81600000-0x817fffff] Mar 13 00:40:17.878236 kernel: pci 0000:00:04.5: bridge window [mem 0x38a800000000-0x38afffffffff 64bit pref] Mar 13 00:40:17.878308 kernel: pci 0000:00:04.6: PCI bridge to [bus 18] Mar 13 00:40:17.878381 kernel: pci 0000:00:04.6: bridge window [io 0x9000-0x9fff] Mar 13 00:40:17.878450 kernel: pci 0000:00:04.6: bridge window [mem 0x81400000-0x815fffff] Mar 13 00:40:17.878519 kernel: pci 0000:00:04.6: bridge window [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Mar 13 00:40:17.878591 kernel: pci 0000:00:04.7: PCI bridge to [bus 19] Mar 13 00:40:17.878673 kernel: pci 0000:00:04.7: bridge window [io 0x8000-0x8fff] Mar 13 00:40:17.878755 kernel: pci 0000:00:04.7: bridge window [mem 0x81200000-0x813fffff] Mar 13 00:40:17.878825 kernel: pci 0000:00:04.7: bridge window [mem 0x38b800000000-0x38bfffffffff 64bit pref] Mar 13 00:40:17.878899 kernel: pci 0000:00:05.0: PCI bridge to [bus 1a] Mar 13 00:40:17.878967 kernel: pci 0000:00:05.0: bridge window [io 0x5000-0x5fff] Mar 13 00:40:17.879036 kernel: pci 0000:00:05.0: bridge window [mem 0x81000000-0x811fffff] Mar 13 00:40:17.879108 kernel: pci 0000:00:05.0: bridge window [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Mar 13 00:40:17.879208 kernel: pci 0000:00:05.1: PCI bridge to [bus 1b] Mar 13 00:40:17.879278 kernel: pci 0000:00:05.1: bridge window [io 0x4000-0x4fff] Mar 13 00:40:17.879348 kernel: pci 0000:00:05.1: bridge window [mem 0x80e00000-0x80ffffff] Mar 13 
00:40:17.879417 kernel: pci 0000:00:05.1: bridge window [mem 0x38c800000000-0x38cfffffffff 64bit pref] Mar 13 00:40:17.879493 kernel: pci 0000:00:05.2: PCI bridge to [bus 1c] Mar 13 00:40:17.879562 kernel: pci 0000:00:05.2: bridge window [io 0x3000-0x3fff] Mar 13 00:40:17.879629 kernel: pci 0000:00:05.2: bridge window [mem 0x80c00000-0x80dfffff] Mar 13 00:40:17.879696 kernel: pci 0000:00:05.2: bridge window [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Mar 13 00:40:17.879768 kernel: pci 0000:00:05.3: PCI bridge to [bus 1d] Mar 13 00:40:17.879838 kernel: pci 0000:00:05.3: bridge window [io 0x2000-0x2fff] Mar 13 00:40:17.879907 kernel: pci 0000:00:05.3: bridge window [mem 0x80a00000-0x80bfffff] Mar 13 00:40:17.879979 kernel: pci 0000:00:05.3: bridge window [mem 0x38d800000000-0x38dfffffffff 64bit pref] Mar 13 00:40:17.880052 kernel: pci 0000:00:05.4: PCI bridge to [bus 1e] Mar 13 00:40:17.880121 kernel: pci 0000:00:05.4: bridge window [io 0x1000-0x1fff] Mar 13 00:40:17.880215 kernel: pci 0000:00:05.4: bridge window [mem 0x80800000-0x809fffff] Mar 13 00:40:17.880285 kernel: pci 0000:00:05.4: bridge window [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Mar 13 00:40:17.880360 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 13 00:40:17.880426 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 13 00:40:17.880488 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 13 00:40:17.880555 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Mar 13 00:40:17.880618 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Mar 13 00:40:17.880681 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x38e800003fff window] Mar 13 00:40:17.880762 kernel: pci_bus 0000:01: resource 0 [io 0x6000-0x6fff] Mar 13 00:40:17.880830 kernel: pci_bus 0000:01: resource 1 [mem 0x84000000-0x842fffff] Mar 13 00:40:17.880897 kernel: pci_bus 0000:01: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Mar 13 
00:40:17.880972 kernel: pci_bus 0000:02: resource 0 [io 0x6000-0x6fff] Mar 13 00:40:17.881044 kernel: pci_bus 0000:02: resource 1 [mem 0x84000000-0x841fffff] Mar 13 00:40:17.881112 kernel: pci_bus 0000:02: resource 2 [mem 0x380000000000-0x3807ffffffff 64bit pref] Mar 13 00:40:17.883262 kernel: pci_bus 0000:03: resource 1 [mem 0x83e00000-0x83ffffff] Mar 13 00:40:17.883350 kernel: pci_bus 0000:03: resource 2 [mem 0x380800000000-0x380fffffffff 64bit pref] Mar 13 00:40:17.883428 kernel: pci_bus 0000:04: resource 1 [mem 0x83c00000-0x83dfffff] Mar 13 00:40:17.883495 kernel: pci_bus 0000:04: resource 2 [mem 0x381000000000-0x3817ffffffff 64bit pref] Mar 13 00:40:17.883576 kernel: pci_bus 0000:05: resource 1 [mem 0x83a00000-0x83bfffff] Mar 13 00:40:17.883642 kernel: pci_bus 0000:05: resource 2 [mem 0x381800000000-0x381fffffffff 64bit pref] Mar 13 00:40:17.883716 kernel: pci_bus 0000:06: resource 1 [mem 0x83800000-0x839fffff] Mar 13 00:40:17.883783 kernel: pci_bus 0000:06: resource 2 [mem 0x382000000000-0x3827ffffffff 64bit pref] Mar 13 00:40:17.883858 kernel: pci_bus 0000:07: resource 1 [mem 0x83600000-0x837fffff] Mar 13 00:40:17.883924 kernel: pci_bus 0000:07: resource 2 [mem 0x382800000000-0x382fffffffff 64bit pref] Mar 13 00:40:17.884002 kernel: pci_bus 0000:08: resource 1 [mem 0x83400000-0x835fffff] Mar 13 00:40:17.884070 kernel: pci_bus 0000:08: resource 2 [mem 0x383000000000-0x3837ffffffff 64bit pref] Mar 13 00:40:17.884167 kernel: pci_bus 0000:09: resource 1 [mem 0x83200000-0x833fffff] Mar 13 00:40:17.884235 kernel: pci_bus 0000:09: resource 2 [mem 0x383800000000-0x383fffffffff 64bit pref] Mar 13 00:40:17.884309 kernel: pci_bus 0000:0a: resource 1 [mem 0x83000000-0x831fffff] Mar 13 00:40:17.884380 kernel: pci_bus 0000:0a: resource 2 [mem 0x384000000000-0x3847ffffffff 64bit pref] Mar 13 00:40:17.884455 kernel: pci_bus 0000:0b: resource 1 [mem 0x82e00000-0x82ffffff] Mar 13 00:40:17.884525 kernel: pci_bus 0000:0b: resource 2 [mem 0x384800000000-0x384fffffffff 64bit 
pref] Mar 13 00:40:17.884602 kernel: pci_bus 0000:0c: resource 1 [mem 0x82c00000-0x82dfffff] Mar 13 00:40:17.884668 kernel: pci_bus 0000:0c: resource 2 [mem 0x385000000000-0x3857ffffffff 64bit pref] Mar 13 00:40:17.884742 kernel: pci_bus 0000:0d: resource 1 [mem 0x82a00000-0x82bfffff] Mar 13 00:40:17.884808 kernel: pci_bus 0000:0d: resource 2 [mem 0x385800000000-0x385fffffffff 64bit pref] Mar 13 00:40:17.884884 kernel: pci_bus 0000:0e: resource 1 [mem 0x82800000-0x829fffff] Mar 13 00:40:17.884951 kernel: pci_bus 0000:0e: resource 2 [mem 0x386000000000-0x3867ffffffff 64bit pref] Mar 13 00:40:17.885025 kernel: pci_bus 0000:0f: resource 1 [mem 0x82600000-0x827fffff] Mar 13 00:40:17.885091 kernel: pci_bus 0000:0f: resource 2 [mem 0x386800000000-0x386fffffffff 64bit pref] Mar 13 00:40:17.885252 kernel: pci_bus 0000:10: resource 1 [mem 0x82400000-0x825fffff] Mar 13 00:40:17.885321 kernel: pci_bus 0000:10: resource 2 [mem 0x387000000000-0x3877ffffffff 64bit pref] Mar 13 00:40:17.885398 kernel: pci_bus 0000:11: resource 1 [mem 0x82200000-0x823fffff] Mar 13 00:40:17.885463 kernel: pci_bus 0000:11: resource 2 [mem 0x387800000000-0x387fffffffff 64bit pref] Mar 13 00:40:17.885535 kernel: pci_bus 0000:12: resource 0 [io 0xf000-0xffff] Mar 13 00:40:17.885601 kernel: pci_bus 0000:12: resource 1 [mem 0x82000000-0x821fffff] Mar 13 00:40:17.885666 kernel: pci_bus 0000:12: resource 2 [mem 0x388000000000-0x3887ffffffff 64bit pref] Mar 13 00:40:17.885738 kernel: pci_bus 0000:13: resource 0 [io 0xe000-0xefff] Mar 13 00:40:17.885806 kernel: pci_bus 0000:13: resource 1 [mem 0x81e00000-0x81ffffff] Mar 13 00:40:17.885869 kernel: pci_bus 0000:13: resource 2 [mem 0x388800000000-0x388fffffffff 64bit pref] Mar 13 00:40:17.885940 kernel: pci_bus 0000:14: resource 0 [io 0xd000-0xdfff] Mar 13 00:40:17.886004 kernel: pci_bus 0000:14: resource 1 [mem 0x81c00000-0x81dfffff] Mar 13 00:40:17.886067 kernel: pci_bus 0000:14: resource 2 [mem 0x389000000000-0x3897ffffffff 64bit pref] Mar 13 00:40:17.886159 
kernel: pci_bus 0000:15: resource 0 [io 0xc000-0xcfff] Mar 13 00:40:17.886227 kernel: pci_bus 0000:15: resource 1 [mem 0x81a00000-0x81bfffff] Mar 13 00:40:17.886296 kernel: pci_bus 0000:15: resource 2 [mem 0x389800000000-0x389fffffffff 64bit pref] Mar 13 00:40:17.886370 kernel: pci_bus 0000:16: resource 0 [io 0xb000-0xbfff] Mar 13 00:40:17.886438 kernel: pci_bus 0000:16: resource 1 [mem 0x81800000-0x819fffff] Mar 13 00:40:17.886518 kernel: pci_bus 0000:16: resource 2 [mem 0x38a000000000-0x38a7ffffffff 64bit pref] Mar 13 00:40:17.886591 kernel: pci_bus 0000:17: resource 0 [io 0xa000-0xafff] Mar 13 00:40:17.886656 kernel: pci_bus 0000:17: resource 1 [mem 0x81600000-0x817fffff] Mar 13 00:40:17.886729 kernel: pci_bus 0000:17: resource 2 [mem 0x38a800000000-0x38afffffffff 64bit pref] Mar 13 00:40:17.886805 kernel: pci_bus 0000:18: resource 0 [io 0x9000-0x9fff] Mar 13 00:40:17.886870 kernel: pci_bus 0000:18: resource 1 [mem 0x81400000-0x815fffff] Mar 13 00:40:17.886935 kernel: pci_bus 0000:18: resource 2 [mem 0x38b000000000-0x38b7ffffffff 64bit pref] Mar 13 00:40:17.887010 kernel: pci_bus 0000:19: resource 0 [io 0x8000-0x8fff] Mar 13 00:40:17.887075 kernel: pci_bus 0000:19: resource 1 [mem 0x81200000-0x813fffff] Mar 13 00:40:17.887148 kernel: pci_bus 0000:19: resource 2 [mem 0x38b800000000-0x38bfffffffff 64bit pref] Mar 13 00:40:17.887219 kernel: pci_bus 0000:1a: resource 0 [io 0x5000-0x5fff] Mar 13 00:40:17.887287 kernel: pci_bus 0000:1a: resource 1 [mem 0x81000000-0x811fffff] Mar 13 00:40:17.887353 kernel: pci_bus 0000:1a: resource 2 [mem 0x38c000000000-0x38c7ffffffff 64bit pref] Mar 13 00:40:17.887424 kernel: pci_bus 0000:1b: resource 0 [io 0x4000-0x4fff] Mar 13 00:40:17.887491 kernel: pci_bus 0000:1b: resource 1 [mem 0x80e00000-0x80ffffff] Mar 13 00:40:17.887555 kernel: pci_bus 0000:1b: resource 2 [mem 0x38c800000000-0x38cfffffffff 64bit pref] Mar 13 00:40:17.887627 kernel: pci_bus 0000:1c: resource 0 [io 0x3000-0x3fff] Mar 13 00:40:17.887698 kernel: pci_bus 0000:1c: 
resource 1 [mem 0x80c00000-0x80dfffff] Mar 13 00:40:17.887763 kernel: pci_bus 0000:1c: resource 2 [mem 0x38d000000000-0x38d7ffffffff 64bit pref] Mar 13 00:40:17.887839 kernel: pci_bus 0000:1d: resource 0 [io 0x2000-0x2fff] Mar 13 00:40:17.887904 kernel: pci_bus 0000:1d: resource 1 [mem 0x80a00000-0x80bfffff] Mar 13 00:40:17.887969 kernel: pci_bus 0000:1d: resource 2 [mem 0x38d800000000-0x38dfffffffff 64bit pref] Mar 13 00:40:17.888041 kernel: pci_bus 0000:1e: resource 0 [io 0x1000-0x1fff] Mar 13 00:40:17.888107 kernel: pci_bus 0000:1e: resource 1 [mem 0x80800000-0x809fffff] Mar 13 00:40:17.890250 kernel: pci_bus 0000:1e: resource 2 [mem 0x38e000000000-0x38e7ffffffff 64bit pref] Mar 13 00:40:17.890276 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 13 00:40:17.890285 kernel: PCI: CLS 0 bytes, default 64 Mar 13 00:40:17.890294 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 13 00:40:17.890302 kernel: software IO TLB: mapped [mem 0x0000000077ede000-0x000000007bede000] (64MB) Mar 13 00:40:17.890311 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 13 00:40:17.890319 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21133e85697, max_idle_ns: 440795250946 ns Mar 13 00:40:17.890327 kernel: Initialise system trusted keyrings Mar 13 00:40:17.890336 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 13 00:40:17.890348 kernel: Key type asymmetric registered Mar 13 00:40:17.890356 kernel: Asymmetric key parser 'x509' registered Mar 13 00:40:17.890364 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 13 00:40:17.890372 kernel: io scheduler mq-deadline registered Mar 13 00:40:17.890380 kernel: io scheduler kyber registered Mar 13 00:40:17.890388 kernel: io scheduler bfq registered Mar 13 00:40:17.890477 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 13 00:40:17.890555 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 13 
00:40:17.890636 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 13 00:40:17.890710 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 13 00:40:17.890795 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 13 00:40:17.890867 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 13 00:40:17.890940 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 13 00:40:17.891012 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 13 00:40:17.891085 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 13 00:40:17.891173 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 13 00:40:17.891252 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 13 00:40:17.891325 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 13 00:40:17.891399 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 13 00:40:17.891471 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 13 00:40:17.891547 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 13 00:40:17.891619 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 13 00:40:17.891630 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 13 00:40:17.891703 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Mar 13 00:40:17.891779 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Mar 13 00:40:17.891853 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33 Mar 13 00:40:17.891923 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33 Mar 13 00:40:17.891997 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34 Mar 13 00:40:17.892068 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34 Mar 13 00:40:17.892155 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35 Mar 13 00:40:17.892230 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35 Mar 13 00:40:17.892303 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36 Mar 13 00:40:17.892378 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36 Mar 13 
00:40:17.892451 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37 Mar 13 00:40:17.892522 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37 Mar 13 00:40:17.892595 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38 Mar 13 00:40:17.892666 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38 Mar 13 00:40:17.892740 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39 Mar 13 00:40:17.892811 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39 Mar 13 00:40:17.892821 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 13 00:40:17.892890 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40 Mar 13 00:40:17.892965 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40 Mar 13 00:40:17.893038 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 41 Mar 13 00:40:17.893108 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 41 Mar 13 00:40:17.893191 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 42 Mar 13 00:40:17.893263 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 42 Mar 13 00:40:17.893336 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 43 Mar 13 00:40:17.893407 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 43 Mar 13 00:40:17.893479 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 44 Mar 13 00:40:17.893554 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 44 Mar 13 00:40:17.893626 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 45 Mar 13 00:40:17.893698 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 45 Mar 13 00:40:17.893772 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 46 Mar 13 00:40:17.893842 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 46 Mar 13 00:40:17.893914 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 47 Mar 13 00:40:17.893985 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 47 Mar 13 00:40:17.893995 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Mar 13 00:40:17.894068 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 48 Mar 13 00:40:17.894150 kernel: 
pcieport 0000:00:05.0: AER: enabled with IRQ 48 Mar 13 00:40:17.894226 kernel: pcieport 0000:00:05.1: PME: Signaling with IRQ 49 Mar 13 00:40:17.894298 kernel: pcieport 0000:00:05.1: AER: enabled with IRQ 49 Mar 13 00:40:17.894370 kernel: pcieport 0000:00:05.2: PME: Signaling with IRQ 50 Mar 13 00:40:17.894441 kernel: pcieport 0000:00:05.2: AER: enabled with IRQ 50 Mar 13 00:40:17.894514 kernel: pcieport 0000:00:05.3: PME: Signaling with IRQ 51 Mar 13 00:40:17.894585 kernel: pcieport 0000:00:05.3: AER: enabled with IRQ 51 Mar 13 00:40:17.894660 kernel: pcieport 0000:00:05.4: PME: Signaling with IRQ 52 Mar 13 00:40:17.894740 kernel: pcieport 0000:00:05.4: AER: enabled with IRQ 52 Mar 13 00:40:17.894750 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 13 00:40:17.894758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 13 00:40:17.894766 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 13 00:40:17.894775 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 13 00:40:17.894783 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 13 00:40:17.894791 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 13 00:40:17.894799 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 13 00:40:17.894881 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 13 00:40:17.894947 kernel: rtc_cmos 00:03: registered as rtc0 Mar 13 00:40:17.895009 kernel: rtc_cmos 00:03: setting system clock to 2026-03-13T00:40:17 UTC (1773362417) Mar 13 00:40:17.895070 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 13 00:40:17.895080 kernel: intel_pstate: CPU model not supported Mar 13 00:40:17.895088 kernel: efifb: probing for efifb Mar 13 00:40:17.895096 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k Mar 13 00:40:17.895103 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Mar 13 00:40:17.895113 kernel: efifb: 
scrolling: redraw Mar 13 00:40:17.895121 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 13 00:40:17.895129 kernel: Console: switching to colour frame buffer device 160x50 Mar 13 00:40:17.895147 kernel: fb0: EFI VGA frame buffer device Mar 13 00:40:17.895155 kernel: pstore: Using crash dump compression: deflate Mar 13 00:40:17.895162 kernel: pstore: Registered efi_pstore as persistent store backend Mar 13 00:40:17.895170 kernel: NET: Registered PF_INET6 protocol family Mar 13 00:40:17.895193 kernel: Segment Routing with IPv6 Mar 13 00:40:17.895204 kernel: In-situ OAM (IOAM) with IPv6 Mar 13 00:40:17.895212 kernel: NET: Registered PF_PACKET protocol family Mar 13 00:40:17.895221 kernel: Key type dns_resolver registered Mar 13 00:40:17.895229 kernel: IPI shorthand broadcast: enabled Mar 13 00:40:17.895237 kernel: sched_clock: Marking stable (3894163280, 151723933)->(4146775768, -100888555) Mar 13 00:40:17.895245 kernel: registered taskstats version 1 Mar 13 00:40:17.895253 kernel: Loading compiled-in X.509 certificates Mar 13 00:40:17.895260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8' Mar 13 00:40:17.895268 kernel: Demotion targets for Node 0: null Mar 13 00:40:17.895276 kernel: Key type .fscrypt registered Mar 13 00:40:17.895284 kernel: Key type fscrypt-provisioning registered Mar 13 00:40:17.895293 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 13 00:40:17.895301 kernel: ima: Allocated hash algorithm: sha1 Mar 13 00:40:17.895309 kernel: ima: No architecture policies found Mar 13 00:40:17.895316 kernel: clk: Disabling unused clocks Mar 13 00:40:17.895324 kernel: Warning: unable to open an initial console. 
Mar 13 00:40:17.895333 kernel: Freeing unused kernel image (initmem) memory: 46200K Mar 13 00:40:17.895340 kernel: Write protecting the kernel read-only data: 40960k Mar 13 00:40:17.895348 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 13 00:40:17.895356 kernel: Run /init as init process Mar 13 00:40:17.895368 kernel: with arguments: Mar 13 00:40:17.895376 kernel: /init Mar 13 00:40:17.895384 kernel: with environment: Mar 13 00:40:17.895391 kernel: HOME=/ Mar 13 00:40:17.895399 kernel: TERM=linux Mar 13 00:40:17.895408 systemd[1]: Successfully made /usr/ read-only. Mar 13 00:40:17.895420 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:40:17.895432 systemd[1]: Detected virtualization kvm. Mar 13 00:40:17.895439 systemd[1]: Detected architecture x86-64. Mar 13 00:40:17.895447 systemd[1]: Running in initrd. Mar 13 00:40:17.895455 systemd[1]: No hostname configured, using default hostname. Mar 13 00:40:17.895463 systemd[1]: Hostname set to . Mar 13 00:40:17.895471 systemd[1]: Initializing machine ID from VM UUID. Mar 13 00:40:17.895479 systemd[1]: Queued start job for default target initrd.target. Mar 13 00:40:17.895491 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:40:17.895499 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:40:17.895510 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 13 00:40:17.895518 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 13 00:40:17.895528 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 13 00:40:17.895537 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 13 00:40:17.895547 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 13 00:40:17.895557 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 13 00:40:17.895566 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:40:17.895574 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:40:17.895582 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:40:17.895591 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:40:17.895599 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:40:17.895608 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:40:17.895616 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:40:17.895625 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:40:17.895636 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 13 00:40:17.895644 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 13 00:40:17.895653 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:40:17.895661 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:40:17.895670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:40:17.895678 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:40:17.895687 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 13 00:40:17.895695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:40:17.895706 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 13 00:40:17.895715 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 13 00:40:17.895723 systemd[1]: Starting systemd-fsck-usr.service... Mar 13 00:40:17.895731 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 00:40:17.895739 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 13 00:40:17.895747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:40:17.895755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 13 00:40:17.895766 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:40:17.895774 systemd[1]: Finished systemd-fsck-usr.service. Mar 13 00:40:17.895809 systemd-journald[224]: Collecting audit messages is disabled. Mar 13 00:40:17.895833 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 13 00:40:17.895842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:40:17.895850 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 13 00:40:17.895859 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 13 00:40:17.895867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 00:40:17.895877 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 13 00:40:17.895885 kernel: Bridge firewalling registered
Mar 13 00:40:17.895896 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:40:17.895904 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:40:17.895913 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:40:17.895921 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:40:17.895929 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:40:17.895937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:40:17.895948 systemd-journald[224]: Journal started
Mar 13 00:40:17.895971 systemd-journald[224]: Runtime Journal (/run/log/journal/74ca41fc4d484f12bad56b963cedcd32) is 8M, max 78M, 70M free.
Mar 13 00:40:17.821184 systemd-modules-load[226]: Inserted module 'overlay'
Mar 13 00:40:17.861884 systemd-modules-load[226]: Inserted module 'br_netfilter'
Mar 13 00:40:17.898747 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:40:17.905403 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:40:17.914022 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:40:17.919431 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:40:17.923907 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:40:17.925976 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:40:17.966465 systemd-resolved[283]: Positive Trust Anchors:
Mar 13 00:40:17.967575 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:40:17.968335 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:40:17.971950 systemd-resolved[283]: Defaulting to hostname 'linux'.
Mar 13 00:40:17.975271 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:40:17.975984 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:40:18.012197 kernel: SCSI subsystem initialized
Mar 13 00:40:18.022177 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:40:18.033168 kernel: iscsi: registered transport (tcp)
Mar 13 00:40:18.055188 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:40:18.055275 kernel: QLogic iSCSI HBA Driver
Mar 13 00:40:18.072788 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:40:18.089399 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:40:18.091991 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:40:18.131951 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:40:18.134929 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:40:18.188170 kernel: raid6: avx512x4 gen() 37899 MB/s
Mar 13 00:40:18.205171 kernel: raid6: avx512x2 gen() 37347 MB/s
Mar 13 00:40:18.223172 kernel: raid6: avx512x1 gen() 37177 MB/s
Mar 13 00:40:18.241169 kernel: raid6: avx2x4 gen() 29617 MB/s
Mar 13 00:40:18.259168 kernel: raid6: avx2x2 gen() 29553 MB/s
Mar 13 00:40:18.275629 kernel: raid6: avx2x1 gen() 20187 MB/s
Mar 13 00:40:18.275725 kernel: raid6: using algorithm avx512x4 gen() 37899 MB/s
Mar 13 00:40:18.295173 kernel: raid6: .... xor() 8828 MB/s, rmw enabled
Mar 13 00:40:18.295267 kernel: raid6: using avx512x2 recovery algorithm
Mar 13 00:40:18.314162 kernel: xor: automatically using best checksumming function avx
Mar 13 00:40:18.441182 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:40:18.447104 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:40:18.449250 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:40:18.479876 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Mar 13 00:40:18.484284 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:40:18.486891 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:40:18.507465 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Mar 13 00:40:18.529108 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:40:18.531319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:40:18.604053 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:40:18.606923 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:40:18.681168 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Mar 13 00:40:18.690184 kernel: virtio_blk virtio2: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB)
Mar 13 00:40:18.710170 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:40:18.713219 kernel: ACPI: bus type USB registered
Mar 13 00:40:18.713315 kernel: usbcore: registered new interface driver usbfs
Mar 13 00:40:18.718491 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:40:18.718532 kernel: GPT:17805311 != 104857599
Mar 13 00:40:18.718545 kernel: usbcore: registered new interface driver hub
Mar 13 00:40:18.718555 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:40:18.720267 kernel: usbcore: registered new device driver usb
Mar 13 00:40:18.720292 kernel: GPT:17805311 != 104857599
Mar 13 00:40:18.723500 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:40:18.723522 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:40:18.727151 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:40:18.746408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:40:18.746510 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:18.747953 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:18.752161 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Mar 13 00:40:18.754256 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Mar 13 00:40:18.756343 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Mar 13 00:40:18.756486 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x00006000
Mar 13 00:40:18.758827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:18.761820 kernel: hub 1-0:1.0: USB hub found
Mar 13 00:40:18.764352 kernel: hub 1-0:1.0: 2 ports detected
Mar 13 00:40:18.782781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:40:18.782866 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:18.788189 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 13 00:40:18.788312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:18.789719 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:40:18.801164 kernel: libata version 3.00 loaded.
Mar 13 00:40:18.813164 kernel: ahci 0000:00:1f.2: version 3.0
Mar 13 00:40:18.817165 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 13 00:40:18.821187 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 13 00:40:18.821365 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 13 00:40:18.821458 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 13 00:40:18.825175 kernel: scsi host0: ahci
Mar 13 00:40:18.825370 kernel: scsi host1: ahci
Mar 13 00:40:18.826360 kernel: scsi host2: ahci
Mar 13 00:40:18.826619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:18.827857 kernel: scsi host3: ahci
Mar 13 00:40:18.829407 kernel: scsi host4: ahci
Mar 13 00:40:18.831152 kernel: scsi host5: ahci
Mar 13 00:40:18.835963 kernel: ata1: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380100 irq 61 lpm-pol 1
Mar 13 00:40:18.836003 kernel: ata2: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380180 irq 61 lpm-pol 1
Mar 13 00:40:18.836014 kernel: ata3: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380200 irq 61 lpm-pol 1
Mar 13 00:40:18.838507 kernel: ata4: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380280 irq 61 lpm-pol 1
Mar 13 00:40:18.838538 kernel: ata5: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380300 irq 61 lpm-pol 1
Mar 13 00:40:18.840243 kernel: ata6: SATA max UDMA/133 abar m4096@0x84380000 port 0x84380380 irq 61 lpm-pol 1
Mar 13 00:40:18.851775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 13 00:40:18.868287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 13 00:40:18.874504 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 13 00:40:18.874947 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 13 00:40:18.882259 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:40:18.883484 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:40:18.902522 disk-uuid[671]: Primary Header is updated.
Mar 13 00:40:18.902522 disk-uuid[671]: Secondary Entries is updated.
Mar 13 00:40:18.902522 disk-uuid[671]: Secondary Header is updated.
Mar 13 00:40:18.910152 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:40:18.989177 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Mar 13 00:40:19.159770 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 13 00:40:19.159872 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 13 00:40:19.159896 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 13 00:40:19.159906 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 13 00:40:19.159916 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 13 00:40:19.159926 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 13 00:40:19.176156 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 13 00:40:19.183624 kernel: usbcore: registered new interface driver usbhid
Mar 13 00:40:19.183689 kernel: usbhid: USB HID core driver
Mar 13 00:40:19.190099 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Mar 13 00:40:19.190170 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Mar 13 00:40:19.200038 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:40:19.201024 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:40:19.201508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:40:19.202245 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:40:19.203781 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:40:19.224303 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:40:19.919237 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:40:19.919309 disk-uuid[672]: The operation has completed successfully.
Mar 13 00:40:19.973483 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:40:19.973592 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:40:19.995040 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:40:20.013961 sh[697]: Success
Mar 13 00:40:20.030285 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:40:20.030340 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:40:20.031273 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:40:20.040162 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:40:20.092579 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:40:20.097213 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:40:20.104598 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:40:20.117240 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (709)
Mar 13 00:40:20.119529 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:40:20.119590 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:40:20.136584 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:40:20.136668 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:40:20.140171 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:40:20.141881 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:40:20.142894 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:40:20.144537 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:40:20.145990 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:40:20.179196 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (740)
Mar 13 00:40:20.183382 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:20.183446 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:40:20.191438 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:40:20.191503 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:40:20.197173 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:20.198550 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:40:20.201332 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:40:20.249852 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:40:20.253332 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:40:20.284711 systemd-networkd[878]: lo: Link UP
Mar 13 00:40:20.284720 systemd-networkd[878]: lo: Gained carrier
Mar 13 00:40:20.285986 systemd-networkd[878]: Enumeration completed
Mar 13 00:40:20.286074 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:40:20.286472 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:40:20.286476 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:40:20.287033 systemd-networkd[878]: eth0: Link UP
Mar 13 00:40:20.287824 systemd-networkd[878]: eth0: Gained carrier
Mar 13 00:40:20.287834 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:40:20.295057 systemd[1]: Reached target network.target - Network.
Mar 13 00:40:20.303216 systemd-networkd[878]: eth0: DHCPv4 address 10.0.0.185/25, gateway 10.0.0.129 acquired from 10.0.0.129
Mar 13 00:40:20.365861 ignition[815]: Ignition 2.22.0
Mar 13 00:40:20.365875 ignition[815]: Stage: fetch-offline
Mar 13 00:40:20.365909 ignition[815]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:20.367573 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:40:20.365917 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 00:40:20.366007 ignition[815]: parsed url from cmdline: ""
Mar 13 00:40:20.366010 ignition[815]: no config URL provided
Mar 13 00:40:20.366015 ignition[815]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:40:20.366020 ignition[815]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:40:20.369853 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 00:40:20.366026 ignition[815]: failed to fetch config: resource requires networking
Mar 13 00:40:20.366268 ignition[815]: Ignition finished successfully
Mar 13 00:40:20.406640 ignition[887]: Ignition 2.22.0
Mar 13 00:40:20.407416 ignition[887]: Stage: fetch
Mar 13 00:40:20.407563 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:20.407572 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 00:40:20.407651 ignition[887]: parsed url from cmdline: ""
Mar 13 00:40:20.407654 ignition[887]: no config URL provided
Mar 13 00:40:20.407658 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:40:20.407664 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:40:20.407774 ignition[887]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 13 00:40:20.407799 ignition[887]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 13 00:40:20.407835 ignition[887]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 13 00:40:21.408002 ignition[887]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 13 00:40:21.408032 ignition[887]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 13 00:40:21.414330 ignition[887]: GET result: OK
Mar 13 00:40:21.414440 ignition[887]: parsing config with SHA512: a63b2c5d48d5a00310584a273c9e6f0b7ecb8ffd6b63075026f3b52c10158441b56b3c882a2e8a9e804e45718f73cfdd2bead6cd9bc79f49f435b0549c60a86c
Mar 13 00:40:21.420484 unknown[887]: fetched base config from "system"
Mar 13 00:40:21.420802 ignition[887]: fetch: fetch complete
Mar 13 00:40:21.420495 unknown[887]: fetched base config from "system"
Mar 13 00:40:21.420807 ignition[887]: fetch: fetch passed
Mar 13 00:40:21.420500 unknown[887]: fetched user config from "openstack"
Mar 13 00:40:21.420845 ignition[887]: Ignition finished successfully
Mar 13 00:40:21.423055 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 00:40:21.425950 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:40:21.458586 ignition[893]: Ignition 2.22.0
Mar 13 00:40:21.458598 ignition[893]: Stage: kargs
Mar 13 00:40:21.458749 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:21.458757 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 00:40:21.459415 ignition[893]: kargs: kargs passed
Mar 13 00:40:21.459455 ignition[893]: Ignition finished successfully
Mar 13 00:40:21.461376 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:40:21.463841 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:40:21.485061 ignition[899]: Ignition 2.22.0
Mar 13 00:40:21.485074 ignition[899]: Stage: disks
Mar 13 00:40:21.485218 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:21.485226 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 00:40:21.485859 ignition[899]: disks: disks passed
Mar 13 00:40:21.487573 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:40:21.485897 ignition[899]: Ignition finished successfully
Mar 13 00:40:21.488289 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:40:21.488608 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:40:21.488905 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:40:21.489206 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:40:21.489480 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:40:21.493287 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:40:21.523623 systemd-fsck[909]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Mar 13 00:40:21.526253 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:40:21.528448 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:40:21.677171 kernel: EXT4-fs (vda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:40:21.677257 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:40:21.678327 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:40:21.681074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:40:21.683213 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:40:21.683858 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:40:21.686278 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 13 00:40:21.686732 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:40:21.686761 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:40:21.699075 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:40:21.701254 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:40:21.724507 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (917)
Mar 13 00:40:21.728768 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:21.728824 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:40:21.743856 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:40:21.743919 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:40:21.746270 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:40:21.768169 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 00:40:21.778707 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:40:21.779412 systemd-networkd[878]: eth0: Gained IPv6LL
Mar 13 00:40:21.784723 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:40:21.789659 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:40:21.792877 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:40:21.897374 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:40:21.899707 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:40:21.901283 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:40:21.913778 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:40:21.916492 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:21.941737 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:40:21.951531 ignition[1033]: INFO : Ignition 2.22.0
Mar 13 00:40:21.951531 ignition[1033]: INFO : Stage: mount
Mar 13 00:40:21.951531 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:21.951531 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 00:40:21.951531 ignition[1033]: INFO : mount: mount passed
Mar 13 00:40:21.951531 ignition[1033]: INFO : Ignition finished successfully
Mar 13 00:40:21.954321 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:40:22.808186 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 00:40:24.816171 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 00:40:28.825169 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 00:40:28.832465 coreos-metadata[919]: Mar 13 00:40:28.832 WARN failed to locate config-drive, using the metadata service API instead
Mar 13 00:40:28.844069 coreos-metadata[919]: Mar 13 00:40:28.844 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 13 00:40:31.127192 coreos-metadata[919]: Mar 13 00:40:31.127 INFO Fetch successful
Mar 13 00:40:31.129289 coreos-metadata[919]: Mar 13 00:40:31.129 INFO wrote hostname ci-4459-2-4-n-8f702bd38e to /sysroot/etc/hostname
Mar 13 00:40:31.130649 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 13 00:40:31.130765 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 13 00:40:31.132420 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:40:31.151737 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:40:31.181160 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050)
Mar 13 00:40:31.185266 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:40:31.185303 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:40:31.190252 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:40:31.190287 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:40:31.192295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:40:31.224159 ignition[1068]: INFO : Ignition 2.22.0
Mar 13 00:40:31.224159 ignition[1068]: INFO : Stage: files
Mar 13 00:40:31.224159 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:31.224159 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 00:40:31.227038 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:40:31.227806 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:40:31.227806 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:40:31.230328 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:40:31.230842 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:40:31.231238 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:40:31.231068 unknown[1068]: wrote ssh authorized keys file for user: core
Mar 13 00:40:31.233258 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:40:31.233871 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:40:31.282369 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:40:31.387422 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:40:31.387422 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 00:40:31.389085 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 13 00:40:31.644313 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 13 00:40:31.798005 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 00:40:31.798005 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:40:31.798005 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:40:31.798005 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:40:31.798005 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:40:31.798005 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:40:31.798005 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:40:31.798005 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:40:31.798005 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:40:31.803654 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:40:31.803654 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:40:31.803654 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:40:31.803654 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:40:31.803654 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:40:31.803654 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 13 00:40:32.046338 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 13 00:40:32.600495 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:40:32.600495 ignition[1068]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 13 00:40:32.602322 ignition[1068]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:40:32.604578 ignition[1068]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:40:32.604578 ignition[1068]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 13 00:40:32.604578 ignition[1068]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:40:32.606167 ignition[1068]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:40:32.606167 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:40:32.606167 ignition[1068]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:40:32.606167 ignition[1068]: INFO : files: files passed
Mar 13 00:40:32.606167 ignition[1068]: INFO : Ignition finished successfully
Mar 13 00:40:32.606847 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:40:32.610022 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:40:32.611529 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:40:32.634350 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:40:32.634461 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:40:32.641670 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:40:32.641670 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:40:32.643515 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:40:32.645266 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:40:32.645993 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:40:32.647657 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:40:32.695498 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:40:32.695625 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:40:32.697060 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:40:32.698053 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:40:32.699440 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:40:32.700928 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:40:32.723574 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:40:32.726289 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:40:32.753423 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:40:32.754301 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:40:32.755514 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:40:32.756668 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:40:32.756789 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:40:32.758710 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:40:32.759737 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:40:32.760628 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:40:32.761540 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:40:32.762467 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:40:32.763346 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:40:32.764202 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:40:32.765050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:40:32.766009 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:40:32.766931 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:40:32.767779 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:40:32.768646 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:40:32.768769 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:40:32.769919 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:40:32.770795 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:40:32.771564 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:40:32.771637 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:40:32.772435 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:40:32.772530 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:40:32.773772 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:40:32.773876 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:40:32.774644 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:40:32.774731 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:40:32.776290 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:40:32.778218 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:40:32.778322 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:40:32.780301 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:40:32.783215 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:40:32.783761 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:40:32.785373 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:40:32.785879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:40:32.789817 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:40:32.790333 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:40:32.804273 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:40:32.808696 ignition[1121]: INFO : Ignition 2.22.0
Mar 13 00:40:32.810427 ignition[1121]: INFO : Stage: umount
Mar 13 00:40:32.810427 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:40:32.810427 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 00:40:32.810427 ignition[1121]: INFO : umount: umount passed
Mar 13 00:40:32.810427 ignition[1121]: INFO : Ignition finished successfully
Mar 13 00:40:32.810444 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:40:32.810578 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:40:32.813430 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:40:32.813548 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:40:32.814365 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:40:32.814452 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:40:32.815264 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:40:32.815302 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:40:32.815935 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 00:40:32.815968 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 00:40:32.816645 systemd[1]: Stopped target network.target - Network.
Mar 13 00:40:32.817386 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:40:32.817426 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:40:32.818168 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:40:32.818943 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:40:32.819005 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:40:32.819697 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:40:32.820447 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:40:32.821194 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:40:32.821232 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:40:32.821900 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:40:32.821932 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:40:32.822578 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:40:32.822624 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:40:32.823361 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:40:32.823400 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:40:32.824113 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:40:32.824187 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:40:32.824991 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:40:32.825856 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:40:32.833986 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:40:32.834096 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:40:32.837440 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:40:32.838158 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:40:32.838747 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:40:32.840409 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:40:32.840973 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:40:32.841457 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:40:32.841498 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:40:32.844239 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:40:32.845008 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:40:32.845434 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:40:32.846254 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:40:32.846743 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:40:32.847577 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:40:32.847951 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:40:32.848735 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:40:32.848769 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:40:32.849550 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:40:32.851519 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:40:32.851575 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:40:32.859657 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:40:32.867452 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:40:32.868641 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:40:32.868721 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:40:32.869534 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:40:32.869563 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:40:32.870284 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:40:32.870329 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:40:32.871477 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:40:32.871514 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:40:32.872719 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:40:32.872763 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:40:32.876291 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:40:32.877100 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:40:32.877564 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:40:32.878477 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:40:32.878903 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:40:32.879823 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:40:32.880244 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:32.882356 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:40:32.882406 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:40:32.882442 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:40:32.882767 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:40:32.882865 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:40:32.890539 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:40:32.890633 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:40:32.891801 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:40:32.893363 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:40:32.912176 systemd[1]: Switching root.
Mar 13 00:40:32.954565 systemd-journald[224]: Journal stopped
Mar 13 00:40:34.123720 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:40:34.123796 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:40:34.123812 kernel: SELinux: policy capability open_perms=1
Mar 13 00:40:34.123823 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:40:34.123833 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:40:34.123845 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:40:34.123855 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:40:34.123864 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:40:34.123877 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:40:34.123894 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:40:34.123906 kernel: audit: type=1403 audit(1773362433.196:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:40:34.123921 systemd[1]: Successfully loaded SELinux policy in 69.904ms.
Mar 13 00:40:34.123942 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.405ms.
Mar 13 00:40:34.123954 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:40:34.123966 systemd[1]: Detected virtualization kvm.
Mar 13 00:40:34.123978 systemd[1]: Detected architecture x86-64.
Mar 13 00:40:34.123990 systemd[1]: Detected first boot.
Mar 13 00:40:34.124001 systemd[1]: Hostname set to .
Mar 13 00:40:34.124012 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:40:34.124024 zram_generator::config[1165]: No configuration found.
Mar 13 00:40:34.124037 kernel: Guest personality initialized and is inactive
Mar 13 00:40:34.124047 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:40:34.124060 kernel: Initialized host personality
Mar 13 00:40:34.124070 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:40:34.124079 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:40:34.124092 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:40:34.124103 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:40:34.124113 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:40:34.124126 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:40:34.128189 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:40:34.128224 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:40:34.128236 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:40:34.128248 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:40:34.128261 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:40:34.128271 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:40:34.128282 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:40:34.128292 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:40:34.128302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:40:34.128314 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:40:34.128325 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:40:34.128336 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:40:34.128348 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:40:34.128363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:40:34.128373 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:40:34.128384 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:40:34.128395 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:40:34.128405 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:40:34.128415 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:40:34.128428 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:40:34.128438 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:40:34.128449 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:40:34.128462 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:40:34.128473 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:40:34.128487 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:40:34.128498 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:40:34.128510 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:40:34.128524 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:40:34.128536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:40:34.128547 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:40:34.128557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:40:34.128567 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:40:34.128577 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:40:34.128587 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:40:34.128598 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:40:34.128608 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:34.128619 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:40:34.128632 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:40:34.128643 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:40:34.128656 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:40:34.128666 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:40:34.128677 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:40:34.128687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:40:34.128698 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:40:34.128709 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:40:34.128719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:40:34.128732 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:40:34.128742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:40:34.128753 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:40:34.128763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:40:34.128774 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:40:34.128785 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:40:34.128797 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:40:34.128808 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:40:34.128818 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:40:34.128829 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:40:34.128844 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:40:34.128857 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:40:34.128867 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:40:34.128878 kernel: loop: module loaded
Mar 13 00:40:34.128890 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:40:34.128900 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:40:34.128911 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:40:34.128921 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:40:34.128934 systemd[1]: Stopped verity-setup.service.
Mar 13 00:40:34.128944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:34.128956 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:40:34.128967 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:40:34.128979 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:40:34.128989 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:40:34.129001 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:40:34.129012 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:40:34.129022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:40:34.129034 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:40:34.129046 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:40:34.129057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:40:34.129067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:40:34.129078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:40:34.129088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:40:34.129098 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:40:34.129109 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:40:34.129119 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:40:34.129131 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:40:34.131904 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:40:34.131920 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:40:34.131932 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:40:34.131943 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:40:34.131954 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:40:34.131964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:40:34.132008 systemd-journald[1236]: Collecting audit messages is disabled.
Mar 13 00:40:34.132042 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:40:34.132053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:40:34.132064 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:40:34.132075 systemd-journald[1236]: Journal started
Mar 13 00:40:34.132099 systemd-journald[1236]: Runtime Journal (/run/log/journal/74ca41fc4d484f12bad56b963cedcd32) is 8M, max 78M, 70M free.
Mar 13 00:40:33.821957 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:40:33.841227 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 13 00:40:33.841596 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:40:34.137525 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:40:34.137571 kernel: fuse: init (API version 7.41)
Mar 13 00:40:34.145153 kernel: ACPI: bus type drm_connector registered
Mar 13 00:40:34.149533 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:40:34.149578 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:40:34.156161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:40:34.161154 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:40:34.164065 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:40:34.165687 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:40:34.165873 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:40:34.166570 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:40:34.166722 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:40:34.167407 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:40:34.168614 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:40:34.190307 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:40:34.192182 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:40:34.200324 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:40:34.201675 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:40:34.203663 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:40:34.218314 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:40:34.223151 kernel: loop0: detected capacity change from 0 to 219192
Mar 13 00:40:34.235978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:40:34.249322 systemd-journald[1236]: Time spent on flushing to /var/log/journal/74ca41fc4d484f12bad56b963cedcd32 is 67.857ms for 1754 entries.
Mar 13 00:40:34.249322 systemd-journald[1236]: System Journal (/var/log/journal/74ca41fc4d484f12bad56b963cedcd32) is 8M, max 584.8M, 576.8M free.
Mar 13 00:40:34.341452 systemd-journald[1236]: Received client request to flush runtime journal.
Mar 13 00:40:34.341504 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:40:34.341525 kernel: loop1: detected capacity change from 0 to 128560
Mar 13 00:40:34.305728 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:40:34.317122 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:40:34.323047 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:40:34.344809 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:40:34.350273 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:40:34.355893 kernel: loop2: detected capacity change from 0 to 1640
Mar 13 00:40:34.370901 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Mar 13 00:40:34.371220 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Mar 13 00:40:34.377634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:40:34.393468 kernel: loop3: detected capacity change from 0 to 110984
Mar 13 00:40:34.428142 kernel: loop4: detected capacity change from 0 to 219192
Mar 13 00:40:34.457430 kernel: loop5: detected capacity change from 0 to 128560
Mar 13 00:40:34.474528 kernel: loop6: detected capacity change from 0 to 1640
Mar 13 00:40:34.484048 kernel: loop7: detected capacity change from 0 to 110984
Mar 13 00:40:34.503180 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-stackit'.
Mar 13 00:40:34.503987 (sd-merge)[1315]: Merged extensions into '/usr'.
Mar 13 00:40:34.509465 systemd[1]: Reload requested from client PID 1270 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:40:34.509788 systemd[1]: Reloading...
Mar 13 00:40:34.615162 zram_generator::config[1340]: No configuration found.
Mar 13 00:40:34.754241 ldconfig[1259]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:40:34.841268 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:40:34.841717 systemd[1]: Reloading finished in 330 ms.
Mar 13 00:40:34.876708 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:40:34.877817 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:40:34.878796 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:40:34.883737 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:40:34.894158 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:40:34.897249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:40:34.899364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:40:34.917613 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:40:34.922260 systemd[1]: Reload requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:40:34.922274 systemd[1]: Reloading...
Mar 13 00:40:34.924064 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:40:34.924089 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:40:34.924294 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:40:34.924489 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:40:34.925104 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:40:34.925328 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Mar 13 00:40:34.925373 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Mar 13 00:40:34.929334 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:40:34.929344 systemd-tmpfiles[1387]: Skipping /boot
Mar 13 00:40:34.935960 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:40:34.935971 systemd-tmpfiles[1387]: Skipping /boot
Mar 13 00:40:34.954068 systemd-udevd[1388]: Using default interface naming scheme 'v255'.
Mar 13 00:40:35.001195 zram_generator::config[1416]: No configuration found.
Mar 13 00:40:35.225792 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:40:35.250003 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 13 00:40:35.249108 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:40:35.249423 systemd[1]: Reloading finished in 326 ms.
Mar 13 00:40:35.257393 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:40:35.260211 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:40:35.264866 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:40:35.298090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:40:35.300981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:35.305412 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:40:35.309335 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:40:35.310384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:40:35.313254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:40:35.315358 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:40:35.320415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:40:35.321298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:40:35.323366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:40:35.325676 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:40:35.328402 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:40:35.330724 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:40:35.339411 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:40:35.344611 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:40:35.345158 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:35.349511 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:35.350785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:40:35.350978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:40:35.351079 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:40:35.351477 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:35.357051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:40:35.358461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:40:35.359578 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:35.360257 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:40:35.366088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:40:35.373851 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm...
Mar 13 00:40:35.374573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:40:35.374699 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:40:35.374859 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:40:35.376130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:40:35.377099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:40:35.378028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:40:35.385520 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:40:35.389375 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:40:35.391930 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:40:35.398659 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:40:35.400820 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:40:35.413935 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:40:35.416078 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:40:35.423453 augenrules[1537]: No rules
Mar 13 00:40:35.426556 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Mar 13 00:40:35.424892 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:40:35.425821 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:40:35.428573 kernel: Console: switching to colour dummy device 80x25
Mar 13 00:40:35.428649 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Mar 13 00:40:35.429555 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:40:35.429711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:40:35.429913 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:40:35.432503 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 13 00:40:35.432543 kernel: [drm] features: -context_init
Mar 13 00:40:35.436369 kernel: [drm] number of scanouts: 1
Mar 13 00:40:35.436407 kernel: [drm] number of cap sets: 0
Mar 13 00:40:35.440498 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Mar 13 00:40:35.443869 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:40:35.444357 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:40:35.444907 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:40:35.448553 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 13 00:40:35.448600 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 13 00:40:35.454173 kernel: PTP clock support registered
Mar 13 00:40:35.458403 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully.
Mar 13 00:40:35.458717 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm.
Mar 13 00:40:35.460174 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 13 00:40:35.460219 kernel: Console: switching to colour frame buffer device 160x50
Mar 13 00:40:35.473348 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 13 00:40:35.485875 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:40:35.506332 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:40:35.507296 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:40:35.541177 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 13 00:40:35.541430 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 13 00:40:35.545075 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 13 00:40:35.596785 systemd-resolved[1511]: Positive Trust Anchors:
Mar 13 00:40:35.596802 systemd-resolved[1511]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:40:35.596835 systemd-resolved[1511]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:40:35.602205 systemd-networkd[1507]: lo: Link UP
Mar 13 00:40:35.602212 systemd-networkd[1507]: lo: Gained carrier
Mar 13 00:40:35.603469 systemd-networkd[1507]: Enumeration completed
Mar 13 00:40:35.603631 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:40:35.603864 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:40:35.603919 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:40:35.604496 systemd-networkd[1507]: eth0: Link UP
Mar 13 00:40:35.604751 systemd-networkd[1507]: eth0: Gained carrier
Mar 13 00:40:35.604870 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:40:35.607326 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:40:35.609312 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:40:35.612357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:35.615609 systemd-networkd[1507]: eth0: DHCPv4 address 10.0.0.185/25, gateway 10.0.0.129 acquired from 10.0.0.129
Mar 13 00:40:35.615881 systemd-resolved[1511]: Using system hostname 'ci-4459-2-4-n-8f702bd38e'.
Mar 13 00:40:35.627239 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:40:35.628721 systemd[1]: Reached target network.target - Network.
Mar 13 00:40:35.628841 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:40:35.650102 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:40:35.657273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:40:35.657468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:35.663340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:35.726627 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:40:35.727186 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:35.734344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:40:35.802969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:40:35.804826 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:40:35.804992 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:40:35.805077 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:40:35.805293 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:40:35.806327 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:40:35.806878 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:40:35.808409 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:40:35.808887 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:40:35.808918 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:40:35.809368 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:40:35.813212 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:40:35.815551 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:40:35.819464 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:40:35.820721 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:40:35.821084 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:40:35.823208 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:40:35.824918 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:40:35.825967 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:40:35.829274 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:40:35.829630 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:40:35.830010 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:40:35.830036 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:40:35.832766 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 13 00:40:35.838235 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:40:35.842071 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 00:40:35.845246 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:40:35.857258 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:40:35.859416 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:40:35.861839 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:40:35.870715 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 00:40:35.871326 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:40:35.872311 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:40:35.882156 jq[1595]: false
Mar 13 00:40:35.878261 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:40:35.883255 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:40:35.888156 extend-filesystems[1596]: Found /dev/vda6
Mar 13 00:40:35.893151 extend-filesystems[1596]: Found /dev/vda9
Mar 13 00:40:35.895075 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:40:35.899564 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:40:35.906734 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:40:35.910862 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 00:40:35.911448 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:40:35.912745 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:40:35.920230 extend-filesystems[1596]: Checking size of /dev/vda9
Mar 13 00:40:35.918463 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:40:35.924743 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:40:35.926232 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:40:35.927228 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:40:35.927471 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:40:35.927613 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:40:35.938722 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:40:35.939000 extend-filesystems[1596]: Resized partition /dev/vda9
Mar 13 00:40:35.939992 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:40:35.944348 extend-filesystems[1624]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:40:35.946205 jq[1618]: true
Mar 13 00:40:35.951150 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Refreshing passwd entry cache
Mar 13 00:40:35.948093 oslogin_cache_refresh[1599]: Refreshing passwd entry cache
Mar 13 00:40:35.954209 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 12499963 blocks
Mar 13 00:40:35.979754 oslogin_cache_refresh[1599]: Failure getting users, quitting
Mar 13 00:40:35.980275 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Failure getting users, quitting
Mar 13 00:40:35.980275 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:40:35.980275 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Refreshing group entry cache
Mar 13 00:40:35.979772 oslogin_cache_refresh[1599]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:40:35.979810 oslogin_cache_refresh[1599]: Refreshing group entry cache
Mar 13 00:40:35.985304 tar[1621]: linux-amd64/LICENSE
Mar 13 00:40:35.989436 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Failure getting groups, quitting
Mar 13 00:40:35.989436 google_oslogin_nss_cache[1599]: oslogin_cache_refresh[1599]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:40:35.988390 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:40:35.985597 oslogin_cache_refresh[1599]: Failure getting groups, quitting
Mar 13 00:40:35.988577 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:40:35.985607 oslogin_cache_refresh[1599]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:40:35.989816 tar[1621]: linux-amd64/helm
Mar 13 00:40:35.994923 jq[1629]: true
Mar 13 00:40:35.997643 (ntainerd)[1633]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:40:36.005075 chronyd[1590]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Mar 13 00:40:36.012944 update_engine[1616]: I20260313 00:40:36.010958 1616 main.cc:92] Flatcar Update Engine starting
Mar 13 00:40:36.006755 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:40:36.006581 dbus-daemon[1593]: [system] SELinux support is enabled
Mar 13 00:40:36.008166 chronyd[1590]: Loaded seccomp filter (level 2)
Mar 13 00:40:36.014583 systemd[1]: Started chronyd.service - NTP client/server.
Mar 13 00:40:36.018312 update_engine[1616]: I20260313 00:40:36.016036 1616 update_check_scheduler.cc:74] Next update check in 2m6s
Mar 13 00:40:36.019516 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:40:36.019546 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:40:36.020023 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:40:36.020038 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:40:36.022248 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:40:36.030364 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:40:36.092364 systemd-logind[1610]: New seat seat0.
Mar 13 00:40:36.098563 systemd-logind[1610]: Watching system buttons on /dev/input/event3 (Power Button)
Mar 13 00:40:36.098785 systemd-logind[1610]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:40:36.098940 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:40:36.136397 bash[1657]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:40:36.134987 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:40:36.140458 systemd[1]: Starting sshkeys.service...
Mar 13 00:40:36.181094 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 13 00:40:36.184213 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 13 00:40:36.199200 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 00:40:36.213577 locksmithd[1642]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 13 00:40:36.257585 containerd[1633]: time="2026-03-13T00:40:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 13 00:40:36.258853 containerd[1633]: time="2026-03-13T00:40:36.258817071Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 13 00:40:36.278695 containerd[1633]: time="2026-03-13T00:40:36.278652032Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.071µs"
Mar 13 00:40:36.278695 containerd[1633]: time="2026-03-13T00:40:36.278690306Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 13 00:40:36.278799 containerd[1633]: time="2026-03-13T00:40:36.278714877Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 13 00:40:36.279224 containerd[1633]: time="2026-03-13T00:40:36.279027426Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 13 00:40:36.279224 containerd[1633]: time="2026-03-13T00:40:36.279049765Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 13 00:40:36.279224 containerd[1633]: time="2026-03-13T00:40:36.279073347Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:40:36.279224 containerd[1633]: time="2026-03-13T00:40:36.279144710Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:40:36.279224 containerd[1633]: time="2026-03-13T00:40:36.279155524Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:40:36.279957 containerd[1633]: time="2026-03-13T00:40:36.279937855Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:40:36.279982 containerd[1633]: time="2026-03-13T00:40:36.279956076Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:40:36.279982 containerd[1633]: time="2026-03-13T00:40:36.279965914Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:40:36.279982 containerd[1633]: time="2026-03-13T00:40:36.279972109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 13 00:40:36.280048 containerd[1633]: time="2026-03-13T00:40:36.280037206Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 13 00:40:36.280658 containerd[1633]: time="2026-03-13T00:40:36.280531210Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:40:36.280658 containerd[1633]: time="2026-03-13T00:40:36.280560701Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:40:36.280658 containerd[1633]: time="2026-03-13T00:40:36.280569963Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 13 00:40:36.280658 containerd[1633]: time="2026-03-13T00:40:36.280590048Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 13 00:40:36.281311 containerd[1633]: time="2026-03-13T00:40:36.281295651Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 13 00:40:36.281661 containerd[1633]: time="2026-03-13T00:40:36.281546584Z" level=info msg="metadata content store policy set" policy=shared
Mar 13 00:40:36.293167 kernel: EXT4-fs (vda9): resized filesystem to 12499963
Mar 13 00:40:36.309036 containerd[1633]: time="2026-03-13T00:40:36.308968382Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 13 00:40:36.309036 containerd[1633]: time="2026-03-13T00:40:36.309024561Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 13 00:40:36.309036 containerd[1633]: time="2026-03-13T00:40:36.309038077Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 13 00:40:36.309036 containerd[1633]: time="2026-03-13T00:40:36.309048485Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 13 00:40:36.309416 containerd[1633]: time="2026-03-13T00:40:36.309059872Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 13 00:40:36.309416 containerd[1633]: time="2026-03-13T00:40:36.309069126Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 13 00:40:36.309416 containerd[1633]: time="2026-03-13T00:40:36.309081690Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 13 00:40:36.309416 containerd[1633]: time="2026-03-13T00:40:36.309091756Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 13 00:40:36.310275 containerd[1633]: time="2026-03-13T00:40:36.310176423Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 13 00:40:36.310275 containerd[1633]: time="2026-03-13T00:40:36.310191021Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 13 00:40:36.310275 containerd[1633]: time="2026-03-13T00:40:36.310200048Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 13 00:40:36.310275 containerd[1633]: time="2026-03-13T00:40:36.310223329Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 13 00:40:36.310382 containerd[1633]: time="2026-03-13T00:40:36.310370201Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310403757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310418933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310438773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310450245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310459231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310467632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310484916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310494390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310503092Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310511547Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310556964Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310569348Z" level=info msg="Start snapshots syncer"
Mar 13 00:40:36.310598 containerd[1633]: time="2026-03-13T00:40:36.310593141Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 13 00:40:36.311051 containerd[1633]: time="2026-03-13T00:40:36.310906560Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 13 00:40:36.311156 extend-filesystems[1624]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 13 00:40:36.311156 extend-filesystems[1624]: old_desc_blocks = 1, new_desc_blocks = 6
Mar 13 00:40:36.311156 extend-filesystems[1624]: The filesystem on /dev/vda9 is now 12499963 (4k) blocks long.
Mar 13 00:40:36.312700 extend-filesystems[1596]: Resized filesystem in /dev/vda9
Mar 13 00:40:36.312626 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 13 00:40:36.314939 containerd[1633]: time="2026-03-13T00:40:36.312870711Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 13 00:40:36.312805 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317273190Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317411716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317435415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317446291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317455228Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317466959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317476557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317485575Z"
level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317511049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317519842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317529714Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317562989Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317576940Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:40:36.319119 containerd[1633]: time="2026-03-13T00:40:36.317584561Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.317594281Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.317600404Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.317610289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.317624678Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri 
type=io.containerd.nri.v1 Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.317638970Z" level=info msg="runtime interface created" Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.317644484Z" level=info msg="created NRI interface" Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.317651292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.317665564Z" level=info msg="Connect containerd service" Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.317682934Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:40:36.319391 containerd[1633]: time="2026-03-13T00:40:36.318281870Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:40:36.431904 sshd_keygen[1639]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:40:36.455575 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:40:36.469372 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Mar 13 00:40:36.484498 containerd[1633]: time="2026-03-13T00:40:36.484467848Z" level=info msg="Start subscribing containerd event" Mar 13 00:40:36.484633 containerd[1633]: time="2026-03-13T00:40:36.484612512Z" level=info msg="Start recovering state" Mar 13 00:40:36.484749 containerd[1633]: time="2026-03-13T00:40:36.484740528Z" level=info msg="Start event monitor" Mar 13 00:40:36.484787 containerd[1633]: time="2026-03-13T00:40:36.484781226Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:40:36.484817 containerd[1633]: time="2026-03-13T00:40:36.484811913Z" level=info msg="Start streaming server" Mar 13 00:40:36.484858 containerd[1633]: time="2026-03-13T00:40:36.484850221Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:40:36.484925 containerd[1633]: time="2026-03-13T00:40:36.484916142Z" level=info msg="runtime interface starting up..." Mar 13 00:40:36.484961 containerd[1633]: time="2026-03-13T00:40:36.484954827Z" level=info msg="starting plugins..." Mar 13 00:40:36.486466 containerd[1633]: time="2026-03-13T00:40:36.485419588Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:40:36.486466 containerd[1633]: time="2026-03-13T00:40:36.485461498Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 00:40:36.486466 containerd[1633]: time="2026-03-13T00:40:36.485688339Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:40:36.485904 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:40:36.487546 containerd[1633]: time="2026-03-13T00:40:36.487527249Z" level=info msg="containerd successfully booted in 0.230682s" Mar 13 00:40:36.487750 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:40:36.487962 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:40:36.494800 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 13 00:40:36.516002 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:40:36.521461 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:40:36.523339 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 00:40:36.525376 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:40:36.562197 tar[1621]: linux-amd64/README.md Mar 13 00:40:36.582836 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:40:36.893159 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 00:40:37.203334 systemd-networkd[1507]: eth0: Gained IPv6LL Mar 13 00:40:37.205724 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:40:37.207773 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:40:37.210859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:37.213379 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:40:37.219165 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 00:40:37.241305 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:40:38.226802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:40:38.241747 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:40:38.847483 kubelet[1728]: E0313 00:40:38.847422 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:40:38.849802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:40:38.849934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:40:38.850443 systemd[1]: kubelet.service: Consumed 895ms CPU time, 258.5M memory peak. Mar 13 00:40:38.904177 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 00:40:39.234176 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 00:40:42.912166 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 00:40:42.918387 coreos-metadata[1592]: Mar 13 00:40:42.918 WARN failed to locate config-drive, using the metadata service API instead Mar 13 00:40:42.932495 coreos-metadata[1592]: Mar 13 00:40:42.932 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 13 00:40:43.247169 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 00:40:43.257417 coreos-metadata[1670]: Mar 13 00:40:43.257 WARN failed to locate config-drive, using the metadata service API instead Mar 13 00:40:43.268931 coreos-metadata[1670]: Mar 13 00:40:43.268 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 13 00:40:44.620038 coreos-metadata[1592]: Mar 13 00:40:44.619 INFO Fetch successful Mar 13 00:40:44.620038 coreos-metadata[1592]: Mar 13 00:40:44.619 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 13 00:40:45.337375 
coreos-metadata[1670]: Mar 13 00:40:45.337 INFO Fetch successful Mar 13 00:40:45.337375 coreos-metadata[1670]: Mar 13 00:40:45.337 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 13 00:40:46.035160 coreos-metadata[1592]: Mar 13 00:40:46.035 INFO Fetch successful Mar 13 00:40:46.035160 coreos-metadata[1592]: Mar 13 00:40:46.035 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 13 00:40:46.738172 coreos-metadata[1670]: Mar 13 00:40:46.737 INFO Fetch successful Mar 13 00:40:46.740643 unknown[1670]: wrote ssh authorized keys file for user: core Mar 13 00:40:46.765966 update-ssh-keys[1747]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:40:46.766989 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 13 00:40:46.769215 systemd[1]: Finished sshkeys.service. Mar 13 00:40:46.775199 coreos-metadata[1592]: Mar 13 00:40:46.775 INFO Fetch successful Mar 13 00:40:46.775199 coreos-metadata[1592]: Mar 13 00:40:46.775 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 13 00:40:47.266374 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:40:47.267655 systemd[1]: Started sshd@0-10.0.0.185:22-4.153.228.146:36920.service - OpenSSH per-connection server daemon (4.153.228.146:36920). Mar 13 00:40:47.450349 coreos-metadata[1592]: Mar 13 00:40:47.450 INFO Fetch successful Mar 13 00:40:47.450349 coreos-metadata[1592]: Mar 13 00:40:47.450 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 13 00:40:47.796733 sshd[1751]: Accepted publickey for core from 4.153.228.146 port 36920 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:40:47.798886 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:47.810174 systemd-logind[1610]: New session 1 of user core. 
Mar 13 00:40:47.811298 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:40:47.812304 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:40:47.831903 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:40:47.834112 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:40:47.852284 (systemd)[1756]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:40:47.854336 systemd-logind[1610]: New session c1 of user core. Mar 13 00:40:47.981590 systemd[1756]: Queued start job for default target default.target. Mar 13 00:40:47.993120 systemd[1756]: Created slice app.slice - User Application Slice. Mar 13 00:40:47.993163 systemd[1756]: Reached target paths.target - Paths. Mar 13 00:40:47.993202 systemd[1756]: Reached target timers.target - Timers. Mar 13 00:40:47.994449 systemd[1756]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:40:48.005575 systemd[1756]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:40:48.005679 systemd[1756]: Reached target sockets.target - Sockets. Mar 13 00:40:48.005723 systemd[1756]: Reached target basic.target - Basic System. Mar 13 00:40:48.005757 systemd[1756]: Reached target default.target - Main User Target. Mar 13 00:40:48.005784 systemd[1756]: Startup finished in 146ms. Mar 13 00:40:48.005915 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:40:48.007458 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:40:48.127344 coreos-metadata[1592]: Mar 13 00:40:48.127 INFO Fetch successful Mar 13 00:40:48.127344 coreos-metadata[1592]: Mar 13 00:40:48.127 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 13 00:40:48.301494 systemd[1]: Started sshd@1-10.0.0.185:22-4.153.228.146:41884.service - OpenSSH per-connection server daemon (4.153.228.146:41884). 
Mar 13 00:40:48.801959 coreos-metadata[1592]: Mar 13 00:40:48.801 INFO Fetch successful Mar 13 00:40:48.807551 sshd[1767]: Accepted publickey for core from 4.153.228.146 port 41884 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:40:48.808731 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:48.813827 systemd-logind[1610]: New session 2 of user core. Mar 13 00:40:48.815986 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:40:48.825739 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 13 00:40:48.826120 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:40:48.826249 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:40:48.829184 systemd[1]: Startup finished in 3.948s (kernel) + 15.555s (initrd) + 15.701s (userspace) = 35.206s. Mar 13 00:40:48.892460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:40:48.893924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:49.032719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:40:49.042522 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:40:49.080167 kubelet[1785]: E0313 00:40:49.079629 1785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:40:49.083486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:40:49.083624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:40:49.084188 systemd[1]: kubelet.service: Consumed 143ms CPU time, 110.6M memory peak. Mar 13 00:40:49.095558 sshd[1772]: Connection closed by 4.153.228.146 port 41884 Mar 13 00:40:49.095948 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:49.100231 systemd[1]: sshd@1-10.0.0.185:22-4.153.228.146:41884.service: Deactivated successfully. Mar 13 00:40:49.101626 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:40:49.102270 systemd-logind[1610]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:40:49.103021 systemd-logind[1610]: Removed session 2. Mar 13 00:40:49.210250 systemd[1]: Started sshd@2-10.0.0.185:22-4.153.228.146:41890.service - OpenSSH per-connection server daemon (4.153.228.146:41890). Mar 13 00:40:49.723068 sshd[1796]: Accepted publickey for core from 4.153.228.146 port 41890 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:40:49.724235 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:49.727935 systemd-logind[1610]: New session 3 of user core. Mar 13 00:40:49.734294 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 13 00:40:50.004099 sshd[1799]: Connection closed by 4.153.228.146 port 41890 Mar 13 00:40:50.004817 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:50.008465 systemd[1]: sshd@2-10.0.0.185:22-4.153.228.146:41890.service: Deactivated successfully. Mar 13 00:40:50.009952 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:40:50.010584 systemd-logind[1610]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:40:50.011682 systemd-logind[1610]: Removed session 3. Mar 13 00:40:50.112187 systemd[1]: Started sshd@3-10.0.0.185:22-4.153.228.146:41894.service - OpenSSH per-connection server daemon (4.153.228.146:41894). Mar 13 00:40:50.622505 sshd[1805]: Accepted publickey for core from 4.153.228.146 port 41894 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:40:50.623263 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:50.627773 systemd-logind[1610]: New session 4 of user core. Mar 13 00:40:50.633407 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:40:50.909634 sshd[1808]: Connection closed by 4.153.228.146 port 41894 Mar 13 00:40:50.910188 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:50.913391 systemd[1]: sshd@3-10.0.0.185:22-4.153.228.146:41894.service: Deactivated successfully. Mar 13 00:40:50.914813 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:40:50.915407 systemd-logind[1610]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:40:50.916398 systemd-logind[1610]: Removed session 4. Mar 13 00:40:51.016361 systemd[1]: Started sshd@4-10.0.0.185:22-4.153.228.146:41896.service - OpenSSH per-connection server daemon (4.153.228.146:41896). 
Mar 13 00:40:51.519766 sshd[1814]: Accepted publickey for core from 4.153.228.146 port 41896 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:40:51.520848 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:51.524647 systemd-logind[1610]: New session 5 of user core. Mar 13 00:40:51.531287 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:40:51.723935 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:40:51.724180 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:40:51.735256 sudo[1818]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:51.829108 sshd[1817]: Connection closed by 4.153.228.146 port 41896 Mar 13 00:40:51.830395 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:51.833803 systemd[1]: sshd@4-10.0.0.185:22-4.153.228.146:41896.service: Deactivated successfully. Mar 13 00:40:51.835513 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:40:51.836105 systemd-logind[1610]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:40:51.837150 systemd-logind[1610]: Removed session 5. Mar 13 00:40:51.933652 systemd[1]: Started sshd@5-10.0.0.185:22-4.153.228.146:41898.service - OpenSSH per-connection server daemon (4.153.228.146:41898). Mar 13 00:40:52.446355 sshd[1824]: Accepted publickey for core from 4.153.228.146 port 41898 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:40:52.447527 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:52.451693 systemd-logind[1610]: New session 6 of user core. Mar 13 00:40:52.457394 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 13 00:40:52.638306 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:40:52.638525 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:40:52.642982 sudo[1829]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:52.647780 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:40:52.647990 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:40:52.657016 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:40:52.697823 augenrules[1851]: No rules Mar 13 00:40:52.699340 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:40:52.701875 sudo[1828]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:52.699547 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:40:52.795274 sshd[1827]: Connection closed by 4.153.228.146 port 41898 Mar 13 00:40:52.795793 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:52.799656 systemd[1]: sshd@5-10.0.0.185:22-4.153.228.146:41898.service: Deactivated successfully. Mar 13 00:40:52.801290 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:40:52.802252 systemd-logind[1610]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:40:52.803153 systemd-logind[1610]: Removed session 6. Mar 13 00:40:52.899888 systemd[1]: Started sshd@6-10.0.0.185:22-4.153.228.146:41914.service - OpenSSH per-connection server daemon (4.153.228.146:41914). 
Mar 13 00:40:53.422176 sshd[1860]: Accepted publickey for core from 4.153.228.146 port 41914 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:40:53.423533 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:40:53.427916 systemd-logind[1610]: New session 7 of user core. Mar 13 00:40:53.433329 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:40:53.615891 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:40:53.616119 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:40:53.914682 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:40:53.927627 (dockerd)[1881]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:40:54.154998 dockerd[1881]: time="2026-03-13T00:40:54.154832988Z" level=info msg="Starting up" Mar 13 00:40:54.156036 dockerd[1881]: time="2026-03-13T00:40:54.156017437Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:40:54.169335 dockerd[1881]: time="2026-03-13T00:40:54.169186746Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:40:54.187619 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3474433859-merged.mount: Deactivated successfully. Mar 13 00:40:54.221532 dockerd[1881]: time="2026-03-13T00:40:54.221374983Z" level=info msg="Loading containers: start." Mar 13 00:40:54.233155 kernel: Initializing XFRM netlink socket Mar 13 00:40:54.465358 systemd-networkd[1507]: docker0: Link UP Mar 13 00:40:54.469446 dockerd[1881]: time="2026-03-13T00:40:54.469405035Z" level=info msg="Loading containers: done." 
Mar 13 00:40:54.482543 dockerd[1881]: time="2026-03-13T00:40:54.482478547Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:40:54.482700 dockerd[1881]: time="2026-03-13T00:40:54.482578568Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:40:54.482700 dockerd[1881]: time="2026-03-13T00:40:54.482658180Z" level=info msg="Initializing buildkit" Mar 13 00:40:54.518387 dockerd[1881]: time="2026-03-13T00:40:54.518337972Z" level=info msg="Completed buildkit initialization" Mar 13 00:40:54.524234 dockerd[1881]: time="2026-03-13T00:40:54.524192328Z" level=info msg="Daemon has completed initialization" Mar 13 00:40:54.524421 dockerd[1881]: time="2026-03-13T00:40:54.524242478Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:40:54.524614 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:40:55.109383 containerd[1633]: time="2026-03-13T00:40:55.109346613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 13 00:40:55.597009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1016939433.mount: Deactivated successfully. 
Mar 13 00:40:56.674053 containerd[1633]: time="2026-03-13T00:40:56.673885002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:56.675321 containerd[1633]: time="2026-03-13T00:40:56.675299367Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074595" Mar 13 00:40:56.676218 containerd[1633]: time="2026-03-13T00:40:56.676059898Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:56.677720 containerd[1633]: time="2026-03-13T00:40:56.677700152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:56.678632 containerd[1633]: time="2026-03-13T00:40:56.678405337Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.569025252s" Mar 13 00:40:56.678632 containerd[1633]: time="2026-03-13T00:40:56.678440726Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 13 00:40:56.679158 containerd[1633]: time="2026-03-13T00:40:56.679142777Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 13 00:40:57.733172 containerd[1633]: time="2026-03-13T00:40:57.733049692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:57.734442 containerd[1633]: time="2026-03-13T00:40:57.734273457Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165843" Mar 13 00:40:57.736381 containerd[1633]: time="2026-03-13T00:40:57.736360682Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:57.738608 containerd[1633]: time="2026-03-13T00:40:57.738586987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:57.739346 containerd[1633]: time="2026-03-13T00:40:57.739329080Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.060112944s" Mar 13 00:40:57.739390 containerd[1633]: time="2026-03-13T00:40:57.739352001Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 13 00:40:57.739713 containerd[1633]: time="2026-03-13T00:40:57.739696665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 13 00:40:58.589162 containerd[1633]: time="2026-03-13T00:40:58.588717096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:58.589645 containerd[1633]: time="2026-03-13T00:40:58.589627441Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729844" Mar 13 00:40:58.590782 containerd[1633]: time="2026-03-13T00:40:58.590766158Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:58.592900 containerd[1633]: time="2026-03-13T00:40:58.592874533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:58.593671 containerd[1633]: time="2026-03-13T00:40:58.593652369Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 853.932336ms" Mar 13 00:40:58.593714 containerd[1633]: time="2026-03-13T00:40:58.593676127Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 13 00:40:58.594083 containerd[1633]: time="2026-03-13T00:40:58.594071182Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 13 00:40:59.121606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 00:40:59.123804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:40:59.268235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:40:59.276543 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:40:59.328012 kubelet[2173]: E0313 00:40:59.327937 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:40:59.332305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:40:59.332427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:40:59.333242 systemd[1]: kubelet.service: Consumed 148ms CPU time, 109.9M memory peak. Mar 13 00:40:59.466280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119158567.mount: Deactivated successfully. Mar 13 00:40:59.713810 containerd[1633]: time="2026-03-13T00:40:59.713758614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:59.714906 containerd[1633]: time="2026-03-13T00:40:59.714679515Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861796" Mar 13 00:40:59.717283 containerd[1633]: time="2026-03-13T00:40:59.717213028Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:59.719029 containerd[1633]: time="2026-03-13T00:40:59.719007453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:59.719509 containerd[1633]: time="2026-03-13T00:40:59.719485407Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.125332617s" Mar 13 00:40:59.719577 containerd[1633]: time="2026-03-13T00:40:59.719566356Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 13 00:40:59.719976 containerd[1633]: time="2026-03-13T00:40:59.719955175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 13 00:40:59.791294 chronyd[1590]: Selected source PHC0 Mar 13 00:40:59.791319 chronyd[1590]: System clock wrong by 1.369638 seconds Mar 13 00:41:01.161102 systemd-resolved[1511]: Clock change detected. Flushing caches. Mar 13 00:41:01.160978 chronyd[1590]: System clock was stepped by 1.369638 seconds Mar 13 00:41:01.638231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838986103.mount: Deactivated successfully. 
Mar 13 00:41:02.417823 containerd[1633]: time="2026-03-13T00:41:02.417665833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:02.419099 containerd[1633]: time="2026-03-13T00:41:02.419070540Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388099" Mar 13 00:41:02.420171 containerd[1633]: time="2026-03-13T00:41:02.420145940Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:02.423304 containerd[1633]: time="2026-03-13T00:41:02.423269742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:02.424032 containerd[1633]: time="2026-03-13T00:41:02.423832175Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.334210816s" Mar 13 00:41:02.424032 containerd[1633]: time="2026-03-13T00:41:02.423866492Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 13 00:41:02.424594 containerd[1633]: time="2026-03-13T00:41:02.424447407Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 13 00:41:02.948011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198661312.mount: Deactivated successfully. 
Mar 13 00:41:02.954494 containerd[1633]: time="2026-03-13T00:41:02.954429971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:02.955587 containerd[1633]: time="2026-03-13T00:41:02.955560214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321238" Mar 13 00:41:02.957141 containerd[1633]: time="2026-03-13T00:41:02.957103146Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:02.959703 containerd[1633]: time="2026-03-13T00:41:02.959669683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:02.960260 containerd[1633]: time="2026-03-13T00:41:02.960241132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 535.558117ms" Mar 13 00:41:02.960316 containerd[1633]: time="2026-03-13T00:41:02.960267083Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 13 00:41:02.961099 containerd[1633]: time="2026-03-13T00:41:02.960876034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 13 00:41:03.583584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount595810679.mount: Deactivated successfully. 
Mar 13 00:41:04.278754 containerd[1633]: time="2026-03-13T00:41:04.278673979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:04.280344 containerd[1633]: time="2026-03-13T00:41:04.280269368Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860760" Mar 13 00:41:04.281642 containerd[1633]: time="2026-03-13T00:41:04.281606328Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:04.285994 containerd[1633]: time="2026-03-13T00:41:04.285937599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:04.286881 containerd[1633]: time="2026-03-13T00:41:04.286589576Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.325690327s" Mar 13 00:41:04.286881 containerd[1633]: time="2026-03-13T00:41:04.286618498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 13 00:41:07.951541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:41:07.951909 systemd[1]: kubelet.service: Consumed 148ms CPU time, 109.9M memory peak. Mar 13 00:41:07.954969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:41:07.980691 systemd[1]: Reload requested from client PID 2330 ('systemctl') (unit session-7.scope)... 
Mar 13 00:41:07.980832 systemd[1]: Reloading... Mar 13 00:41:08.083863 zram_generator::config[2373]: No configuration found. Mar 13 00:41:08.264216 systemd[1]: Reloading finished in 283 ms. Mar 13 00:41:08.327320 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:41:08.327410 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:41:08.327639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:41:08.327686 systemd[1]: kubelet.service: Consumed 95ms CPU time, 98.3M memory peak. Mar 13 00:41:08.329303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:41:08.470091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:41:08.479110 (kubelet)[2427]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:41:08.540520 kubelet[2427]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:41:08.541813 kubelet[2427]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:41:08.541813 kubelet[2427]: I0313 00:41:08.540927 2427 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:41:08.831487 kubelet[2427]: I0313 00:41:08.830344 2427 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:41:08.831487 kubelet[2427]: I0313 00:41:08.830371 2427 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:41:08.831487 kubelet[2427]: I0313 00:41:08.830396 2427 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:41:08.831487 kubelet[2427]: I0313 00:41:08.830407 2427 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:41:08.832790 kubelet[2427]: I0313 00:41:08.831856 2427 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:41:08.837746 kubelet[2427]: E0313 00:41:08.837719 2427 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.185:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.185:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:41:08.838539 kubelet[2427]: I0313 00:41:08.838521 2427 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:41:08.845755 kubelet[2427]: I0313 00:41:08.845735 2427 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:41:08.848380 kubelet[2427]: I0313 00:41:08.848363 2427 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 00:41:08.849995 kubelet[2427]: I0313 00:41:08.849967 2427 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:41:08.850139 kubelet[2427]: I0313 00:41:08.849994 2427 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-8f702bd38e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:41:08.850139 kubelet[2427]: I0313 00:41:08.850137 2427 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 
00:41:08.850254 kubelet[2427]: I0313 00:41:08.850146 2427 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:41:08.850254 kubelet[2427]: I0313 00:41:08.850236 2427 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:41:08.853249 kubelet[2427]: I0313 00:41:08.853223 2427 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:41:08.853388 kubelet[2427]: I0313 00:41:08.853378 2427 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:41:08.853416 kubelet[2427]: I0313 00:41:08.853392 2427 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:41:08.853416 kubelet[2427]: I0313 00:41:08.853416 2427 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:41:08.853454 kubelet[2427]: I0313 00:41:08.853425 2427 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:41:08.857687 kubelet[2427]: E0313 00:41:08.856750 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.185:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:41:08.857687 kubelet[2427]: I0313 00:41:08.856855 2427 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:41:08.857687 kubelet[2427]: I0313 00:41:08.857312 2427 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:41:08.857687 kubelet[2427]: I0313 00:41:08.857335 2427 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:41:08.857687 kubelet[2427]: W0313 00:41:08.857376 
2427 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:41:08.861930 kubelet[2427]: I0313 00:41:08.861915 2427 server.go:1262] "Started kubelet" Mar 13 00:41:08.862080 kubelet[2427]: E0313 00:41:08.862064 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.185:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-8f702bd38e&limit=500&resourceVersion=0\": dial tcp 10.0.0.185:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:41:08.862961 kubelet[2427]: I0313 00:41:08.862939 2427 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:41:08.864499 kubelet[2427]: I0313 00:41:08.864485 2427 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:41:08.867358 kubelet[2427]: I0313 00:41:08.867327 2427 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:41:08.867462 kubelet[2427]: I0313 00:41:08.867452 2427 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:41:08.868988 kubelet[2427]: I0313 00:41:08.868970 2427 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:41:08.869289 kubelet[2427]: I0313 00:41:08.869280 2427 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:41:08.871248 kubelet[2427]: E0313 00:41:08.869978 2427 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.185:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.185:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-8f702bd38e.189c3fc50e85cca5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-8f702bd38e,UID:ci-4459-2-4-n-8f702bd38e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-8f702bd38e,},FirstTimestamp:2026-03-13 00:41:08.861889701 +0000 UTC m=+0.378583988,LastTimestamp:2026-03-13 00:41:08.861889701 +0000 UTC m=+0.378583988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-8f702bd38e,}" Mar 13 00:41:08.871805 kubelet[2427]: I0313 00:41:08.871788 2427 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:41:08.875378 kubelet[2427]: E0313 00:41:08.875361 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:08.875446 kubelet[2427]: I0313 00:41:08.875388 2427 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:41:08.875558 kubelet[2427]: I0313 00:41:08.875519 2427 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:41:08.875558 kubelet[2427]: I0313 00:41:08.875550 2427 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:41:08.876207 kubelet[2427]: E0313 00:41:08.876183 2427 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:41:08.876379 kubelet[2427]: I0313 00:41:08.876365 2427 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:41:08.876453 kubelet[2427]: I0313 00:41:08.876440 2427 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:41:08.876694 kubelet[2427]: E0313 00:41:08.876678 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.185:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.185:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:41:08.877558 kubelet[2427]: E0313 00:41:08.877296 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-8f702bd38e?timeout=10s\": dial tcp 10.0.0.185:6443: connect: connection refused" interval="200ms" Mar 13 00:41:08.877558 kubelet[2427]: I0313 00:41:08.877382 2427 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:41:08.895514 kubelet[2427]: I0313 00:41:08.895484 2427 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:41:08.896671 kubelet[2427]: I0313 00:41:08.896656 2427 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 13 00:41:08.896744 kubelet[2427]: I0313 00:41:08.896738 2427 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:41:08.896814 kubelet[2427]: I0313 00:41:08.896809 2427 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:41:08.896884 kubelet[2427]: E0313 00:41:08.896871 2427 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:41:08.900490 kubelet[2427]: E0313 00:41:08.900449 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.185:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.185:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:41:08.900622 kubelet[2427]: I0313 00:41:08.900613 2427 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:41:08.900663 kubelet[2427]: I0313 00:41:08.900657 2427 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:41:08.900701 kubelet[2427]: I0313 00:41:08.900697 2427 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:41:08.902634 kubelet[2427]: I0313 00:41:08.902624 2427 policy_none.go:49] "None policy: Start" Mar 13 00:41:08.902952 kubelet[2427]: I0313 00:41:08.902779 2427 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:41:08.902952 kubelet[2427]: I0313 00:41:08.902799 2427 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:41:08.904474 kubelet[2427]: I0313 00:41:08.904465 2427 policy_none.go:47] "Start" Mar 13 00:41:08.908167 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:41:08.922659 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 13 00:41:08.925908 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 13 00:41:08.945947 kubelet[2427]: E0313 00:41:08.945828 2427 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:41:08.947071 kubelet[2427]: I0313 00:41:08.947058 2427 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:41:08.947207 kubelet[2427]: I0313 00:41:08.947181 2427 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:41:08.948011 kubelet[2427]: I0313 00:41:08.947743 2427 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:41:08.948591 kubelet[2427]: E0313 00:41:08.948574 2427 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:41:08.948671 kubelet[2427]: E0313 00:41:08.948607 2427 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:09.007620 systemd[1]: Created slice kubepods-burstable-podc8920cd6eb8e85d866502779c811a3cb.slice - libcontainer container kubepods-burstable-podc8920cd6eb8e85d866502779c811a3cb.slice. Mar 13 00:41:09.014806 kubelet[2427]: E0313 00:41:09.014468 2427 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.017840 systemd[1]: Created slice kubepods-burstable-poda772c94d994dbd4e08357f52789e6f2a.slice - libcontainer container kubepods-burstable-poda772c94d994dbd4e08357f52789e6f2a.slice. 
Mar 13 00:41:09.019694 kubelet[2427]: E0313 00:41:09.019674 2427 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.022213 systemd[1]: Created slice kubepods-burstable-pod79b704288966a07d8cf61c2b0098092e.slice - libcontainer container kubepods-burstable-pod79b704288966a07d8cf61c2b0098092e.slice. Mar 13 00:41:09.023945 kubelet[2427]: E0313 00:41:09.023931 2427 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.050392 kubelet[2427]: I0313 00:41:09.050315 2427 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.052814 kubelet[2427]: E0313 00:41:09.050557 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.185:6443/api/v1/nodes\": dial tcp 10.0.0.185:6443: connect: connection refused" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077190 kubelet[2427]: I0313 00:41:09.077159 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" (UID: \"c8920cd6eb8e85d866502779c811a3cb\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077541 kubelet[2427]: I0313 00:41:09.077362 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a772c94d994dbd4e08357f52789e6f2a-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-8f702bd38e\" (UID: \"a772c94d994dbd4e08357f52789e6f2a\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077541 kubelet[2427]: 
I0313 00:41:09.077381 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79b704288966a07d8cf61c2b0098092e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-8f702bd38e\" (UID: \"79b704288966a07d8cf61c2b0098092e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077541 kubelet[2427]: I0313 00:41:09.077402 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" (UID: \"c8920cd6eb8e85d866502779c811a3cb\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077541 kubelet[2427]: I0313 00:41:09.077415 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79b704288966a07d8cf61c2b0098092e-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-8f702bd38e\" (UID: \"79b704288966a07d8cf61c2b0098092e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077541 kubelet[2427]: I0313 00:41:09.077440 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79b704288966a07d8cf61c2b0098092e-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-8f702bd38e\" (UID: \"79b704288966a07d8cf61c2b0098092e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077669 kubelet[2427]: I0313 00:41:09.077457 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" 
(UID: \"c8920cd6eb8e85d866502779c811a3cb\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077669 kubelet[2427]: I0313 00:41:09.077470 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" (UID: \"c8920cd6eb8e85d866502779c811a3cb\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077669 kubelet[2427]: I0313 00:41:09.077484 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" (UID: \"c8920cd6eb8e85d866502779c811a3cb\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.077870 kubelet[2427]: E0313 00:41:09.077854 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-8f702bd38e?timeout=10s\": dial tcp 10.0.0.185:6443: connect: connection refused" interval="400ms" Mar 13 00:41:09.253091 kubelet[2427]: I0313 00:41:09.253006 2427 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.254169 kubelet[2427]: E0313 00:41:09.254140 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.185:6443/api/v1/nodes\": dial tcp 10.0.0.185:6443: connect: connection refused" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.318809 containerd[1633]: time="2026-03-13T00:41:09.318699763Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-8f702bd38e,Uid:c8920cd6eb8e85d866502779c811a3cb,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:09.322639 containerd[1633]: time="2026-03-13T00:41:09.322404324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-8f702bd38e,Uid:a772c94d994dbd4e08357f52789e6f2a,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:09.325713 containerd[1633]: time="2026-03-13T00:41:09.325691158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-8f702bd38e,Uid:79b704288966a07d8cf61c2b0098092e,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:09.478717 kubelet[2427]: E0313 00:41:09.478666 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-8f702bd38e?timeout=10s\": dial tcp 10.0.0.185:6443: connect: connection refused" interval="800ms" Mar 13 00:41:09.656236 kubelet[2427]: I0313 00:41:09.656061 2427 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.656837 kubelet[2427]: E0313 00:41:09.656817 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.185:6443/api/v1/nodes\": dial tcp 10.0.0.185:6443: connect: connection refused" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:09.912632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539093300.mount: Deactivated successfully. 
Mar 13 00:41:09.919059 containerd[1633]: time="2026-03-13T00:41:09.918999603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:41:09.922343 kubelet[2427]: E0313 00:41:09.922314 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.185:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-8f702bd38e&limit=500&resourceVersion=0\": dial tcp 10.0.0.185:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:41:09.925598 containerd[1633]: time="2026-03-13T00:41:09.925560253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Mar 13 00:41:09.927800 containerd[1633]: time="2026-03-13T00:41:09.927757625Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:41:09.929364 containerd[1633]: time="2026-03-13T00:41:09.929321653Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:41:09.930149 containerd[1633]: time="2026-03-13T00:41:09.930113616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:41:09.930759 containerd[1633]: time="2026-03-13T00:41:09.930735460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:41:09.932164 containerd[1633]: time="2026-03-13T00:41:09.932131112Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 13 00:41:09.933014 containerd[1633]: time="2026-03-13T00:41:09.932974056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:41:09.934804 containerd[1633]: time="2026-03-13T00:41:09.934780060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 611.093656ms" Mar 13 00:41:09.935411 containerd[1633]: time="2026-03-13T00:41:09.935371201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 608.295076ms" Mar 13 00:41:09.938566 containerd[1633]: time="2026-03-13T00:41:09.938398722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 618.061133ms" Mar 13 00:41:09.984347 containerd[1633]: time="2026-03-13T00:41:09.984311435Z" level=info msg="connecting to shim 135d448c793ff1da378922312f71913c488b0cd614fc489c5b96c4f3ced0f858" address="unix:///run/containerd/s/121cbfff542d368b2d337fac7dfd221d0c04273ae9b18301aed15936e4c893e5" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:09.986461 
containerd[1633]: time="2026-03-13T00:41:09.986433948Z" level=info msg="connecting to shim df68a98c3c39f85c08d9d47278daa921c7218aa2e32a4a14dd54a2509187ff13" address="unix:///run/containerd/s/9a3f6e013740be169b5a5e8ab52aa4eca98700d2387f5ec5a3e21842501a27d4" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:09.994179 containerd[1633]: time="2026-03-13T00:41:09.993762828Z" level=info msg="connecting to shim 91acbdd7ec8c4587330374a6283cd1874691c43e1271f1bd574e3dae3c0351c8" address="unix:///run/containerd/s/4653ba72ade6c16ed5f4149008427b865630e851109a8c8020042a2281195f36" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:10.010094 systemd[1]: Started cri-containerd-df68a98c3c39f85c08d9d47278daa921c7218aa2e32a4a14dd54a2509187ff13.scope - libcontainer container df68a98c3c39f85c08d9d47278daa921c7218aa2e32a4a14dd54a2509187ff13. Mar 13 00:41:10.024944 systemd[1]: Started cri-containerd-135d448c793ff1da378922312f71913c488b0cd614fc489c5b96c4f3ced0f858.scope - libcontainer container 135d448c793ff1da378922312f71913c488b0cd614fc489c5b96c4f3ced0f858. Mar 13 00:41:10.029461 systemd[1]: Started cri-containerd-91acbdd7ec8c4587330374a6283cd1874691c43e1271f1bd574e3dae3c0351c8.scope - libcontainer container 91acbdd7ec8c4587330374a6283cd1874691c43e1271f1bd574e3dae3c0351c8. 
Mar 13 00:41:10.081272 kubelet[2427]: E0313 00:41:10.081225 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.185:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.185:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:41:10.081526 containerd[1633]: time="2026-03-13T00:41:10.081503660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-8f702bd38e,Uid:c8920cd6eb8e85d866502779c811a3cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"df68a98c3c39f85c08d9d47278daa921c7218aa2e32a4a14dd54a2509187ff13\"" Mar 13 00:41:10.089073 containerd[1633]: time="2026-03-13T00:41:10.088701528Z" level=info msg="CreateContainer within sandbox \"df68a98c3c39f85c08d9d47278daa921c7218aa2e32a4a14dd54a2509187ff13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:41:10.099188 kubelet[2427]: E0313 00:41:10.099151 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.185:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:41:10.102412 containerd[1633]: time="2026-03-13T00:41:10.102370100Z" level=info msg="Container 05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:10.106135 containerd[1633]: time="2026-03-13T00:41:10.106098215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-8f702bd38e,Uid:a772c94d994dbd4e08357f52789e6f2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"135d448c793ff1da378922312f71913c488b0cd614fc489c5b96c4f3ced0f858\"" Mar 13 00:41:10.110527 
containerd[1633]: time="2026-03-13T00:41:10.110505656Z" level=info msg="CreateContainer within sandbox \"135d448c793ff1da378922312f71913c488b0cd614fc489c5b96c4f3ced0f858\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:41:10.111868 containerd[1633]: time="2026-03-13T00:41:10.111849087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-8f702bd38e,Uid:79b704288966a07d8cf61c2b0098092e,Namespace:kube-system,Attempt:0,} returns sandbox id \"91acbdd7ec8c4587330374a6283cd1874691c43e1271f1bd574e3dae3c0351c8\"" Mar 13 00:41:10.112926 containerd[1633]: time="2026-03-13T00:41:10.112902233Z" level=info msg="CreateContainer within sandbox \"df68a98c3c39f85c08d9d47278daa921c7218aa2e32a4a14dd54a2509187ff13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870\"" Mar 13 00:41:10.119801 containerd[1633]: time="2026-03-13T00:41:10.119415760Z" level=info msg="Container a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:10.130903 containerd[1633]: time="2026-03-13T00:41:10.130876860Z" level=info msg="StartContainer for \"05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870\"" Mar 13 00:41:10.132713 containerd[1633]: time="2026-03-13T00:41:10.132689181Z" level=info msg="connecting to shim 05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870" address="unix:///run/containerd/s/9a3f6e013740be169b5a5e8ab52aa4eca98700d2387f5ec5a3e21842501a27d4" protocol=ttrpc version=3 Mar 13 00:41:10.134786 containerd[1633]: time="2026-03-13T00:41:10.134751864Z" level=info msg="CreateContainer within sandbox \"91acbdd7ec8c4587330374a6283cd1874691c43e1271f1bd574e3dae3c0351c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:41:10.140648 containerd[1633]: time="2026-03-13T00:41:10.140515169Z" level=info 
msg="CreateContainer within sandbox \"135d448c793ff1da378922312f71913c488b0cd614fc489c5b96c4f3ced0f858\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6\"" Mar 13 00:41:10.141052 containerd[1633]: time="2026-03-13T00:41:10.140940865Z" level=info msg="StartContainer for \"a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6\"" Mar 13 00:41:10.142040 containerd[1633]: time="2026-03-13T00:41:10.141997504Z" level=info msg="connecting to shim a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6" address="unix:///run/containerd/s/121cbfff542d368b2d337fac7dfd221d0c04273ae9b18301aed15936e4c893e5" protocol=ttrpc version=3 Mar 13 00:41:10.146512 containerd[1633]: time="2026-03-13T00:41:10.145936534Z" level=info msg="Container 620ce956ded697f7eebc507f4c952ab890bcae308aa1c04cdc10268f7fc0b67c: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:10.155408 containerd[1633]: time="2026-03-13T00:41:10.155291298Z" level=info msg="CreateContainer within sandbox \"91acbdd7ec8c4587330374a6283cd1874691c43e1271f1bd574e3dae3c0351c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"620ce956ded697f7eebc507f4c952ab890bcae308aa1c04cdc10268f7fc0b67c\"" Mar 13 00:41:10.155912 systemd[1]: Started cri-containerd-05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870.scope - libcontainer container 05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870. 
Mar 13 00:41:10.156421 containerd[1633]: time="2026-03-13T00:41:10.156401008Z" level=info msg="StartContainer for \"620ce956ded697f7eebc507f4c952ab890bcae308aa1c04cdc10268f7fc0b67c\"" Mar 13 00:41:10.159671 containerd[1633]: time="2026-03-13T00:41:10.159650028Z" level=info msg="connecting to shim 620ce956ded697f7eebc507f4c952ab890bcae308aa1c04cdc10268f7fc0b67c" address="unix:///run/containerd/s/4653ba72ade6c16ed5f4149008427b865630e851109a8c8020042a2281195f36" protocol=ttrpc version=3 Mar 13 00:41:10.165897 systemd[1]: Started cri-containerd-a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6.scope - libcontainer container a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6. Mar 13 00:41:10.181952 systemd[1]: Started cri-containerd-620ce956ded697f7eebc507f4c952ab890bcae308aa1c04cdc10268f7fc0b67c.scope - libcontainer container 620ce956ded697f7eebc507f4c952ab890bcae308aa1c04cdc10268f7fc0b67c. Mar 13 00:41:10.229689 containerd[1633]: time="2026-03-13T00:41:10.229649422Z" level=info msg="StartContainer for \"05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870\" returns successfully" Mar 13 00:41:10.261812 containerd[1633]: time="2026-03-13T00:41:10.261766020Z" level=info msg="StartContainer for \"620ce956ded697f7eebc507f4c952ab890bcae308aa1c04cdc10268f7fc0b67c\" returns successfully" Mar 13 00:41:10.279122 kubelet[2427]: E0313 00:41:10.279088 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-8f702bd38e?timeout=10s\": dial tcp 10.0.0.185:6443: connect: connection refused" interval="1.6s" Mar 13 00:41:10.279349 containerd[1633]: time="2026-03-13T00:41:10.279327223Z" level=info msg="StartContainer for \"a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6\" returns successfully" Mar 13 00:41:10.300229 kubelet[2427]: E0313 00:41:10.300186 2427 reflector.go:205] "Failed to watch" err="failed to 
list *v1.CSIDriver: Get \"https://10.0.0.185:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.185:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:41:10.460267 kubelet[2427]: I0313 00:41:10.460176 2427 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:10.910403 kubelet[2427]: E0313 00:41:10.910377 2427 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:10.915061 kubelet[2427]: E0313 00:41:10.915035 2427 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:10.919943 kubelet[2427]: E0313 00:41:10.919923 2427 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:11.652868 kubelet[2427]: I0313 00:41:11.652833 2427 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:11.652868 kubelet[2427]: E0313 00:41:11.652878 2427 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-n-8f702bd38e\": node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:11.667191 kubelet[2427]: E0313 00:41:11.667160 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:11.767588 kubelet[2427]: E0313 00:41:11.767532 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:11.867680 kubelet[2427]: E0313 00:41:11.867636 2427 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:11.924808 kubelet[2427]: E0313 00:41:11.924639 2427 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:11.925100 kubelet[2427]: E0313 00:41:11.924685 2427 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:11.925100 kubelet[2427]: E0313 00:41:11.924999 2427 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:11.968386 kubelet[2427]: E0313 00:41:11.968316 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:12.069138 kubelet[2427]: E0313 00:41:12.069085 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:12.169986 kubelet[2427]: E0313 00:41:12.169937 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:12.270667 kubelet[2427]: E0313 00:41:12.270615 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:12.371378 kubelet[2427]: E0313 00:41:12.371335 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:12.472320 kubelet[2427]: E0313 00:41:12.472250 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 
00:41:12.573017 kubelet[2427]: E0313 00:41:12.572905 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:12.673113 kubelet[2427]: E0313 00:41:12.673061 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:12.774129 kubelet[2427]: E0313 00:41:12.774074 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-8f702bd38e\" not found" Mar 13 00:41:12.855867 kubelet[2427]: I0313 00:41:12.855370 2427 apiserver.go:52] "Watching apiserver" Mar 13 00:41:12.876021 kubelet[2427]: I0313 00:41:12.875902 2427 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:41:12.877756 kubelet[2427]: I0313 00:41:12.877703 2427 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:12.887591 kubelet[2427]: I0313 00:41:12.887553 2427 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:12.893099 kubelet[2427]: I0313 00:41:12.893069 2427 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:13.976915 systemd[1]: Reload requested from client PID 2709 ('systemctl') (unit session-7.scope)... Mar 13 00:41:13.977298 systemd[1]: Reloading... Mar 13 00:41:14.058964 zram_generator::config[2748]: No configuration found. Mar 13 00:41:14.252926 systemd[1]: Reloading finished in 275 ms. Mar 13 00:41:14.281363 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:41:14.296991 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:41:14.297369 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:41:14.297512 systemd[1]: kubelet.service: Consumed 647ms CPU time, 123.5M memory peak. Mar 13 00:41:14.299520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:41:14.425091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:41:14.436115 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:41:14.476318 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:41:14.476318 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:41:14.476636 kubelet[2803]: I0313 00:41:14.476346 2803 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:41:14.481829 kubelet[2803]: I0313 00:41:14.481409 2803 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:41:14.481829 kubelet[2803]: I0313 00:41:14.481427 2803 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:41:14.481829 kubelet[2803]: I0313 00:41:14.481450 2803 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:41:14.481829 kubelet[2803]: I0313 00:41:14.481456 2803 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 00:41:14.481829 kubelet[2803]: I0313 00:41:14.481618 2803 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:41:14.482825 kubelet[2803]: I0313 00:41:14.482812 2803 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:41:14.484559 kubelet[2803]: I0313 00:41:14.484543 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:41:14.488257 kubelet[2803]: I0313 00:41:14.488230 2803 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:41:14.491298 kubelet[2803]: I0313 00:41:14.491285 2803 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 13 00:41:14.491535 kubelet[2803]: I0313 00:41:14.491518 2803 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:41:14.491711 kubelet[2803]: I0313 00:41:14.491575 2803 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459-2-4-n-8f702bd38e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:41:14.491819 kubelet[2803]: I0313 00:41:14.491812 2803 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:41:14.491862 kubelet[2803]: I0313 00:41:14.491857 2803 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:41:14.491911 kubelet[2803]: I0313 00:41:14.491906 2803 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:41:14.492103 kubelet[2803]: I0313 00:41:14.492095 2803 
state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:41:14.492265 kubelet[2803]: I0313 00:41:14.492258 2803 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:41:14.492320 kubelet[2803]: I0313 00:41:14.492315 2803 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:41:14.492363 kubelet[2803]: I0313 00:41:14.492359 2803 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:41:14.492406 kubelet[2803]: I0313 00:41:14.492401 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:41:14.494783 kubelet[2803]: I0313 00:41:14.493265 2803 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:41:14.494783 kubelet[2803]: I0313 00:41:14.493751 2803 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:41:14.494783 kubelet[2803]: I0313 00:41:14.493812 2803 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:41:14.496157 kubelet[2803]: I0313 00:41:14.496145 2803 server.go:1262] "Started kubelet" Mar 13 00:41:14.497630 kubelet[2803]: I0313 00:41:14.497616 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:41:14.509804 kubelet[2803]: I0313 00:41:14.509711 2803 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:41:14.510453 kubelet[2803]: I0313 00:41:14.510440 2803 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:41:14.515187 kubelet[2803]: E0313 00:41:14.515170 2803 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:41:14.515369 kubelet[2803]: I0313 00:41:14.515352 2803 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:41:14.515431 kubelet[2803]: I0313 00:41:14.515423 2803 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:41:14.515675 kubelet[2803]: I0313 00:41:14.515665 2803 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:41:14.516813 kubelet[2803]: I0313 00:41:14.516005 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:41:14.517597 kubelet[2803]: I0313 00:41:14.517563 2803 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:41:14.520016 kubelet[2803]: I0313 00:41:14.520002 2803 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:41:14.520097 kubelet[2803]: I0313 00:41:14.520090 2803 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:41:14.521730 kubelet[2803]: I0313 00:41:14.521717 2803 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:41:14.522323 kubelet[2803]: I0313 00:41:14.521868 2803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:41:14.523372 kubelet[2803]: I0313 00:41:14.523352 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 00:41:14.524393 kubelet[2803]: I0313 00:41:14.524379 2803 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:41:14.532689 kubelet[2803]: I0313 00:41:14.532642 2803 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 13 00:41:14.532689 kubelet[2803]: I0313 00:41:14.532675 2803 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:41:14.532689 kubelet[2803]: I0313 00:41:14.532696 2803 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:41:14.532870 kubelet[2803]: E0313 00:41:14.532733 2803 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:41:14.573740 kubelet[2803]: I0313 00:41:14.573720 2803 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:41:14.574630 kubelet[2803]: I0313 00:41:14.574618 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:41:14.574717 kubelet[2803]: I0313 00:41:14.574710 2803 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:41:14.574875 kubelet[2803]: I0313 00:41:14.574866 2803 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:41:14.574928 kubelet[2803]: I0313 00:41:14.574915 2803 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:41:14.574960 kubelet[2803]: I0313 00:41:14.574956 2803 policy_none.go:49] "None policy: Start" Mar 13 00:41:14.575006 kubelet[2803]: I0313 00:41:14.575001 2803 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:41:14.575038 kubelet[2803]: I0313 00:41:14.575033 2803 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:41:14.575143 kubelet[2803]: I0313 00:41:14.575137 2803 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 00:41:14.575180 kubelet[2803]: I0313 00:41:14.575177 2803 policy_none.go:47] "Start" Mar 13 00:41:14.579050 kubelet[2803]: E0313 00:41:14.579031 2803 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:41:14.579566 kubelet[2803]: I0313 00:41:14.579200 
2803 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:41:14.579566 kubelet[2803]: I0313 00:41:14.579211 2803 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:41:14.579566 kubelet[2803]: I0313 00:41:14.579535 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:41:14.580232 kubelet[2803]: E0313 00:41:14.580215 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:41:14.634050 kubelet[2803]: I0313 00:41:14.634017 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.634410 kubelet[2803]: I0313 00:41:14.634181 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.634410 kubelet[2803]: I0313 00:41:14.634364 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.643161 kubelet[2803]: E0313 00:41:14.643130 2803 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-8f702bd38e\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.643882 kubelet[2803]: E0313 00:41:14.643847 2803 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.644196 kubelet[2803]: E0313 00:41:14.644172 2803 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-8f702bd38e\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.685882 kubelet[2803]: I0313 00:41:14.685802 2803 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.693271 kubelet[2803]: I0313 00:41:14.693246 2803 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.693427 kubelet[2803]: I0313 00:41:14.693318 2803 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.721766 kubelet[2803]: I0313 00:41:14.721533 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a772c94d994dbd4e08357f52789e6f2a-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-8f702bd38e\" (UID: \"a772c94d994dbd4e08357f52789e6f2a\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.721766 kubelet[2803]: I0313 00:41:14.721568 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79b704288966a07d8cf61c2b0098092e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-8f702bd38e\" (UID: \"79b704288966a07d8cf61c2b0098092e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.721766 kubelet[2803]: I0313 00:41:14.721585 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" (UID: \"c8920cd6eb8e85d866502779c811a3cb\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.721766 kubelet[2803]: I0313 00:41:14.721600 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" (UID: \"c8920cd6eb8e85d866502779c811a3cb\") " 
pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.721766 kubelet[2803]: I0313 00:41:14.721617 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" (UID: \"c8920cd6eb8e85d866502779c811a3cb\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.722013 kubelet[2803]: I0313 00:41:14.721630 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79b704288966a07d8cf61c2b0098092e-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-8f702bd38e\" (UID: \"79b704288966a07d8cf61c2b0098092e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.722013 kubelet[2803]: I0313 00:41:14.721643 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79b704288966a07d8cf61c2b0098092e-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-8f702bd38e\" (UID: \"79b704288966a07d8cf61c2b0098092e\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.722013 kubelet[2803]: I0313 00:41:14.721656 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" (UID: \"c8920cd6eb8e85d866502779c811a3cb\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.722013 kubelet[2803]: I0313 00:41:14.721688 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/c8920cd6eb8e85d866502779c811a3cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-8f702bd38e\" (UID: \"c8920cd6eb8e85d866502779c811a3cb\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:14.978035 sudo[2839]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 13 00:41:14.978261 sudo[2839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 13 00:41:15.312325 sudo[2839]: pam_unix(sudo:session): session closed for user root Mar 13 00:41:15.499441 kubelet[2803]: I0313 00:41:15.499400 2803 apiserver.go:52] "Watching apiserver" Mar 13 00:41:15.520841 kubelet[2803]: I0313 00:41:15.520806 2803 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:41:15.562923 kubelet[2803]: I0313 00:41:15.562164 2803 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:15.569038 kubelet[2803]: E0313 00:41:15.569003 2803 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-8f702bd38e\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-8f702bd38e" Mar 13 00:41:15.588248 kubelet[2803]: I0313 00:41:15.588201 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-8f702bd38e" podStartSLOduration=3.588185459 podStartE2EDuration="3.588185459s" podCreationTimestamp="2026-03-13 00:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:15.57967988 +0000 UTC m=+1.140760609" watchObservedRunningTime="2026-03-13 00:41:15.588185459 +0000 UTC m=+1.149266180" Mar 13 00:41:15.598583 kubelet[2803]: I0313 00:41:15.597376 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4459-2-4-n-8f702bd38e" podStartSLOduration=3.597358362 podStartE2EDuration="3.597358362s" podCreationTimestamp="2026-03-13 00:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:15.588512689 +0000 UTC m=+1.149593396" watchObservedRunningTime="2026-03-13 00:41:15.597358362 +0000 UTC m=+1.158439069" Mar 13 00:41:15.609145 kubelet[2803]: I0313 00:41:15.608994 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-8f702bd38e" podStartSLOduration=3.608874267 podStartE2EDuration="3.608874267s" podCreationTimestamp="2026-03-13 00:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:15.597506915 +0000 UTC m=+1.158587644" watchObservedRunningTime="2026-03-13 00:41:15.608874267 +0000 UTC m=+1.169954975" Mar 13 00:41:16.901419 sudo[1864]: pam_unix(sudo:session): session closed for user root Mar 13 00:41:16.996108 sshd[1863]: Connection closed by 4.153.228.146 port 41914 Mar 13 00:41:16.996652 sshd-session[1860]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:17.000759 systemd-logind[1610]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:41:17.001191 systemd[1]: sshd@6-10.0.0.185:22-4.153.228.146:41914.service: Deactivated successfully. Mar 13 00:41:17.003113 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:41:17.003398 systemd[1]: session-7.scope: Consumed 5.491s CPU time, 271.6M memory peak. Mar 13 00:41:17.005135 systemd-logind[1610]: Removed session 7. 
Mar 13 00:41:20.464486 kubelet[2803]: I0313 00:41:20.464345 2803 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:41:20.465625 containerd[1633]: time="2026-03-13T00:41:20.465055145Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:41:20.466212 kubelet[2803]: I0313 00:41:20.465330 2803 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:41:21.247164 systemd[1]: Created slice kubepods-besteffort-pod97637a9f_3e35_460d_a9aa_e4268fb45a30.slice - libcontainer container kubepods-besteffort-pod97637a9f_3e35_460d_a9aa_e4268fb45a30.slice. Mar 13 00:41:21.257478 kubelet[2803]: I0313 00:41:21.257445 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97637a9f-3e35-460d-a9aa-e4268fb45a30-xtables-lock\") pod \"kube-proxy-62872\" (UID: \"97637a9f-3e35-460d-a9aa-e4268fb45a30\") " pod="kube-system/kube-proxy-62872" Mar 13 00:41:21.257478 kubelet[2803]: I0313 00:41:21.257475 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97637a9f-3e35-460d-a9aa-e4268fb45a30-lib-modules\") pod \"kube-proxy-62872\" (UID: \"97637a9f-3e35-460d-a9aa-e4268fb45a30\") " pod="kube-system/kube-proxy-62872" Mar 13 00:41:21.257640 kubelet[2803]: I0313 00:41:21.257496 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-hostproc\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.257640 kubelet[2803]: I0313 00:41:21.257511 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-cgroup\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.257640 kubelet[2803]: I0313 00:41:21.257523 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-etc-cni-netd\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.257640 kubelet[2803]: I0313 00:41:21.257537 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-config-path\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.257640 kubelet[2803]: I0313 00:41:21.257552 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-host-proc-sys-net\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.257640 kubelet[2803]: I0313 00:41:21.257566 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97637a9f-3e35-460d-a9aa-e4268fb45a30-kube-proxy\") pod \"kube-proxy-62872\" (UID: \"97637a9f-3e35-460d-a9aa-e4268fb45a30\") " pod="kube-system/kube-proxy-62872" Mar 13 00:41:21.257767 kubelet[2803]: I0313 00:41:21.257579 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5gcc\" (UniqueName: 
\"kubernetes.io/projected/97637a9f-3e35-460d-a9aa-e4268fb45a30-kube-api-access-z5gcc\") pod \"kube-proxy-62872\" (UID: \"97637a9f-3e35-460d-a9aa-e4268fb45a30\") " pod="kube-system/kube-proxy-62872" Mar 13 00:41:21.257767 kubelet[2803]: I0313 00:41:21.257592 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cni-path\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.257767 kubelet[2803]: I0313 00:41:21.257604 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-lib-modules\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.257767 kubelet[2803]: I0313 00:41:21.257617 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-xtables-lock\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.257767 kubelet[2803]: I0313 00:41:21.257630 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-host-proc-sys-kernel\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.257767 kubelet[2803]: I0313 00:41:21.257650 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8a0da43-54f3-49fc-81da-8cd1f986a554-hubble-tls\") pod \"cilium-287wp\" (UID: 
\"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.259609 kubelet[2803]: I0313 00:41:21.257669 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-bpf-maps\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.259609 kubelet[2803]: I0313 00:41:21.257682 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8a0da43-54f3-49fc-81da-8cd1f986a554-clustermesh-secrets\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.259609 kubelet[2803]: I0313 00:41:21.257694 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jll7\" (UniqueName: \"kubernetes.io/projected/a8a0da43-54f3-49fc-81da-8cd1f986a554-kube-api-access-4jll7\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.259609 kubelet[2803]: I0313 00:41:21.257711 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-run\") pod \"cilium-287wp\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " pod="kube-system/cilium-287wp" Mar 13 00:41:21.265749 systemd[1]: Created slice kubepods-burstable-poda8a0da43_54f3_49fc_81da_8cd1f986a554.slice - libcontainer container kubepods-burstable-poda8a0da43_54f3_49fc_81da_8cd1f986a554.slice. 
Mar 13 00:41:21.566137 containerd[1633]: time="2026-03-13T00:41:21.566077806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62872,Uid:97637a9f-3e35-460d-a9aa-e4268fb45a30,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:21.572347 containerd[1633]: time="2026-03-13T00:41:21.572125936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-287wp,Uid:a8a0da43-54f3-49fc-81da-8cd1f986a554,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:21.597178 containerd[1633]: time="2026-03-13T00:41:21.597140841Z" level=info msg="connecting to shim f601a7da8fd5feeacfc228aa12e9863bc256cc87616b703d72c517abe296baa4" address="unix:///run/containerd/s/90a1a3692753dee10a7fae2e79a4b5775abf8fdbbfe6e5f6ec225544cdfaff83" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:21.601196 containerd[1633]: time="2026-03-13T00:41:21.601124949Z" level=info msg="connecting to shim cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8" address="unix:///run/containerd/s/5f8c3ff8dfc1fc41241d94d6aef741093d8f884250d6be15809b71a94db75226" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:21.631008 systemd[1]: Started cri-containerd-cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8.scope - libcontainer container cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8. Mar 13 00:41:21.648922 systemd[1]: Started cri-containerd-f601a7da8fd5feeacfc228aa12e9863bc256cc87616b703d72c517abe296baa4.scope - libcontainer container f601a7da8fd5feeacfc228aa12e9863bc256cc87616b703d72c517abe296baa4. Mar 13 00:41:21.649526 systemd[1]: Created slice kubepods-besteffort-pode27a53ec_f9b4_4365_8fbc_0f07990d0ae2.slice - libcontainer container kubepods-besteffort-pode27a53ec_f9b4_4365_8fbc_0f07990d0ae2.slice. 
Mar 13 00:41:21.660459 kubelet[2803]: I0313 00:41:21.660209 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e27a53ec-f9b4-4365-8fbc-0f07990d0ae2-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-lgpx8\" (UID: \"e27a53ec-f9b4-4365-8fbc-0f07990d0ae2\") " pod="kube-system/cilium-operator-6f9c7c5859-lgpx8" Mar 13 00:41:21.661160 kubelet[2803]: I0313 00:41:21.661144 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9t48\" (UniqueName: \"kubernetes.io/projected/e27a53ec-f9b4-4365-8fbc-0f07990d0ae2-kube-api-access-f9t48\") pod \"cilium-operator-6f9c7c5859-lgpx8\" (UID: \"e27a53ec-f9b4-4365-8fbc-0f07990d0ae2\") " pod="kube-system/cilium-operator-6f9c7c5859-lgpx8" Mar 13 00:41:21.691294 containerd[1633]: time="2026-03-13T00:41:21.691256149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-287wp,Uid:a8a0da43-54f3-49fc-81da-8cd1f986a554,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\"" Mar 13 00:41:21.695921 containerd[1633]: time="2026-03-13T00:41:21.695811072Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 13 00:41:21.708185 containerd[1633]: time="2026-03-13T00:41:21.708145171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62872,Uid:97637a9f-3e35-460d-a9aa-e4268fb45a30,Namespace:kube-system,Attempt:0,} returns sandbox id \"f601a7da8fd5feeacfc228aa12e9863bc256cc87616b703d72c517abe296baa4\"" Mar 13 00:41:21.715989 containerd[1633]: time="2026-03-13T00:41:21.715943693Z" level=info msg="CreateContainer within sandbox \"f601a7da8fd5feeacfc228aa12e9863bc256cc87616b703d72c517abe296baa4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:41:21.726107 containerd[1633]: 
time="2026-03-13T00:41:21.725839955Z" level=info msg="Container 02ec06f1a9b469c6ba0ae6bab65ed9bb38ca25d4ef4c8828a2025a8d27a9896b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:21.740051 containerd[1633]: time="2026-03-13T00:41:21.740007905Z" level=info msg="CreateContainer within sandbox \"f601a7da8fd5feeacfc228aa12e9863bc256cc87616b703d72c517abe296baa4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02ec06f1a9b469c6ba0ae6bab65ed9bb38ca25d4ef4c8828a2025a8d27a9896b\"" Mar 13 00:41:21.741133 containerd[1633]: time="2026-03-13T00:41:21.741110043Z" level=info msg="StartContainer for \"02ec06f1a9b469c6ba0ae6bab65ed9bb38ca25d4ef4c8828a2025a8d27a9896b\"" Mar 13 00:41:21.742490 containerd[1633]: time="2026-03-13T00:41:21.742468921Z" level=info msg="connecting to shim 02ec06f1a9b469c6ba0ae6bab65ed9bb38ca25d4ef4c8828a2025a8d27a9896b" address="unix:///run/containerd/s/90a1a3692753dee10a7fae2e79a4b5775abf8fdbbfe6e5f6ec225544cdfaff83" protocol=ttrpc version=3 Mar 13 00:41:21.762016 systemd[1]: Started cri-containerd-02ec06f1a9b469c6ba0ae6bab65ed9bb38ca25d4ef4c8828a2025a8d27a9896b.scope - libcontainer container 02ec06f1a9b469c6ba0ae6bab65ed9bb38ca25d4ef4c8828a2025a8d27a9896b. 
Mar 13 00:41:21.833793 containerd[1633]: time="2026-03-13T00:41:21.833676512Z" level=info msg="StartContainer for \"02ec06f1a9b469c6ba0ae6bab65ed9bb38ca25d4ef4c8828a2025a8d27a9896b\" returns successfully" Mar 13 00:41:21.955391 containerd[1633]: time="2026-03-13T00:41:21.955338371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-lgpx8,Uid:e27a53ec-f9b4-4365-8fbc-0f07990d0ae2,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:21.975570 containerd[1633]: time="2026-03-13T00:41:21.975492009Z" level=info msg="connecting to shim b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630" address="unix:///run/containerd/s/b20533d087ffd2de6aba83a23dbe6515d49265e315da774cdefca26c157858e8" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:21.995948 systemd[1]: Started cri-containerd-b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630.scope - libcontainer container b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630. Mar 13 00:41:22.054289 containerd[1633]: time="2026-03-13T00:41:22.054180932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-lgpx8,Uid:e27a53ec-f9b4-4365-8fbc-0f07990d0ae2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\"" Mar 13 00:41:22.598643 kubelet[2803]: I0313 00:41:22.598597 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-62872" podStartSLOduration=1.598581937 podStartE2EDuration="1.598581937s" podCreationTimestamp="2026-03-13 00:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:22.598279811 +0000 UTC m=+8.159360537" watchObservedRunningTime="2026-03-13 00:41:22.598581937 +0000 UTC m=+8.159662666" Mar 13 00:41:22.994174 update_engine[1616]: I20260313 00:41:22.993826 1616 update_attempter.cc:509] Updating boot 
flags... Mar 13 00:41:25.528698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968729904.mount: Deactivated successfully. Mar 13 00:41:27.158803 containerd[1633]: time="2026-03-13T00:41:27.158699071Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:27.160453 containerd[1633]: time="2026-03-13T00:41:27.160349414Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 13 00:41:27.161793 containerd[1633]: time="2026-03-13T00:41:27.161734767Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:27.162979 containerd[1633]: time="2026-03-13T00:41:27.162831279Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.466582687s" Mar 13 00:41:27.162979 containerd[1633]: time="2026-03-13T00:41:27.162869542Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 13 00:41:27.164136 containerd[1633]: time="2026-03-13T00:41:27.164116269Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 13 00:41:27.168358 containerd[1633]: time="2026-03-13T00:41:27.168239983Z" 
level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:41:27.179804 containerd[1633]: time="2026-03-13T00:41:27.177553655Z" level=info msg="Container 8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:27.186899 containerd[1633]: time="2026-03-13T00:41:27.186851031Z" level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\"" Mar 13 00:41:27.187791 containerd[1633]: time="2026-03-13T00:41:27.187740277Z" level=info msg="StartContainer for \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\"" Mar 13 00:41:27.188791 containerd[1633]: time="2026-03-13T00:41:27.188754778Z" level=info msg="connecting to shim 8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca" address="unix:///run/containerd/s/5f8c3ff8dfc1fc41241d94d6aef741093d8f884250d6be15809b71a94db75226" protocol=ttrpc version=3 Mar 13 00:41:27.218243 systemd[1]: Started cri-containerd-8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca.scope - libcontainer container 8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca. Mar 13 00:41:27.247733 containerd[1633]: time="2026-03-13T00:41:27.247695329Z" level=info msg="StartContainer for \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\" returns successfully" Mar 13 00:41:27.258049 systemd[1]: cri-containerd-8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca.scope: Deactivated successfully. 
Mar 13 00:41:27.260863 containerd[1633]: time="2026-03-13T00:41:27.260826059Z" level=info msg="received container exit event container_id:\"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\" id:\"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\" pid:3238 exited_at:{seconds:1773362487 nanos:260447843}" Mar 13 00:41:27.284721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca-rootfs.mount: Deactivated successfully. Mar 13 00:41:27.600480 containerd[1633]: time="2026-03-13T00:41:27.600015423Z" level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:41:27.607590 containerd[1633]: time="2026-03-13T00:41:27.607550983Z" level=info msg="Container b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:27.617549 containerd[1633]: time="2026-03-13T00:41:27.617141038Z" level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\"" Mar 13 00:41:27.618982 containerd[1633]: time="2026-03-13T00:41:27.618961070Z" level=info msg="StartContainer for \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\"" Mar 13 00:41:27.621171 containerd[1633]: time="2026-03-13T00:41:27.621131327Z" level=info msg="connecting to shim b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f" address="unix:///run/containerd/s/5f8c3ff8dfc1fc41241d94d6aef741093d8f884250d6be15809b71a94db75226" protocol=ttrpc version=3 Mar 13 00:41:27.642937 systemd[1]: Started cri-containerd-b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f.scope - libcontainer 
container b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f. Mar 13 00:41:27.674161 containerd[1633]: time="2026-03-13T00:41:27.674127140Z" level=info msg="StartContainer for \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\" returns successfully" Mar 13 00:41:27.684019 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:41:27.684225 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:41:27.685017 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:41:27.687457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:41:27.692021 systemd[1]: cri-containerd-b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f.scope: Deactivated successfully. Mar 13 00:41:27.694327 containerd[1633]: time="2026-03-13T00:41:27.693837739Z" level=info msg="received container exit event container_id:\"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\" id:\"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\" pid:3283 exited_at:{seconds:1773362487 nanos:693446650}" Mar 13 00:41:27.714572 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:41:28.611607 containerd[1633]: time="2026-03-13T00:41:28.611557573Z" level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 00:41:28.632710 containerd[1633]: time="2026-03-13T00:41:28.632668167Z" level=info msg="Container 1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:28.634264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1597762024.mount: Deactivated successfully. 
Mar 13 00:41:28.646796 containerd[1633]: time="2026-03-13T00:41:28.646663727Z" level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\"" Mar 13 00:41:28.647762 containerd[1633]: time="2026-03-13T00:41:28.647599135Z" level=info msg="StartContainer for \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\"" Mar 13 00:41:28.649067 containerd[1633]: time="2026-03-13T00:41:28.649042033Z" level=info msg="connecting to shim 1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9" address="unix:///run/containerd/s/5f8c3ff8dfc1fc41241d94d6aef741093d8f884250d6be15809b71a94db75226" protocol=ttrpc version=3 Mar 13 00:41:28.675950 systemd[1]: Started cri-containerd-1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9.scope - libcontainer container 1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9. Mar 13 00:41:28.736908 containerd[1633]: time="2026-03-13T00:41:28.736872344Z" level=info msg="StartContainer for \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\" returns successfully" Mar 13 00:41:28.737611 systemd[1]: cri-containerd-1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9.scope: Deactivated successfully. Mar 13 00:41:28.740203 containerd[1633]: time="2026-03-13T00:41:28.740161517Z" level=info msg="received container exit event container_id:\"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\" id:\"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\" pid:3331 exited_at:{seconds:1773362488 nanos:739881360}" Mar 13 00:41:28.763632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9-rootfs.mount: Deactivated successfully. 
Mar 13 00:41:29.147065 containerd[1633]: time="2026-03-13T00:41:29.146477698Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:29.147319 containerd[1633]: time="2026-03-13T00:41:29.147302640Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 13 00:41:29.148264 containerd[1633]: time="2026-03-13T00:41:29.148250058Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:41:29.149821 containerd[1633]: time="2026-03-13T00:41:29.149786760Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.985418596s" Mar 13 00:41:29.149921 containerd[1633]: time="2026-03-13T00:41:29.149898327Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 13 00:41:29.160035 containerd[1633]: time="2026-03-13T00:41:29.160000784Z" level=info msg="CreateContainer within sandbox \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 13 00:41:29.167802 containerd[1633]: time="2026-03-13T00:41:29.167296616Z" level=info msg="Container 
ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:29.173241 containerd[1633]: time="2026-03-13T00:41:29.173210821Z" level=info msg="CreateContainer within sandbox \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\"" Mar 13 00:41:29.173964 containerd[1633]: time="2026-03-13T00:41:29.173948793Z" level=info msg="StartContainer for \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\"" Mar 13 00:41:29.174847 containerd[1633]: time="2026-03-13T00:41:29.174828245Z" level=info msg="connecting to shim ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a" address="unix:///run/containerd/s/b20533d087ffd2de6aba83a23dbe6515d49265e315da774cdefca26c157858e8" protocol=ttrpc version=3 Mar 13 00:41:29.197946 systemd[1]: Started cri-containerd-ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a.scope - libcontainer container ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a. Mar 13 00:41:29.227034 containerd[1633]: time="2026-03-13T00:41:29.226993463Z" level=info msg="StartContainer for \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\" returns successfully" Mar 13 00:41:29.615486 containerd[1633]: time="2026-03-13T00:41:29.614979344Z" level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 13 00:41:29.627098 containerd[1633]: time="2026-03-13T00:41:29.627067448Z" level=info msg="Container 83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:29.630672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396295703.mount: Deactivated successfully. 
Mar 13 00:41:29.636963 containerd[1633]: time="2026-03-13T00:41:29.636912506Z" level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\"" Mar 13 00:41:29.637586 containerd[1633]: time="2026-03-13T00:41:29.637569369Z" level=info msg="StartContainer for \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\"" Mar 13 00:41:29.638471 containerd[1633]: time="2026-03-13T00:41:29.638390667Z" level=info msg="connecting to shim 83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db" address="unix:///run/containerd/s/5f8c3ff8dfc1fc41241d94d6aef741093d8f884250d6be15809b71a94db75226" protocol=ttrpc version=3 Mar 13 00:41:29.672926 systemd[1]: Started cri-containerd-83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db.scope - libcontainer container 83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db. Mar 13 00:41:29.690314 kubelet[2803]: I0313 00:41:29.690265 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-lgpx8" podStartSLOduration=1.594727977 podStartE2EDuration="8.690248602s" podCreationTimestamp="2026-03-13 00:41:21 +0000 UTC" firstStartedPulling="2026-03-13 00:41:22.057075419 +0000 UTC m=+7.618156127" lastFinishedPulling="2026-03-13 00:41:29.152596045 +0000 UTC m=+14.713676752" observedRunningTime="2026-03-13 00:41:29.623858208 +0000 UTC m=+15.184938938" watchObservedRunningTime="2026-03-13 00:41:29.690248602 +0000 UTC m=+15.251329321" Mar 13 00:41:29.731928 systemd[1]: cri-containerd-83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db.scope: Deactivated successfully. 
Mar 13 00:41:29.732216 containerd[1633]: time="2026-03-13T00:41:29.732195530Z" level=info msg="StartContainer for \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\" returns successfully" Mar 13 00:41:29.735209 containerd[1633]: time="2026-03-13T00:41:29.735119357Z" level=info msg="received container exit event container_id:\"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\" id:\"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\" pid:3422 exited_at:{seconds:1773362489 nanos:734878483}" Mar 13 00:41:29.761112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db-rootfs.mount: Deactivated successfully. Mar 13 00:41:30.627963 containerd[1633]: time="2026-03-13T00:41:30.627921433Z" level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 00:41:30.640793 containerd[1633]: time="2026-03-13T00:41:30.638756078Z" level=info msg="Container 5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:30.648550 containerd[1633]: time="2026-03-13T00:41:30.648519906Z" level=info msg="CreateContainer within sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\"" Mar 13 00:41:30.650260 containerd[1633]: time="2026-03-13T00:41:30.650237998Z" level=info msg="StartContainer for \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\"" Mar 13 00:41:30.651097 containerd[1633]: time="2026-03-13T00:41:30.651076504Z" level=info msg="connecting to shim 5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6" 
address="unix:///run/containerd/s/5f8c3ff8dfc1fc41241d94d6aef741093d8f884250d6be15809b71a94db75226" protocol=ttrpc version=3 Mar 13 00:41:30.672043 systemd[1]: Started cri-containerd-5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6.scope - libcontainer container 5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6. Mar 13 00:41:30.722406 containerd[1633]: time="2026-03-13T00:41:30.722375080Z" level=info msg="StartContainer for \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\" returns successfully" Mar 13 00:41:30.797148 kubelet[2803]: I0313 00:41:30.797123 2803 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 00:41:30.829587 systemd[1]: Created slice kubepods-burstable-pod30f6e4c6_61e2_458d_8d39_1d3da62b10ae.slice - libcontainer container kubepods-burstable-pod30f6e4c6_61e2_458d_8d39_1d3da62b10ae.slice. Mar 13 00:41:30.835818 systemd[1]: Created slice kubepods-burstable-pod61669788_a83e_4775_a46e_42534d6c7f57.slice - libcontainer container kubepods-burstable-pod61669788_a83e_4775_a46e_42534d6c7f57.slice. 
Mar 13 00:41:30.925158 kubelet[2803]: I0313 00:41:30.924866 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30f6e4c6-61e2-458d-8d39-1d3da62b10ae-config-volume\") pod \"coredns-66bc5c9577-8fqc5\" (UID: \"30f6e4c6-61e2-458d-8d39-1d3da62b10ae\") " pod="kube-system/coredns-66bc5c9577-8fqc5" Mar 13 00:41:30.925158 kubelet[2803]: I0313 00:41:30.924909 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95w2w\" (UniqueName: \"kubernetes.io/projected/30f6e4c6-61e2-458d-8d39-1d3da62b10ae-kube-api-access-95w2w\") pod \"coredns-66bc5c9577-8fqc5\" (UID: \"30f6e4c6-61e2-458d-8d39-1d3da62b10ae\") " pod="kube-system/coredns-66bc5c9577-8fqc5" Mar 13 00:41:30.925158 kubelet[2803]: I0313 00:41:30.924933 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwtp\" (UniqueName: \"kubernetes.io/projected/61669788-a83e-4775-a46e-42534d6c7f57-kube-api-access-6rwtp\") pod \"coredns-66bc5c9577-9lsjz\" (UID: \"61669788-a83e-4775-a46e-42534d6c7f57\") " pod="kube-system/coredns-66bc5c9577-9lsjz" Mar 13 00:41:30.925541 kubelet[2803]: I0313 00:41:30.925417 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61669788-a83e-4775-a46e-42534d6c7f57-config-volume\") pod \"coredns-66bc5c9577-9lsjz\" (UID: \"61669788-a83e-4775-a46e-42534d6c7f57\") " pod="kube-system/coredns-66bc5c9577-9lsjz" Mar 13 00:41:31.137799 containerd[1633]: time="2026-03-13T00:41:31.137739258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fqc5,Uid:30f6e4c6-61e2-458d-8d39-1d3da62b10ae,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:31.140643 containerd[1633]: time="2026-03-13T00:41:31.140496739Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-9lsjz,Uid:61669788-a83e-4775-a46e-42534d6c7f57,Namespace:kube-system,Attempt:0,}" Mar 13 00:41:31.637241 kubelet[2803]: I0313 00:41:31.636704 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-287wp" podStartSLOduration=5.166950376 podStartE2EDuration="10.636690475s" podCreationTimestamp="2026-03-13 00:41:21 +0000 UTC" firstStartedPulling="2026-03-13 00:41:21.694156257 +0000 UTC m=+7.255236965" lastFinishedPulling="2026-03-13 00:41:27.163896356 +0000 UTC m=+12.724977064" observedRunningTime="2026-03-13 00:41:31.635885271 +0000 UTC m=+17.196965999" watchObservedRunningTime="2026-03-13 00:41:31.636690475 +0000 UTC m=+17.197771204" Mar 13 00:41:32.725111 systemd-networkd[1507]: cilium_host: Link UP Mar 13 00:41:32.725208 systemd-networkd[1507]: cilium_net: Link UP Mar 13 00:41:32.725313 systemd-networkd[1507]: cilium_net: Gained carrier Mar 13 00:41:32.725408 systemd-networkd[1507]: cilium_host: Gained carrier Mar 13 00:41:32.822005 systemd-networkd[1507]: cilium_vxlan: Link UP Mar 13 00:41:32.822013 systemd-networkd[1507]: cilium_vxlan: Gained carrier Mar 13 00:41:33.024810 kernel: NET: Registered PF_ALG protocol family Mar 13 00:41:33.100916 systemd-networkd[1507]: cilium_host: Gained IPv6LL Mar 13 00:41:33.590892 systemd-networkd[1507]: lxc_health: Link UP Mar 13 00:41:33.608645 systemd-networkd[1507]: lxc_health: Gained carrier Mar 13 00:41:33.677849 systemd-networkd[1507]: cilium_net: Gained IPv6LL Mar 13 00:41:34.181824 kernel: eth0: renamed from tmpeeb2a Mar 13 00:41:34.180658 systemd-networkd[1507]: lxc259a2223c9a9: Link UP Mar 13 00:41:34.186804 systemd-networkd[1507]: lxc259a2223c9a9: Gained carrier Mar 13 00:41:34.198172 kernel: eth0: renamed from tmp6ceb8 Mar 13 00:41:34.201717 systemd-networkd[1507]: lxc92c139c09355: Link UP Mar 13 00:41:34.202922 systemd-networkd[1507]: lxc92c139c09355: Gained carrier Mar 13 00:41:34.765256 systemd-networkd[1507]: cilium_vxlan: 
Gained IPv6LL Mar 13 00:41:35.341082 systemd-networkd[1507]: lxc_health: Gained IPv6LL Mar 13 00:41:35.532949 systemd-networkd[1507]: lxc92c139c09355: Gained IPv6LL Mar 13 00:41:36.044987 systemd-networkd[1507]: lxc259a2223c9a9: Gained IPv6LL Mar 13 00:41:37.580318 containerd[1633]: time="2026-03-13T00:41:37.580259212Z" level=info msg="connecting to shim eeb2a526d0f852e828f366c19d1470b76bdd005fcdadf473df465f045b43a2f5" address="unix:///run/containerd/s/6403b1cd04bc12bb73027146159ad4e39c13961310bc62b53e0afeaece772541" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:37.587679 containerd[1633]: time="2026-03-13T00:41:37.587647926Z" level=info msg="connecting to shim 6ceb845dcf8ec7357fe189afd1165004838ac4320cea6a2386c397921d3b56e9" address="unix:///run/containerd/s/1fd36a30a6f58fa9b73a12eb99455006b3c1217a47139c8a640b9c87fcdd7f20" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:41:37.635909 systemd[1]: Started cri-containerd-6ceb845dcf8ec7357fe189afd1165004838ac4320cea6a2386c397921d3b56e9.scope - libcontainer container 6ceb845dcf8ec7357fe189afd1165004838ac4320cea6a2386c397921d3b56e9. Mar 13 00:41:37.637213 systemd[1]: Started cri-containerd-eeb2a526d0f852e828f366c19d1470b76bdd005fcdadf473df465f045b43a2f5.scope - libcontainer container eeb2a526d0f852e828f366c19d1470b76bdd005fcdadf473df465f045b43a2f5. 
Mar 13 00:41:37.699451 containerd[1633]: time="2026-03-13T00:41:37.699310717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8fqc5,Uid:30f6e4c6-61e2-458d-8d39-1d3da62b10ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"eeb2a526d0f852e828f366c19d1470b76bdd005fcdadf473df465f045b43a2f5\"" Mar 13 00:41:37.707520 containerd[1633]: time="2026-03-13T00:41:37.707488120Z" level=info msg="CreateContainer within sandbox \"eeb2a526d0f852e828f366c19d1470b76bdd005fcdadf473df465f045b43a2f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:41:37.711891 containerd[1633]: time="2026-03-13T00:41:37.711845197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9lsjz,Uid:61669788-a83e-4775-a46e-42534d6c7f57,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ceb845dcf8ec7357fe189afd1165004838ac4320cea6a2386c397921d3b56e9\"" Mar 13 00:41:37.719107 containerd[1633]: time="2026-03-13T00:41:37.719080918Z" level=info msg="Container 9f3f67a5da5609a8a3502fc4d8afee86bc7cc35cdb10f93be679126ffa6645a9: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:37.720482 containerd[1633]: time="2026-03-13T00:41:37.719554824Z" level=info msg="CreateContainer within sandbox \"6ceb845dcf8ec7357fe189afd1165004838ac4320cea6a2386c397921d3b56e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:41:37.724363 containerd[1633]: time="2026-03-13T00:41:37.724340807Z" level=info msg="CreateContainer within sandbox \"eeb2a526d0f852e828f366c19d1470b76bdd005fcdadf473df465f045b43a2f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f3f67a5da5609a8a3502fc4d8afee86bc7cc35cdb10f93be679126ffa6645a9\"" Mar 13 00:41:37.724989 containerd[1633]: time="2026-03-13T00:41:37.724972861Z" level=info msg="StartContainer for \"9f3f67a5da5609a8a3502fc4d8afee86bc7cc35cdb10f93be679126ffa6645a9\"" Mar 13 00:41:37.725615 containerd[1633]: time="2026-03-13T00:41:37.725596433Z" level=info 
msg="connecting to shim 9f3f67a5da5609a8a3502fc4d8afee86bc7cc35cdb10f93be679126ffa6645a9" address="unix:///run/containerd/s/6403b1cd04bc12bb73027146159ad4e39c13961310bc62b53e0afeaece772541" protocol=ttrpc version=3 Mar 13 00:41:37.737675 containerd[1633]: time="2026-03-13T00:41:37.737644065Z" level=info msg="Container dc3ad8737503adf19db7090f2fafffdb90b335119d03f4a258e79e5ac57f9490: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:41:37.745225 containerd[1633]: time="2026-03-13T00:41:37.745191216Z" level=info msg="CreateContainer within sandbox \"6ceb845dcf8ec7357fe189afd1165004838ac4320cea6a2386c397921d3b56e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc3ad8737503adf19db7090f2fafffdb90b335119d03f4a258e79e5ac57f9490\"" Mar 13 00:41:37.745908 systemd[1]: Started cri-containerd-9f3f67a5da5609a8a3502fc4d8afee86bc7cc35cdb10f93be679126ffa6645a9.scope - libcontainer container 9f3f67a5da5609a8a3502fc4d8afee86bc7cc35cdb10f93be679126ffa6645a9. Mar 13 00:41:37.746329 containerd[1633]: time="2026-03-13T00:41:37.746112240Z" level=info msg="StartContainer for \"dc3ad8737503adf19db7090f2fafffdb90b335119d03f4a258e79e5ac57f9490\"" Mar 13 00:41:37.747797 containerd[1633]: time="2026-03-13T00:41:37.747706352Z" level=info msg="connecting to shim dc3ad8737503adf19db7090f2fafffdb90b335119d03f4a258e79e5ac57f9490" address="unix:///run/containerd/s/1fd36a30a6f58fa9b73a12eb99455006b3c1217a47139c8a640b9c87fcdd7f20" protocol=ttrpc version=3 Mar 13 00:41:37.779108 systemd[1]: Started cri-containerd-dc3ad8737503adf19db7090f2fafffdb90b335119d03f4a258e79e5ac57f9490.scope - libcontainer container dc3ad8737503adf19db7090f2fafffdb90b335119d03f4a258e79e5ac57f9490. 
Mar 13 00:41:37.789082 containerd[1633]: time="2026-03-13T00:41:37.789053563Z" level=info msg="StartContainer for \"9f3f67a5da5609a8a3502fc4d8afee86bc7cc35cdb10f93be679126ffa6645a9\" returns successfully" Mar 13 00:41:37.822002 containerd[1633]: time="2026-03-13T00:41:37.821958828Z" level=info msg="StartContainer for \"dc3ad8737503adf19db7090f2fafffdb90b335119d03f4a258e79e5ac57f9490\" returns successfully" Mar 13 00:41:38.565685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206318865.mount: Deactivated successfully. Mar 13 00:41:38.671817 kubelet[2803]: I0313 00:41:38.671571 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8fqc5" podStartSLOduration=17.671557669 podStartE2EDuration="17.671557669s" podCreationTimestamp="2026-03-13 00:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:38.670882743 +0000 UTC m=+24.231963452" watchObservedRunningTime="2026-03-13 00:41:38.671557669 +0000 UTC m=+24.232638397" Mar 13 00:41:38.673038 kubelet[2803]: I0313 00:41:38.672735 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9lsjz" podStartSLOduration=17.672723483 podStartE2EDuration="17.672723483s" podCreationTimestamp="2026-03-13 00:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:41:38.659293286 +0000 UTC m=+24.220374014" watchObservedRunningTime="2026-03-13 00:41:38.672723483 +0000 UTC m=+24.233804214" Mar 13 00:41:44.754663 kubelet[2803]: I0313 00:41:44.754530 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:42:44.059047 update_engine[1616]: I20260313 00:42:44.058364 1616 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 13 00:42:44.059047 
update_engine[1616]: I20260313 00:42:44.058416 1616 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 13 00:42:44.059047 update_engine[1616]: I20260313 00:42:44.058558 1616 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 13 00:42:44.059737 update_engine[1616]: I20260313 00:42:44.059717 1616 omaha_request_params.cc:62] Current group set to stable Mar 13 00:42:44.059889 update_engine[1616]: I20260313 00:42:44.059875 1616 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 13 00:42:44.059930 update_engine[1616]: I20260313 00:42:44.059922 1616 update_attempter.cc:643] Scheduling an action processor start. Mar 13 00:42:44.059977 update_engine[1616]: I20260313 00:42:44.059968 1616 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 13 00:42:44.060083 update_engine[1616]: I20260313 00:42:44.060070 1616 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 13 00:42:44.060166 update_engine[1616]: I20260313 00:42:44.060155 1616 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 13 00:42:44.060791 update_engine[1616]: I20260313 00:42:44.060256 1616 omaha_request_action.cc:272] Request: Mar 13 00:42:44.060791 update_engine[1616]: Mar 13 00:42:44.060791 update_engine[1616]: Mar 13 00:42:44.060791 update_engine[1616]: Mar 13 00:42:44.060791 update_engine[1616]: Mar 13 00:42:44.060791 update_engine[1616]: Mar 13 00:42:44.060791 update_engine[1616]: Mar 13 00:42:44.060791 update_engine[1616]: Mar 13 00:42:44.060791 update_engine[1616]: Mar 13 00:42:44.060791 update_engine[1616]: I20260313 00:42:44.060266 1616 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:42:44.061537 update_engine[1616]: I20260313 00:42:44.061519 1616 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:42:44.061903 locksmithd[1642]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" 
NewVersion=0.0.0 NewSize=0 Mar 13 00:42:44.062265 update_engine[1616]: I20260313 00:42:44.062243 1616 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 13 00:42:44.069858 update_engine[1616]: E20260313 00:42:44.069725 1616 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:42:44.069858 update_engine[1616]: I20260313 00:42:44.069830 1616 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 13 00:42:54.001820 update_engine[1616]: I20260313 00:42:54.001369 1616 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:42:54.001820 update_engine[1616]: I20260313 00:42:54.001459 1616 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:42:54.001820 update_engine[1616]: I20260313 00:42:54.001741 1616 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 13 00:42:54.007584 update_engine[1616]: E20260313 00:42:54.007545 1616 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:42:54.007685 update_engine[1616]: I20260313 00:42:54.007619 1616 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 13 00:43:04.001417 update_engine[1616]: I20260313 00:43:04.001353 1616 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:43:04.001728 update_engine[1616]: I20260313 00:43:04.001431 1616 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:43:04.001753 update_engine[1616]: I20260313 00:43:04.001726 1616 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 13 00:43:04.007596 update_engine[1616]: E20260313 00:43:04.007542 1616 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:43:04.007707 update_engine[1616]: I20260313 00:43:04.007632 1616 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 13 00:43:14.000962 update_engine[1616]: I20260313 00:43:14.000895 1616 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:43:14.000962 update_engine[1616]: I20260313 00:43:14.000972 1616 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:43:14.001325 update_engine[1616]: I20260313 00:43:14.001250 1616 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 13 00:43:14.007381 update_engine[1616]: E20260313 00:43:14.007335 1616 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:43:14.007492 update_engine[1616]: I20260313 00:43:14.007416 1616 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 13 00:43:14.007492 update_engine[1616]: I20260313 00:43:14.007423 1616 omaha_request_action.cc:617] Omaha request response: Mar 13 00:43:14.007538 update_engine[1616]: E20260313 00:43:14.007503 1616 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 13 00:43:14.007538 update_engine[1616]: I20260313 00:43:14.007521 1616 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 13 00:43:14.007538 update_engine[1616]: I20260313 00:43:14.007525 1616 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 13 00:43:14.007538 update_engine[1616]: I20260313 00:43:14.007529 1616 update_attempter.cc:306] Processing Done. Mar 13 00:43:14.007616 update_engine[1616]: E20260313 00:43:14.007541 1616 update_attempter.cc:619] Update failed. 
Mar 13 00:43:14.007616 update_engine[1616]: I20260313 00:43:14.007546 1616 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 13 00:43:14.007616 update_engine[1616]: I20260313 00:43:14.007551 1616 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 13 00:43:14.007616 update_engine[1616]: I20260313 00:43:14.007556 1616 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 13 00:43:14.007699 update_engine[1616]: I20260313 00:43:14.007615 1616 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 13 00:43:14.007699 update_engine[1616]: I20260313 00:43:14.007637 1616 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 13 00:43:14.007699 update_engine[1616]: I20260313 00:43:14.007640 1616 omaha_request_action.cc:272] Request: Mar 13 00:43:14.007699 update_engine[1616]: Mar 13 00:43:14.007699 update_engine[1616]: Mar 13 00:43:14.007699 update_engine[1616]: Mar 13 00:43:14.007699 update_engine[1616]: Mar 13 00:43:14.007699 update_engine[1616]: Mar 13 00:43:14.007699 update_engine[1616]: Mar 13 00:43:14.007699 update_engine[1616]: I20260313 00:43:14.007647 1616 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 13 00:43:14.007699 update_engine[1616]: I20260313 00:43:14.007663 1616 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 13 00:43:14.008154 update_engine[1616]: I20260313 00:43:14.007905 1616 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 13 00:43:14.008194 locksmithd[1642]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 13 00:43:14.014277 update_engine[1616]: E20260313 00:43:14.014233 1616 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 13 00:43:14.014367 update_engine[1616]: I20260313 00:43:14.014303 1616 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 13 00:43:14.014367 update_engine[1616]: I20260313 00:43:14.014310 1616 omaha_request_action.cc:617] Omaha request response: Mar 13 00:43:14.014367 update_engine[1616]: I20260313 00:43:14.014317 1616 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 13 00:43:14.014367 update_engine[1616]: I20260313 00:43:14.014320 1616 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 13 00:43:14.014367 update_engine[1616]: I20260313 00:43:14.014325 1616 update_attempter.cc:306] Processing Done. Mar 13 00:43:14.014367 update_engine[1616]: I20260313 00:43:14.014331 1616 update_attempter.cc:310] Error event sent. Mar 13 00:43:14.014367 update_engine[1616]: I20260313 00:43:14.014338 1616 update_check_scheduler.cc:74] Next update check in 44m44s Mar 13 00:43:14.014799 locksmithd[1642]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 13 00:44:09.011928 systemd[1]: Started sshd@7-10.0.0.185:22-4.153.228.146:53810.service - OpenSSH per-connection server daemon (4.153.228.146:53810). Mar 13 00:44:09.519333 sshd[4145]: Accepted publickey for core from 4.153.228.146 port 53810 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:09.520631 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:09.524491 systemd-logind[1610]: New session 8 of user core. 
Mar 13 00:44:09.531915 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:44:09.880042 sshd[4148]: Connection closed by 4.153.228.146 port 53810 Mar 13 00:44:09.880397 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:09.883870 systemd-logind[1610]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:44:09.884531 systemd[1]: sshd@7-10.0.0.185:22-4.153.228.146:53810.service: Deactivated successfully. Mar 13 00:44:09.886192 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:44:09.887330 systemd-logind[1610]: Removed session 8. Mar 13 00:44:14.983980 systemd[1]: Started sshd@8-10.0.0.185:22-4.153.228.146:53822.service - OpenSSH per-connection server daemon (4.153.228.146:53822). Mar 13 00:44:15.489820 sshd[4162]: Accepted publickey for core from 4.153.228.146 port 53822 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:15.490848 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:15.494942 systemd-logind[1610]: New session 9 of user core. Mar 13 00:44:15.501064 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 00:44:15.825090 sshd[4165]: Connection closed by 4.153.228.146 port 53822 Mar 13 00:44:15.825569 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:15.829334 systemd[1]: sshd@8-10.0.0.185:22-4.153.228.146:53822.service: Deactivated successfully. Mar 13 00:44:15.831326 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:44:15.832627 systemd-logind[1610]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:44:15.834447 systemd-logind[1610]: Removed session 9. Mar 13 00:44:20.932332 systemd[1]: Started sshd@9-10.0.0.185:22-4.153.228.146:43506.service - OpenSSH per-connection server daemon (4.153.228.146:43506). 
Mar 13 00:44:21.446102 sshd[4177]: Accepted publickey for core from 4.153.228.146 port 43506 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:21.447338 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:21.452554 systemd-logind[1610]: New session 10 of user core. Mar 13 00:44:21.458943 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:44:21.796911 sshd[4180]: Connection closed by 4.153.228.146 port 43506 Mar 13 00:44:21.797381 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:21.801649 systemd[1]: sshd@9-10.0.0.185:22-4.153.228.146:43506.service: Deactivated successfully. Mar 13 00:44:21.805646 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:44:21.808240 systemd-logind[1610]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:44:21.809206 systemd-logind[1610]: Removed session 10. Mar 13 00:44:26.901246 systemd[1]: Started sshd@10-10.0.0.185:22-4.153.228.146:43514.service - OpenSSH per-connection server daemon (4.153.228.146:43514). Mar 13 00:44:27.405814 sshd[4195]: Accepted publickey for core from 4.153.228.146 port 43514 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:27.406728 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:27.410295 systemd-logind[1610]: New session 11 of user core. Mar 13 00:44:27.420015 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:44:27.741034 sshd[4198]: Connection closed by 4.153.228.146 port 43514 Mar 13 00:44:27.740880 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:27.744807 systemd[1]: sshd@10-10.0.0.185:22-4.153.228.146:43514.service: Deactivated successfully. Mar 13 00:44:27.746850 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:44:27.751462 systemd-logind[1610]: Session 11 logged out. 
Waiting for processes to exit. Mar 13 00:44:27.752535 systemd-logind[1610]: Removed session 11. Mar 13 00:44:27.846956 systemd[1]: Started sshd@11-10.0.0.185:22-4.153.228.146:43522.service - OpenSSH per-connection server daemon (4.153.228.146:43522). Mar 13 00:44:28.356611 sshd[4211]: Accepted publickey for core from 4.153.228.146 port 43522 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:28.358075 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:28.362860 systemd-logind[1610]: New session 12 of user core. Mar 13 00:44:28.374272 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:44:28.737732 sshd[4214]: Connection closed by 4.153.228.146 port 43522 Mar 13 00:44:28.738232 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:28.742439 systemd[1]: sshd@11-10.0.0.185:22-4.153.228.146:43522.service: Deactivated successfully. Mar 13 00:44:28.744495 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 00:44:28.745821 systemd-logind[1610]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:44:28.747443 systemd-logind[1610]: Removed session 12. Mar 13 00:44:28.842898 systemd[1]: Started sshd@12-10.0.0.185:22-4.153.228.146:43532.service - OpenSSH per-connection server daemon (4.153.228.146:43532). Mar 13 00:44:29.353946 sshd[4223]: Accepted publickey for core from 4.153.228.146 port 43532 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:29.355493 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:29.359816 systemd-logind[1610]: New session 13 of user core. Mar 13 00:44:29.366929 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 13 00:44:29.691281 sshd[4226]: Connection closed by 4.153.228.146 port 43532 Mar 13 00:44:29.691892 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:29.695135 systemd[1]: sshd@12-10.0.0.185:22-4.153.228.146:43532.service: Deactivated successfully. Mar 13 00:44:29.696940 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:44:29.697642 systemd-logind[1610]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:44:29.698963 systemd-logind[1610]: Removed session 13. Mar 13 00:44:34.795011 systemd[1]: Started sshd@13-10.0.0.185:22-4.153.228.146:40296.service - OpenSSH per-connection server daemon (4.153.228.146:40296). Mar 13 00:44:35.302622 sshd[4238]: Accepted publickey for core from 4.153.228.146 port 40296 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:35.303699 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:35.308404 systemd-logind[1610]: New session 14 of user core. Mar 13 00:44:35.310917 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 00:44:35.639101 sshd[4241]: Connection closed by 4.153.228.146 port 40296 Mar 13 00:44:35.639643 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:35.643496 systemd[1]: sshd@13-10.0.0.185:22-4.153.228.146:40296.service: Deactivated successfully. Mar 13 00:44:35.645563 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:44:35.647561 systemd-logind[1610]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:44:35.649040 systemd-logind[1610]: Removed session 14. Mar 13 00:44:35.744243 systemd[1]: Started sshd@14-10.0.0.185:22-4.153.228.146:40308.service - OpenSSH per-connection server daemon (4.153.228.146:40308). 
Mar 13 00:44:36.251183 sshd[4253]: Accepted publickey for core from 4.153.228.146 port 40308 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:36.252549 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:36.256470 systemd-logind[1610]: New session 15 of user core. Mar 13 00:44:36.262158 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:44:36.628834 sshd[4256]: Connection closed by 4.153.228.146 port 40308 Mar 13 00:44:36.628726 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:36.633317 systemd[1]: sshd@14-10.0.0.185:22-4.153.228.146:40308.service: Deactivated successfully. Mar 13 00:44:36.635537 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:44:36.636745 systemd-logind[1610]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:44:36.638251 systemd-logind[1610]: Removed session 15. Mar 13 00:44:36.736031 systemd[1]: Started sshd@15-10.0.0.185:22-4.153.228.146:40318.service - OpenSSH per-connection server daemon (4.153.228.146:40318). Mar 13 00:44:37.261197 sshd[4266]: Accepted publickey for core from 4.153.228.146 port 40318 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:37.262176 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:37.265740 systemd-logind[1610]: New session 16 of user core. Mar 13 00:44:37.272906 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 00:44:38.048835 sshd[4269]: Connection closed by 4.153.228.146 port 40318 Mar 13 00:44:38.049275 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:38.052720 systemd[1]: sshd@15-10.0.0.185:22-4.153.228.146:40318.service: Deactivated successfully. Mar 13 00:44:38.054108 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:44:38.055255 systemd-logind[1610]: Session 16 logged out. 
Waiting for processes to exit. Mar 13 00:44:38.056408 systemd-logind[1610]: Removed session 16. Mar 13 00:44:38.151886 systemd[1]: Started sshd@16-10.0.0.185:22-4.153.228.146:40330.service - OpenSSH per-connection server daemon (4.153.228.146:40330). Mar 13 00:44:38.656809 sshd[4284]: Accepted publickey for core from 4.153.228.146 port 40330 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:38.657254 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:38.661149 systemd-logind[1610]: New session 17 of user core. Mar 13 00:44:38.666947 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 13 00:44:39.087590 sshd[4287]: Connection closed by 4.153.228.146 port 40330 Mar 13 00:44:39.088135 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:39.091815 systemd[1]: sshd@16-10.0.0.185:22-4.153.228.146:40330.service: Deactivated successfully. Mar 13 00:44:39.093253 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:44:39.093891 systemd-logind[1610]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:44:39.094949 systemd-logind[1610]: Removed session 17. Mar 13 00:44:39.194949 systemd[1]: Started sshd@17-10.0.0.185:22-4.153.228.146:53794.service - OpenSSH per-connection server daemon (4.153.228.146:53794). Mar 13 00:44:39.699692 sshd[4299]: Accepted publickey for core from 4.153.228.146 port 53794 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:39.700845 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:39.704509 systemd-logind[1610]: New session 18 of user core. Mar 13 00:44:39.709956 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 13 00:44:40.032836 sshd[4302]: Connection closed by 4.153.228.146 port 53794 Mar 13 00:44:40.034396 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:40.037101 systemd-logind[1610]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:44:40.037914 systemd[1]: sshd@17-10.0.0.185:22-4.153.228.146:53794.service: Deactivated successfully. Mar 13 00:44:40.039818 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:44:40.041978 systemd-logind[1610]: Removed session 18. Mar 13 00:44:45.138920 systemd[1]: Started sshd@18-10.0.0.185:22-4.153.228.146:53804.service - OpenSSH per-connection server daemon (4.153.228.146:53804). Mar 13 00:44:45.646304 sshd[4316]: Accepted publickey for core from 4.153.228.146 port 53804 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:45.647552 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:45.652129 systemd-logind[1610]: New session 19 of user core. Mar 13 00:44:45.658928 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:44:45.985455 sshd[4319]: Connection closed by 4.153.228.146 port 53804 Mar 13 00:44:45.986040 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:45.989094 systemd[1]: sshd@18-10.0.0.185:22-4.153.228.146:53804.service: Deactivated successfully. Mar 13 00:44:45.991002 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:44:45.991732 systemd-logind[1610]: Session 19 logged out. Waiting for processes to exit. Mar 13 00:44:45.992746 systemd-logind[1610]: Removed session 19. Mar 13 00:44:51.090589 systemd[1]: Started sshd@19-10.0.0.185:22-4.153.228.146:36720.service - OpenSSH per-connection server daemon (4.153.228.146:36720). 
Mar 13 00:44:51.602178 sshd[4330]: Accepted publickey for core from 4.153.228.146 port 36720 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:51.603467 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:51.609812 systemd-logind[1610]: New session 20 of user core. Mar 13 00:44:51.614939 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 13 00:44:51.937400 sshd[4333]: Connection closed by 4.153.228.146 port 36720 Mar 13 00:44:51.937797 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:51.940510 systemd-logind[1610]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:44:51.940737 systemd[1]: sshd@19-10.0.0.185:22-4.153.228.146:36720.service: Deactivated successfully. Mar 13 00:44:51.942121 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:44:51.943809 systemd-logind[1610]: Removed session 20. Mar 13 00:44:52.044071 systemd[1]: Started sshd@20-10.0.0.185:22-4.153.228.146:36734.service - OpenSSH per-connection server daemon (4.153.228.146:36734). Mar 13 00:44:52.550847 sshd[4345]: Accepted publickey for core from 4.153.228.146 port 36734 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:52.551828 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:52.555271 systemd-logind[1610]: New session 21 of user core. Mar 13 00:44:52.562903 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 13 00:44:54.147455 containerd[1633]: time="2026-03-13T00:44:54.147313420Z" level=info msg="StopContainer for \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\" with timeout 30 (s)" Mar 13 00:44:54.148923 containerd[1633]: time="2026-03-13T00:44:54.148906004Z" level=info msg="Stop container \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\" with signal terminated" Mar 13 00:44:54.167576 containerd[1633]: time="2026-03-13T00:44:54.167541508Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:44:54.169550 systemd[1]: cri-containerd-ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a.scope: Deactivated successfully. Mar 13 00:44:54.173885 containerd[1633]: time="2026-03-13T00:44:54.173686136Z" level=info msg="received container exit event container_id:\"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\" id:\"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\" pid:3390 exited_at:{seconds:1773362694 nanos:173449865}" Mar 13 00:44:54.178689 containerd[1633]: time="2026-03-13T00:44:54.178626483Z" level=info msg="StopContainer for \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\" with timeout 2 (s)" Mar 13 00:44:54.178994 containerd[1633]: time="2026-03-13T00:44:54.178975872Z" level=info msg="Stop container \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\" with signal terminated" Mar 13 00:44:54.186982 systemd-networkd[1507]: lxc_health: Link DOWN Mar 13 00:44:54.186988 systemd-networkd[1507]: lxc_health: Lost carrier Mar 13 00:44:54.212226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a-rootfs.mount: Deactivated successfully. 
Mar 13 00:44:54.215961 systemd[1]: cri-containerd-5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6.scope: Deactivated successfully. Mar 13 00:44:54.216749 systemd[1]: cri-containerd-5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6.scope: Consumed 5.882s CPU time, 128.3M memory peak, 128K read from disk, 13.3M written to disk. Mar 13 00:44:54.217020 containerd[1633]: time="2026-03-13T00:44:54.216995751Z" level=info msg="received container exit event container_id:\"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\" id:\"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\" pid:3460 exited_at:{seconds:1773362694 nanos:216297840}" Mar 13 00:44:54.235225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6-rootfs.mount: Deactivated successfully. Mar 13 00:44:54.240723 containerd[1633]: time="2026-03-13T00:44:54.240693263Z" level=info msg="StopContainer for \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\" returns successfully" Mar 13 00:44:54.242683 containerd[1633]: time="2026-03-13T00:44:54.242647777Z" level=info msg="StopPodSandbox for \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\"" Mar 13 00:44:54.242929 containerd[1633]: time="2026-03-13T00:44:54.242916168Z" level=info msg="Container to stop \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:44:54.243000 containerd[1633]: time="2026-03-13T00:44:54.242991862Z" level=info msg="Container to stop \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:44:54.243107 containerd[1633]: time="2026-03-13T00:44:54.243031055Z" level=info msg="Container to stop \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\" must be in running or unknown 
state, current state \"CONTAINER_EXITED\"" Mar 13 00:44:54.243107 containerd[1633]: time="2026-03-13T00:44:54.243061321Z" level=info msg="Container to stop \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:44:54.243107 containerd[1633]: time="2026-03-13T00:44:54.243068915Z" level=info msg="Container to stop \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:44:54.243183 containerd[1633]: time="2026-03-13T00:44:54.242697765Z" level=info msg="StopContainer for \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\" returns successfully" Mar 13 00:44:54.243875 containerd[1633]: time="2026-03-13T00:44:54.243849137Z" level=info msg="StopPodSandbox for \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\"" Mar 13 00:44:54.244185 containerd[1633]: time="2026-03-13T00:44:54.244145752Z" level=info msg="Container to stop \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 13 00:44:54.250197 systemd[1]: cri-containerd-cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8.scope: Deactivated successfully. Mar 13 00:44:54.251949 systemd[1]: cri-containerd-b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630.scope: Deactivated successfully. 
Mar 13 00:44:54.255300 containerd[1633]: time="2026-03-13T00:44:54.255272091Z" level=info msg="received sandbox exit event container_id:\"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" id:\"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" exit_status:137 exited_at:{seconds:1773362694 nanos:255102721}" monitor_name=podsandbox Mar 13 00:44:54.255457 containerd[1633]: time="2026-03-13T00:44:54.255410563Z" level=info msg="received sandbox exit event container_id:\"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" id:\"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" exit_status:137 exited_at:{seconds:1773362694 nanos:255100636}" monitor_name=podsandbox Mar 13 00:44:54.276528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8-rootfs.mount: Deactivated successfully. Mar 13 00:44:54.282087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630-rootfs.mount: Deactivated successfully. 
Mar 13 00:44:54.287184 containerd[1633]: time="2026-03-13T00:44:54.287136622Z" level=info msg="shim disconnected" id=cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8 namespace=k8s.io Mar 13 00:44:54.287184 containerd[1633]: time="2026-03-13T00:44:54.287162807Z" level=warning msg="cleaning up after shim disconnected" id=cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8 namespace=k8s.io Mar 13 00:44:54.288107 containerd[1633]: time="2026-03-13T00:44:54.287168865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:44:54.294187 containerd[1633]: time="2026-03-13T00:44:54.294110973Z" level=info msg="shim disconnected" id=b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630 namespace=k8s.io Mar 13 00:44:54.294187 containerd[1633]: time="2026-03-13T00:44:54.294136548Z" level=warning msg="cleaning up after shim disconnected" id=b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630 namespace=k8s.io Mar 13 00:44:54.294187 containerd[1633]: time="2026-03-13T00:44:54.294142915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 00:44:54.305983 containerd[1633]: time="2026-03-13T00:44:54.305954065Z" level=info msg="TearDown network for sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" successfully" Mar 13 00:44:54.305983 containerd[1633]: time="2026-03-13T00:44:54.305978582Z" level=info msg="StopPodSandbox for \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" returns successfully" Mar 13 00:44:54.306811 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8-shm.mount: Deactivated successfully. 
Mar 13 00:44:54.307921 containerd[1633]: time="2026-03-13T00:44:54.307892983Z" level=info msg="received sandbox container exit event sandbox_id:\"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" exit_status:137 exited_at:{seconds:1773362694 nanos:255102721}" monitor_name=criService Mar 13 00:44:54.316850 containerd[1633]: time="2026-03-13T00:44:54.316829432Z" level=info msg="received sandbox container exit event sandbox_id:\"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" exit_status:137 exited_at:{seconds:1773362694 nanos:255100636}" monitor_name=criService Mar 13 00:44:54.317265 containerd[1633]: time="2026-03-13T00:44:54.316884759Z" level=info msg="TearDown network for sandbox \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" successfully" Mar 13 00:44:54.317265 containerd[1633]: time="2026-03-13T00:44:54.317264825Z" level=info msg="StopPodSandbox for \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" returns successfully" Mar 13 00:44:54.452787 kubelet[2803]: I0313 00:44:54.452679 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-run\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453485 kubelet[2803]: I0313 00:44:54.453131 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-bpf-maps\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453485 kubelet[2803]: I0313 00:44:54.453163 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-config-path\") pod 
\"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453485 kubelet[2803]: I0313 00:44:54.453195 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9t48\" (UniqueName: \"kubernetes.io/projected/e27a53ec-f9b4-4365-8fbc-0f07990d0ae2-kube-api-access-f9t48\") pod \"e27a53ec-f9b4-4365-8fbc-0f07990d0ae2\" (UID: \"e27a53ec-f9b4-4365-8fbc-0f07990d0ae2\") " Mar 13 00:44:54.453485 kubelet[2803]: I0313 00:44:54.453213 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8a0da43-54f3-49fc-81da-8cd1f986a554-hubble-tls\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453485 kubelet[2803]: I0313 00:44:54.453232 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8a0da43-54f3-49fc-81da-8cd1f986a554-clustermesh-secrets\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453485 kubelet[2803]: I0313 00:44:54.453246 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-etc-cni-netd\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453675 kubelet[2803]: I0313 00:44:54.453266 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-host-proc-sys-kernel\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453675 kubelet[2803]: I0313 00:44:54.453278 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cni-path\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453675 kubelet[2803]: I0313 00:44:54.453291 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-hostproc\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453675 kubelet[2803]: I0313 00:44:54.453304 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-host-proc-sys-net\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453675 kubelet[2803]: I0313 00:44:54.453316 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-xtables-lock\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453675 kubelet[2803]: I0313 00:44:54.453337 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jll7\" (UniqueName: \"kubernetes.io/projected/a8a0da43-54f3-49fc-81da-8cd1f986a554-kube-api-access-4jll7\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453804 kubelet[2803]: I0313 00:44:54.453353 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e27a53ec-f9b4-4365-8fbc-0f07990d0ae2-cilium-config-path\") pod \"e27a53ec-f9b4-4365-8fbc-0f07990d0ae2\" (UID: \"e27a53ec-f9b4-4365-8fbc-0f07990d0ae2\") " Mar 13 00:44:54.453804 
kubelet[2803]: I0313 00:44:54.453366 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-cgroup\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453804 kubelet[2803]: I0313 00:44:54.453378 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-lib-modules\") pod \"a8a0da43-54f3-49fc-81da-8cd1f986a554\" (UID: \"a8a0da43-54f3-49fc-81da-8cd1f986a554\") " Mar 13 00:44:54.453804 kubelet[2803]: I0313 00:44:54.453439 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.453804 kubelet[2803]: I0313 00:44:54.453470 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.453902 kubelet[2803]: I0313 00:44:54.453489 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.453902 kubelet[2803]: I0313 00:44:54.453526 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cni-path" (OuterVolumeSpecName: "cni-path") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.454313 kubelet[2803]: I0313 00:44:54.453966 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-hostproc" (OuterVolumeSpecName: "hostproc") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.454313 kubelet[2803]: I0313 00:44:54.453984 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.454313 kubelet[2803]: I0313 00:44:54.453995 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.456797 kubelet[2803]: I0313 00:44:54.456014 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27a53ec-f9b4-4365-8fbc-0f07990d0ae2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e27a53ec-f9b4-4365-8fbc-0f07990d0ae2" (UID: "e27a53ec-f9b4-4365-8fbc-0f07990d0ae2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:44:54.456797 kubelet[2803]: I0313 00:44:54.456050 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.456797 kubelet[2803]: I0313 00:44:54.456448 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:44:54.457180 kubelet[2803]: I0313 00:44:54.457161 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.457217 kubelet[2803]: I0313 00:44:54.457182 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 13 00:44:54.458579 kubelet[2803]: I0313 00:44:54.458562 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a0da43-54f3-49fc-81da-8cd1f986a554-kube-api-access-4jll7" (OuterVolumeSpecName: "kube-api-access-4jll7") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "kube-api-access-4jll7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:44:54.459006 kubelet[2803]: I0313 00:44:54.458991 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27a53ec-f9b4-4365-8fbc-0f07990d0ae2-kube-api-access-f9t48" (OuterVolumeSpecName: "kube-api-access-f9t48") pod "e27a53ec-f9b4-4365-8fbc-0f07990d0ae2" (UID: "e27a53ec-f9b4-4365-8fbc-0f07990d0ae2"). InnerVolumeSpecName "kube-api-access-f9t48". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:44:54.460143 kubelet[2803]: I0313 00:44:54.460121 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8a0da43-54f3-49fc-81da-8cd1f986a554-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:44:54.460476 kubelet[2803]: I0313 00:44:54.460461 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a0da43-54f3-49fc-81da-8cd1f986a554-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a8a0da43-54f3-49fc-81da-8cd1f986a554" (UID: "a8a0da43-54f3-49fc-81da-8cd1f986a554"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:44:54.540694 systemd[1]: Removed slice kubepods-burstable-poda8a0da43_54f3_49fc_81da_8cd1f986a554.slice - libcontainer container kubepods-burstable-poda8a0da43_54f3_49fc_81da_8cd1f986a554.slice. Mar 13 00:44:54.540799 systemd[1]: kubepods-burstable-poda8a0da43_54f3_49fc_81da_8cd1f986a554.slice: Consumed 5.966s CPU time, 128.8M memory peak, 128K read from disk, 13.3M written to disk. Mar 13 00:44:54.542354 systemd[1]: Removed slice kubepods-besteffort-pode27a53ec_f9b4_4365_8fbc_0f07990d0ae2.slice - libcontainer container kubepods-besteffort-pode27a53ec_f9b4_4365_8fbc_0f07990d0ae2.slice. 
Mar 13 00:44:54.554237 kubelet[2803]: I0313 00:44:54.554209 2803 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8a0da43-54f3-49fc-81da-8cd1f986a554-clustermesh-secrets\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554237 kubelet[2803]: I0313 00:44:54.554234 2803 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-etc-cni-netd\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554237 kubelet[2803]: I0313 00:44:54.554243 2803 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-host-proc-sys-kernel\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554375 kubelet[2803]: I0313 00:44:54.554251 2803 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cni-path\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554375 kubelet[2803]: I0313 00:44:54.554259 2803 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-hostproc\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554375 kubelet[2803]: I0313 00:44:54.554268 2803 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-host-proc-sys-net\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554375 kubelet[2803]: I0313 00:44:54.554276 2803 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-xtables-lock\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 
13 00:44:54.554375 kubelet[2803]: I0313 00:44:54.554282 2803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4jll7\" (UniqueName: \"kubernetes.io/projected/a8a0da43-54f3-49fc-81da-8cd1f986a554-kube-api-access-4jll7\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554375 kubelet[2803]: I0313 00:44:54.554289 2803 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e27a53ec-f9b4-4365-8fbc-0f07990d0ae2-cilium-config-path\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554375 kubelet[2803]: I0313 00:44:54.554296 2803 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-cgroup\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554375 kubelet[2803]: I0313 00:44:54.554303 2803 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-lib-modules\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554520 kubelet[2803]: I0313 00:44:54.554309 2803 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-run\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554520 kubelet[2803]: I0313 00:44:54.554315 2803 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8a0da43-54f3-49fc-81da-8cd1f986a554-bpf-maps\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554520 kubelet[2803]: I0313 00:44:54.554321 2803 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8a0da43-54f3-49fc-81da-8cd1f986a554-cilium-config-path\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" 
Mar 13 00:44:54.554520 kubelet[2803]: I0313 00:44:54.554327 2803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f9t48\" (UniqueName: \"kubernetes.io/projected/e27a53ec-f9b4-4365-8fbc-0f07990d0ae2-kube-api-access-f9t48\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.554520 kubelet[2803]: I0313 00:44:54.554334 2803 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8a0da43-54f3-49fc-81da-8cd1f986a554-hubble-tls\") on node \"ci-4459-2-4-n-8f702bd38e\" DevicePath \"\"" Mar 13 00:44:54.630536 kubelet[2803]: E0313 00:44:54.630482 2803 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 13 00:44:54.996262 kubelet[2803]: I0313 00:44:54.996165 2803 scope.go:117] "RemoveContainer" containerID="ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a" Mar 13 00:44:55.001545 containerd[1633]: time="2026-03-13T00:44:55.001437468Z" level=info msg="RemoveContainer for \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\"" Mar 13 00:44:55.008919 containerd[1633]: time="2026-03-13T00:44:55.008890791Z" level=info msg="RemoveContainer for \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\" returns successfully" Mar 13 00:44:55.009137 kubelet[2803]: I0313 00:44:55.009125 2803 scope.go:117] "RemoveContainer" containerID="ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a" Mar 13 00:44:55.009352 containerd[1633]: time="2026-03-13T00:44:55.009301595Z" level=error msg="ContainerStatus for \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\": not found" Mar 13 00:44:55.009416 kubelet[2803]: E0313 00:44:55.009403 2803 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\": not found" containerID="ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a" Mar 13 00:44:55.009454 kubelet[2803]: I0313 00:44:55.009423 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a"} err="failed to get container status \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce1deb93fddd030fce61da6e4f69dd98692f36676b76fbe5ea9d0112bfa0881a\": not found" Mar 13 00:44:55.009512 kubelet[2803]: I0313 00:44:55.009455 2803 scope.go:117] "RemoveContainer" containerID="5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6" Mar 13 00:44:55.010846 containerd[1633]: time="2026-03-13T00:44:55.010828255Z" level=info msg="RemoveContainer for \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\"" Mar 13 00:44:55.015719 containerd[1633]: time="2026-03-13T00:44:55.015669911Z" level=info msg="RemoveContainer for \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\" returns successfully" Mar 13 00:44:55.015938 kubelet[2803]: I0313 00:44:55.015915 2803 scope.go:117] "RemoveContainer" containerID="83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db" Mar 13 00:44:55.018601 containerd[1633]: time="2026-03-13T00:44:55.018577827Z" level=info msg="RemoveContainer for \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\"" Mar 13 00:44:55.024547 containerd[1633]: time="2026-03-13T00:44:55.024516240Z" level=info msg="RemoveContainer for \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\" returns successfully" Mar 13 00:44:55.027169 kubelet[2803]: I0313 00:44:55.027117 2803 
scope.go:117] "RemoveContainer" containerID="1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9" Mar 13 00:44:55.031129 containerd[1633]: time="2026-03-13T00:44:55.031105263Z" level=info msg="RemoveContainer for \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\"" Mar 13 00:44:55.036054 containerd[1633]: time="2026-03-13T00:44:55.036031563Z" level=info msg="RemoveContainer for \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\" returns successfully" Mar 13 00:44:55.036176 kubelet[2803]: I0313 00:44:55.036162 2803 scope.go:117] "RemoveContainer" containerID="b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f" Mar 13 00:44:55.037407 containerd[1633]: time="2026-03-13T00:44:55.037385188Z" level=info msg="RemoveContainer for \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\"" Mar 13 00:44:55.040957 containerd[1633]: time="2026-03-13T00:44:55.040902916Z" level=info msg="RemoveContainer for \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\" returns successfully" Mar 13 00:44:55.041087 kubelet[2803]: I0313 00:44:55.041031 2803 scope.go:117] "RemoveContainer" containerID="8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca" Mar 13 00:44:55.042211 containerd[1633]: time="2026-03-13T00:44:55.042194814Z" level=info msg="RemoveContainer for \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\"" Mar 13 00:44:55.045190 containerd[1633]: time="2026-03-13T00:44:55.045168333Z" level=info msg="RemoveContainer for \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\" returns successfully" Mar 13 00:44:55.045318 kubelet[2803]: I0313 00:44:55.045299 2803 scope.go:117] "RemoveContainer" containerID="5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6" Mar 13 00:44:55.045487 containerd[1633]: time="2026-03-13T00:44:55.045466432Z" level=error msg="ContainerStatus for \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\": not found" Mar 13 00:44:55.045619 kubelet[2803]: E0313 00:44:55.045606 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\": not found" containerID="5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6" Mar 13 00:44:55.045686 kubelet[2803]: I0313 00:44:55.045669 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6"} err="failed to get container status \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b3fb2dd1c7936796978f76e02a72318d41ba2926ea6f87e32832de100f930f6\": not found" Mar 13 00:44:55.045735 kubelet[2803]: I0313 00:44:55.045729 2803 scope.go:117] "RemoveContainer" containerID="83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db" Mar 13 00:44:55.045928 containerd[1633]: time="2026-03-13T00:44:55.045900933Z" level=error msg="ContainerStatus for \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\": not found" Mar 13 00:44:55.046004 kubelet[2803]: E0313 00:44:55.045989 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\": not found" containerID="83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db" Mar 13 00:44:55.046038 kubelet[2803]: I0313 
00:44:55.046010 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db"} err="failed to get container status \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\": rpc error: code = NotFound desc = an error occurred when try to find container \"83a7aeaf815ade4e3920c102fbebb4adbb567e6a54bdc79a597c326a2cfab7db\": not found" Mar 13 00:44:55.046038 kubelet[2803]: I0313 00:44:55.046024 2803 scope.go:117] "RemoveContainer" containerID="1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9" Mar 13 00:44:55.046237 kubelet[2803]: E0313 00:44:55.046227 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\": not found" containerID="1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9" Mar 13 00:44:55.046266 containerd[1633]: time="2026-03-13T00:44:55.046148213Z" level=error msg="ContainerStatus for \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\": not found" Mar 13 00:44:55.046307 kubelet[2803]: I0313 00:44:55.046239 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9"} err="failed to get container status \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"1520d370296e0dc682df2fc267799bb8a0a0d9ebb26d60c43fa3bccb42f5f8b9\": not found" Mar 13 00:44:55.046307 kubelet[2803]: I0313 00:44:55.046248 2803 scope.go:117] "RemoveContainer" 
containerID="b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f" Mar 13 00:44:55.046404 containerd[1633]: time="2026-03-13T00:44:55.046355797Z" level=error msg="ContainerStatus for \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\": not found" Mar 13 00:44:55.046494 kubelet[2803]: E0313 00:44:55.046484 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\": not found" containerID="b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f" Mar 13 00:44:55.046542 kubelet[2803]: I0313 00:44:55.046496 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f"} err="failed to get container status \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4f1418225e2b706949230df5a154781a09db4e0614c3f56131cc60cb097070f\": not found" Mar 13 00:44:55.046542 kubelet[2803]: I0313 00:44:55.046509 2803 scope.go:117] "RemoveContainer" containerID="8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca" Mar 13 00:44:55.046702 containerd[1633]: time="2026-03-13T00:44:55.046607770Z" level=error msg="ContainerStatus for \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\": not found" Mar 13 00:44:55.046803 kubelet[2803]: E0313 00:44:55.046752 2803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\": not found" containerID="8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca" Mar 13 00:44:55.046851 kubelet[2803]: I0313 00:44:55.046767 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca"} err="failed to get container status \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\": rpc error: code = NotFound desc = an error occurred when try to find container \"8909feb078ff30d6ec548fd77297a6ffafedf6ad320aaa6dd27bec735843caca\": not found" Mar 13 00:44:55.212450 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630-shm.mount: Deactivated successfully. Mar 13 00:44:55.213863 systemd[1]: var-lib-kubelet-pods-e27a53ec\x2df9b4\x2d4365\x2d8fbc\x2d0f07990d0ae2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df9t48.mount: Deactivated successfully. Mar 13 00:44:55.213970 systemd[1]: var-lib-kubelet-pods-a8a0da43\x2d54f3\x2d49fc\x2d81da\x2d8cd1f986a554-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jll7.mount: Deactivated successfully. Mar 13 00:44:55.214023 systemd[1]: var-lib-kubelet-pods-a8a0da43\x2d54f3\x2d49fc\x2d81da\x2d8cd1f986a554-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 13 00:44:55.214084 systemd[1]: var-lib-kubelet-pods-a8a0da43\x2d54f3\x2d49fc\x2d81da\x2d8cd1f986a554-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 13 00:44:56.177088 sshd[4350]: Connection closed by 4.153.228.146 port 36734 Mar 13 00:44:56.177944 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:56.181610 systemd-logind[1610]: Session 21 logged out. Waiting for processes to exit. 
Mar 13 00:44:56.182419 systemd[1]: sshd@20-10.0.0.185:22-4.153.228.146:36734.service: Deactivated successfully. Mar 13 00:44:56.184015 systemd[1]: session-21.scope: Deactivated successfully. Mar 13 00:44:56.185441 systemd-logind[1610]: Removed session 21. Mar 13 00:44:56.289060 systemd[1]: Started sshd@21-10.0.0.185:22-4.153.228.146:36736.service - OpenSSH per-connection server daemon (4.153.228.146:36736). Mar 13 00:44:56.535959 kubelet[2803]: I0313 00:44:56.535918 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a0da43-54f3-49fc-81da-8cd1f986a554" path="/var/lib/kubelet/pods/a8a0da43-54f3-49fc-81da-8cd1f986a554/volumes" Mar 13 00:44:56.536973 kubelet[2803]: I0313 00:44:56.536882 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27a53ec-f9b4-4365-8fbc-0f07990d0ae2" path="/var/lib/kubelet/pods/e27a53ec-f9b4-4365-8fbc-0f07990d0ae2/volumes" Mar 13 00:44:56.802033 sshd[4492]: Accepted publickey for core from 4.153.228.146 port 36736 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE Mar 13 00:44:56.802860 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:44:56.806603 systemd-logind[1610]: New session 22 of user core. Mar 13 00:44:56.810897 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 13 00:44:57.510720 systemd[1]: Created slice kubepods-burstable-pod74a7ce94_62b2_41e3_b1be_057b4f6b0f10.slice - libcontainer container kubepods-burstable-pod74a7ce94_62b2_41e3_b1be_057b4f6b0f10.slice. 
Mar 13 00:44:57.569013 kubelet[2803]: I0313 00:44:57.568969 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-hubble-tls\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569463 kubelet[2803]: I0313 00:44:57.569054 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-etc-cni-netd\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569463 kubelet[2803]: I0313 00:44:57.569083 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-cilium-config-path\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569463 kubelet[2803]: I0313 00:44:57.569101 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-bpf-maps\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569463 kubelet[2803]: I0313 00:44:57.569114 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-hostproc\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569463 kubelet[2803]: I0313 00:44:57.569130 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-cilium-ipsec-secrets\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569463 kubelet[2803]: I0313 00:44:57.569142 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-host-proc-sys-net\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569585 kubelet[2803]: I0313 00:44:57.569157 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-cilium-cgroup\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569585 kubelet[2803]: I0313 00:44:57.569171 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-lib-modules\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569585 kubelet[2803]: I0313 00:44:57.569183 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-xtables-lock\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569585 kubelet[2803]: I0313 00:44:57.569198 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-cilium-run\") pod 
\"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569585 kubelet[2803]: I0313 00:44:57.569211 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb4zr\" (UniqueName: \"kubernetes.io/projected/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-kube-api-access-pb4zr\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569585 kubelet[2803]: I0313 00:44:57.569228 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-cni-path\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569699 kubelet[2803]: I0313 00:44:57.569262 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-clustermesh-secrets\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.569699 kubelet[2803]: I0313 00:44:57.569286 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74a7ce94-62b2-41e3-b1be-057b4f6b0f10-host-proc-sys-kernel\") pod \"cilium-nk87n\" (UID: \"74a7ce94-62b2-41e3-b1be-057b4f6b0f10\") " pod="kube-system/cilium-nk87n" Mar 13 00:44:57.599998 sshd[4495]: Connection closed by 4.153.228.146 port 36736 Mar 13 00:44:57.600942 sshd-session[4492]: pam_unix(sshd:session): session closed for user core Mar 13 00:44:57.603564 systemd-logind[1610]: Session 22 logged out. Waiting for processes to exit. 
Mar 13 00:44:57.605100 systemd[1]: sshd@21-10.0.0.185:22-4.153.228.146:36736.service: Deactivated successfully. Mar 13 00:44:57.607148 systemd[1]: session-22.scope: Deactivated successfully. Mar 13 00:44:57.609202 systemd-logind[1610]: Removed session 22. Mar 13 00:44:57.703718 systemd[1]: Started sshd@22-10.0.0.185:22-4.153.228.146:36744.service - OpenSSH per-connection server daemon (4.153.228.146:36744). Mar 13 00:44:57.818050 containerd[1633]: time="2026-03-13T00:44:57.817597299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nk87n,Uid:74a7ce94-62b2-41e3-b1be-057b4f6b0f10,Namespace:kube-system,Attempt:0,}" Mar 13 00:44:57.836353 containerd[1633]: time="2026-03-13T00:44:57.836315407Z" level=info msg="connecting to shim d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06" address="unix:///run/containerd/s/72b0a05afabfee43303e1f46ee6d2a19b94f0880f6f985c818ac4ee196dfd60f" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:44:57.855923 systemd[1]: Started cri-containerd-d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06.scope - libcontainer container d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06. 
Mar 13 00:44:57.878852 containerd[1633]: time="2026-03-13T00:44:57.878818368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nk87n,Uid:74a7ce94-62b2-41e3-b1be-057b4f6b0f10,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\"" Mar 13 00:44:57.883402 containerd[1633]: time="2026-03-13T00:44:57.883372065Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 00:44:57.892705 containerd[1633]: time="2026-03-13T00:44:57.892674608Z" level=info msg="Container d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:44:57.899633 containerd[1633]: time="2026-03-13T00:44:57.899548692Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7\"" Mar 13 00:44:57.900463 containerd[1633]: time="2026-03-13T00:44:57.900400529Z" level=info msg="StartContainer for \"d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7\"" Mar 13 00:44:57.901295 containerd[1633]: time="2026-03-13T00:44:57.901273059Z" level=info msg="connecting to shim d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7" address="unix:///run/containerd/s/72b0a05afabfee43303e1f46ee6d2a19b94f0880f6f985c818ac4ee196dfd60f" protocol=ttrpc version=3 Mar 13 00:44:57.918919 systemd[1]: Started cri-containerd-d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7.scope - libcontainer container d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7. 
Mar 13 00:44:57.946573 containerd[1633]: time="2026-03-13T00:44:57.946477871Z" level=info msg="StartContainer for \"d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7\" returns successfully" Mar 13 00:44:57.951695 systemd[1]: cri-containerd-d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7.scope: Deactivated successfully. Mar 13 00:44:57.953313 containerd[1633]: time="2026-03-13T00:44:57.953288240Z" level=info msg="received container exit event container_id:\"d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7\" id:\"d57054518f96d985ab59dfc1e268d701e4e41de04b6b9a7afd524de129c768d7\" pid:4569 exited_at:{seconds:1773362697 nanos:953050453}" Mar 13 00:44:58.017443 containerd[1633]: time="2026-03-13T00:44:58.017403651Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 00:44:58.023951 containerd[1633]: time="2026-03-13T00:44:58.023922648Z" level=info msg="Container e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:44:58.031467 containerd[1633]: time="2026-03-13T00:44:58.031429721Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339\"" Mar 13 00:44:58.032805 containerd[1633]: time="2026-03-13T00:44:58.031865071Z" level=info msg="StartContainer for \"e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339\"" Mar 13 00:44:58.032805 containerd[1633]: time="2026-03-13T00:44:58.032520514Z" level=info msg="connecting to shim e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339" address="unix:///run/containerd/s/72b0a05afabfee43303e1f46ee6d2a19b94f0880f6f985c818ac4ee196dfd60f" 
protocol=ttrpc version=3
Mar 13 00:44:58.050951 systemd[1]: Started cri-containerd-e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339.scope - libcontainer container e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339.
Mar 13 00:44:58.077357 containerd[1633]: time="2026-03-13T00:44:58.076897434Z" level=info msg="StartContainer for \"e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339\" returns successfully"
Mar 13 00:44:58.080981 systemd[1]: cri-containerd-e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339.scope: Deactivated successfully.
Mar 13 00:44:58.082643 containerd[1633]: time="2026-03-13T00:44:58.082532840Z" level=info msg="received container exit event container_id:\"e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339\" id:\"e3ea5926376bac6318abdfee7c0c7a491cf2f4b1a1bc3628bcdfc3f31effb339\" pid:4615 exited_at:{seconds:1773362698 nanos:82008298}"
Mar 13 00:44:58.207032 sshd[4509]: Accepted publickey for core from 4.153.228.146 port 36744 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE
Mar 13 00:44:58.208219 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:44:58.212708 systemd-logind[1610]: New session 23 of user core.
Mar 13 00:44:58.219905 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 00:44:58.488418 sshd[4645]: Connection closed by 4.153.228.146 port 36744
Mar 13 00:44:58.488250 sshd-session[4509]: pam_unix(sshd:session): session closed for user core
Mar 13 00:44:58.491972 systemd[1]: sshd@22-10.0.0.185:22-4.153.228.146:36744.service: Deactivated successfully.
Mar 13 00:44:58.493901 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 00:44:58.494752 systemd-logind[1610]: Session 23 logged out. Waiting for processes to exit.
Mar 13 00:44:58.496630 systemd-logind[1610]: Removed session 23.
Mar 13 00:44:58.534420 kubelet[2803]: E0313 00:44:58.533987 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-8fqc5" podUID="30f6e4c6-61e2-458d-8d39-1d3da62b10ae"
Mar 13 00:44:58.592737 systemd[1]: Started sshd@23-10.0.0.185:22-4.153.228.146:36750.service - OpenSSH per-connection server daemon (4.153.228.146:36750).
Mar 13 00:44:59.020479 containerd[1633]: time="2026-03-13T00:44:59.020444063Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 13 00:44:59.033498 containerd[1633]: time="2026-03-13T00:44:59.032900990Z" level=info msg="Container 4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:44:59.042171 containerd[1633]: time="2026-03-13T00:44:59.042141727Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27\""
Mar 13 00:44:59.042983 containerd[1633]: time="2026-03-13T00:44:59.042872933Z" level=info msg="StartContainer for \"4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27\""
Mar 13 00:44:59.044370 containerd[1633]: time="2026-03-13T00:44:59.044350494Z" level=info msg="connecting to shim 4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27" address="unix:///run/containerd/s/72b0a05afabfee43303e1f46ee6d2a19b94f0880f6f985c818ac4ee196dfd60f" protocol=ttrpc version=3
Mar 13 00:44:59.063928 systemd[1]: Started cri-containerd-4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27.scope - libcontainer container 4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27.
Mar 13 00:44:59.106302 sshd[4652]: Accepted publickey for core from 4.153.228.146 port 36750 ssh2: RSA SHA256:vq/pKw+AvC1pwghLaTIizFiq9VFBXFrLmNBInBA4+oE
Mar 13 00:44:59.107326 sshd-session[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:44:59.111579 systemd-logind[1610]: New session 24 of user core.
Mar 13 00:44:59.118948 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 00:44:59.138642 systemd[1]: cri-containerd-4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27.scope: Deactivated successfully.
Mar 13 00:44:59.139583 containerd[1633]: time="2026-03-13T00:44:59.139557663Z" level=info msg="StartContainer for \"4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27\" returns successfully"
Mar 13 00:44:59.141499 containerd[1633]: time="2026-03-13T00:44:59.141381966Z" level=info msg="received container exit event container_id:\"4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27\" id:\"4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27\" pid:4669 exited_at:{seconds:1773362699 nanos:141150223}"
Mar 13 00:44:59.158878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cdedcc4d1fc49ca570c6aee37cd58d330a8c6af7a4f752f60da5a516ba74f27-rootfs.mount: Deactivated successfully.
Mar 13 00:44:59.431429 kubelet[2803]: I0313 00:44:59.431251 2803 setters.go:543] "Node became not ready" node="ci-4459-2-4-n-8f702bd38e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T00:44:59Z","lastTransitionTime":"2026-03-13T00:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 13 00:44:59.631332 kubelet[2803]: E0313 00:44:59.631280 2803 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 13 00:45:00.024736 containerd[1633]: time="2026-03-13T00:45:00.024638033Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 13 00:45:00.033336 containerd[1633]: time="2026-03-13T00:45:00.033269300Z" level=info msg="Container d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:00.036820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2898099937.mount: Deactivated successfully.
Mar 13 00:45:00.039565 containerd[1633]: time="2026-03-13T00:45:00.039524375Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4\""
Mar 13 00:45:00.040871 containerd[1633]: time="2026-03-13T00:45:00.040051674Z" level=info msg="StartContainer for \"d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4\""
Mar 13 00:45:00.040871 containerd[1633]: time="2026-03-13T00:45:00.040695462Z" level=info msg="connecting to shim d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4" address="unix:///run/containerd/s/72b0a05afabfee43303e1f46ee6d2a19b94f0880f6f985c818ac4ee196dfd60f" protocol=ttrpc version=3
Mar 13 00:45:00.066015 systemd[1]: Started cri-containerd-d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4.scope - libcontainer container d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4.
Mar 13 00:45:00.090197 systemd[1]: cri-containerd-d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4.scope: Deactivated successfully.
Mar 13 00:45:00.093163 containerd[1633]: time="2026-03-13T00:45:00.092756404Z" level=info msg="received container exit event container_id:\"d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4\" id:\"d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4\" pid:4715 exited_at:{seconds:1773362700 nanos:91006532}"
Mar 13 00:45:00.099680 containerd[1633]: time="2026-03-13T00:45:00.099650277Z" level=info msg="StartContainer for \"d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4\" returns successfully"
Mar 13 00:45:00.109977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4bffeacb5f240639a8e234ec9392078021e3c1eeede2465e6e81853601b8dd4-rootfs.mount: Deactivated successfully.
Mar 13 00:45:00.534593 kubelet[2803]: E0313 00:45:00.534060 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-8fqc5" podUID="30f6e4c6-61e2-458d-8d39-1d3da62b10ae"
Mar 13 00:45:01.029442 containerd[1633]: time="2026-03-13T00:45:01.029229889Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 13 00:45:01.056918 containerd[1633]: time="2026-03-13T00:45:01.055787607Z" level=info msg="Container 493cd773b409725d36e7376b867669f08ad4667c804391ac602fe5d23f73fcbe: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:01.058345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1588384509.mount: Deactivated successfully.
Mar 13 00:45:01.079444 containerd[1633]: time="2026-03-13T00:45:01.079398452Z" level=info msg="CreateContainer within sandbox \"d8842dc0407d58a9583fdf5a93795c847f2116414a8217b5eff0d388fcf2fd06\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"493cd773b409725d36e7376b867669f08ad4667c804391ac602fe5d23f73fcbe\""
Mar 13 00:45:01.080711 containerd[1633]: time="2026-03-13T00:45:01.080675898Z" level=info msg="StartContainer for \"493cd773b409725d36e7376b867669f08ad4667c804391ac602fe5d23f73fcbe\""
Mar 13 00:45:01.081796 containerd[1633]: time="2026-03-13T00:45:01.081694482Z" level=info msg="connecting to shim 493cd773b409725d36e7376b867669f08ad4667c804391ac602fe5d23f73fcbe" address="unix:///run/containerd/s/72b0a05afabfee43303e1f46ee6d2a19b94f0880f6f985c818ac4ee196dfd60f" protocol=ttrpc version=3
Mar 13 00:45:01.102938 systemd[1]: Started cri-containerd-493cd773b409725d36e7376b867669f08ad4667c804391ac602fe5d23f73fcbe.scope - libcontainer container 493cd773b409725d36e7376b867669f08ad4667c804391ac602fe5d23f73fcbe.
Mar 13 00:45:01.145804 containerd[1633]: time="2026-03-13T00:45:01.145642666Z" level=info msg="StartContainer for \"493cd773b409725d36e7376b867669f08ad4667c804391ac602fe5d23f73fcbe\" returns successfully"
Mar 13 00:45:01.416829 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256))
Mar 13 00:45:02.534212 kubelet[2803]: E0313 00:45:02.533508 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-8fqc5" podUID="30f6e4c6-61e2-458d-8d39-1d3da62b10ae"
Mar 13 00:45:04.101981 systemd-networkd[1507]: lxc_health: Link UP
Mar 13 00:45:04.102213 systemd-networkd[1507]: lxc_health: Gained carrier
Mar 13 00:45:04.533599 kubelet[2803]: E0313 00:45:04.533264 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-8fqc5" podUID="30f6e4c6-61e2-458d-8d39-1d3da62b10ae"
Mar 13 00:45:05.836198 kubelet[2803]: I0313 00:45:05.835876 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nk87n" podStartSLOduration=8.835862979 podStartE2EDuration="8.835862979s" podCreationTimestamp="2026-03-13 00:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:45:02.055006788 +0000 UTC m=+227.616087517" watchObservedRunningTime="2026-03-13 00:45:05.835862979 +0000 UTC m=+231.396943708"
Mar 13 00:45:05.900947 systemd-networkd[1507]: lxc_health: Gained IPv6LL
Mar 13 00:45:09.856824 sshd[4676]: Connection closed by 4.153.228.146 port 36750
Mar 13 00:45:09.857167 sshd-session[4652]: pam_unix(sshd:session): session closed for user core
Mar 13 00:45:09.861713 systemd[1]: sshd@23-10.0.0.185:22-4.153.228.146:36750.service: Deactivated successfully.
Mar 13 00:45:09.864079 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 00:45:09.864969 systemd-logind[1610]: Session 24 logged out. Waiting for processes to exit.
Mar 13 00:45:09.866388 systemd-logind[1610]: Removed session 24.
Mar 13 00:45:14.524969 containerd[1633]: time="2026-03-13T00:45:14.524930338Z" level=info msg="StopPodSandbox for \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\""
Mar 13 00:45:14.525788 containerd[1633]: time="2026-03-13T00:45:14.525423088Z" level=info msg="TearDown network for sandbox \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" successfully"
Mar 13 00:45:14.525788 containerd[1633]: time="2026-03-13T00:45:14.525446141Z" level=info msg="StopPodSandbox for \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" returns successfully"
Mar 13 00:45:14.525788 containerd[1633]: time="2026-03-13T00:45:14.525715698Z" level=info msg="RemovePodSandbox for \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\""
Mar 13 00:45:14.525788 containerd[1633]: time="2026-03-13T00:45:14.525734880Z" level=info msg="Forcibly stopping sandbox \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\""
Mar 13 00:45:14.526804 containerd[1633]: time="2026-03-13T00:45:14.526023154Z" level=info msg="TearDown network for sandbox \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" successfully"
Mar 13 00:45:14.526994 containerd[1633]: time="2026-03-13T00:45:14.526969504Z" level=info msg="Ensure that sandbox b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630 in task-service has been cleanup successfully"
Mar 13 00:45:14.534954 containerd[1633]: time="2026-03-13T00:45:14.534918952Z" level=info msg="RemovePodSandbox \"b8358a57336d566514d317cbac86f286a0155a89deb4ce810020657a04a87630\" returns successfully"
Mar 13 00:45:14.535198 containerd[1633]: time="2026-03-13T00:45:14.535180874Z" level=info msg="StopPodSandbox for \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\""
Mar 13 00:45:14.535276 containerd[1633]: time="2026-03-13T00:45:14.535264463Z" level=info msg="TearDown network for sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" successfully"
Mar 13 00:45:14.535301 containerd[1633]: time="2026-03-13T00:45:14.535275742Z" level=info msg="StopPodSandbox for \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" returns successfully"
Mar 13 00:45:14.535471 containerd[1633]: time="2026-03-13T00:45:14.535441138Z" level=info msg="RemovePodSandbox for \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\""
Mar 13 00:45:14.535500 containerd[1633]: time="2026-03-13T00:45:14.535474585Z" level=info msg="Forcibly stopping sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\""
Mar 13 00:45:14.535537 containerd[1633]: time="2026-03-13T00:45:14.535527486Z" level=info msg="TearDown network for sandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" successfully"
Mar 13 00:45:14.536549 containerd[1633]: time="2026-03-13T00:45:14.536525657Z" level=info msg="Ensure that sandbox cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8 in task-service has been cleanup successfully"
Mar 13 00:45:14.542168 containerd[1633]: time="2026-03-13T00:45:14.542133705Z" level=info msg="RemovePodSandbox \"cc4cb2b3a4810a5d761e23a1caa5337073cecf72a75a17092acce607704ae4b8\" returns successfully"
Mar 13 00:45:39.717725 kubelet[2803]: E0313 00:45:39.717248 2803 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-8f702bd38e?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 00:45:39.951263 kubelet[2803]: E0313 00:45:39.951223 2803 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.185:47998->10.0.0.173:2379: read: connection timed out"
Mar 13 00:45:40.062564 systemd[1]: cri-containerd-05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870.scope: Deactivated successfully.
Mar 13 00:45:40.062825 systemd[1]: cri-containerd-05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870.scope: Consumed 2.424s CPU time, 56.2M memory peak.
Mar 13 00:45:40.064256 containerd[1633]: time="2026-03-13T00:45:40.064222862Z" level=info msg="received container exit event container_id:\"05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870\" id:\"05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870\" pid:2632 exit_status:1 exited_at:{seconds:1773362740 nanos:63601588}"
Mar 13 00:45:40.085569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870-rootfs.mount: Deactivated successfully.
Mar 13 00:45:40.100621 kubelet[2803]: I0313 00:45:40.100601 2803 scope.go:117] "RemoveContainer" containerID="05f46bcde9fa72ca69a5850204e7ad5395c849aea89dfacca36c88330871b870"
Mar 13 00:45:40.102006 containerd[1633]: time="2026-03-13T00:45:40.101980399Z" level=info msg="CreateContainer within sandbox \"df68a98c3c39f85c08d9d47278daa921c7218aa2e32a4a14dd54a2509187ff13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 13 00:45:40.113717 containerd[1633]: time="2026-03-13T00:45:40.111924002Z" level=info msg="Container 8a16f73416d5ec610d1656b913d621bb1cfa261e796024782de0dbc8d4f63e18: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:40.114970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount863294502.mount: Deactivated successfully.
Mar 13 00:45:40.119589 containerd[1633]: time="2026-03-13T00:45:40.119501327Z" level=info msg="CreateContainer within sandbox \"df68a98c3c39f85c08d9d47278daa921c7218aa2e32a4a14dd54a2509187ff13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8a16f73416d5ec610d1656b913d621bb1cfa261e796024782de0dbc8d4f63e18\""
Mar 13 00:45:40.120095 containerd[1633]: time="2026-03-13T00:45:40.120076348Z" level=info msg="StartContainer for \"8a16f73416d5ec610d1656b913d621bb1cfa261e796024782de0dbc8d4f63e18\""
Mar 13 00:45:40.121063 containerd[1633]: time="2026-03-13T00:45:40.121043408Z" level=info msg="connecting to shim 8a16f73416d5ec610d1656b913d621bb1cfa261e796024782de0dbc8d4f63e18" address="unix:///run/containerd/s/9a3f6e013740be169b5a5e8ab52aa4eca98700d2387f5ec5a3e21842501a27d4" protocol=ttrpc version=3
Mar 13 00:45:40.142151 systemd[1]: Started cri-containerd-8a16f73416d5ec610d1656b913d621bb1cfa261e796024782de0dbc8d4f63e18.scope - libcontainer container 8a16f73416d5ec610d1656b913d621bb1cfa261e796024782de0dbc8d4f63e18.
Mar 13 00:45:40.190139 containerd[1633]: time="2026-03-13T00:45:40.189341226Z" level=info msg="StartContainer for \"8a16f73416d5ec610d1656b913d621bb1cfa261e796024782de0dbc8d4f63e18\" returns successfully"
Mar 13 00:45:44.047377 kubelet[2803]: E0313 00:45:44.047248 2803 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.185:47654->10.0.0.173:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-4-n-8f702bd38e.189c4002b12e2bd4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-4-n-8f702bd38e,UID:79b704288966a07d8cf61c2b0098092e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-8f702bd38e,},FirstTimestamp:2026-03-13 00:45:33.583838164 +0000 UTC m=+259.144918886,LastTimestamp:2026-03-13 00:45:33.583838164 +0000 UTC m=+259.144918886,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-8f702bd38e,}"
Mar 13 00:45:45.618661 systemd[1]: cri-containerd-a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6.scope: Deactivated successfully.
Mar 13 00:45:45.619592 systemd[1]: cri-containerd-a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6.scope: Consumed 2.587s CPU time, 22.4M memory peak.
Mar 13 00:45:45.620712 containerd[1633]: time="2026-03-13T00:45:45.620471552Z" level=info msg="received container exit event container_id:\"a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6\" id:\"a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6\" pid:2647 exit_status:1 exited_at:{seconds:1773362745 nanos:620140677}"
Mar 13 00:45:45.639656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6-rootfs.mount: Deactivated successfully.
Mar 13 00:45:46.112239 kubelet[2803]: I0313 00:45:46.112209 2803 scope.go:117] "RemoveContainer" containerID="a5d3d3982d153eb03553d05478b948186b0c0127cc9a9c4c70917d7e93545ac6"
Mar 13 00:45:46.114198 containerd[1633]: time="2026-03-13T00:45:46.114168863Z" level=info msg="CreateContainer within sandbox \"135d448c793ff1da378922312f71913c488b0cd614fc489c5b96c4f3ced0f858\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 13 00:45:46.123483 containerd[1633]: time="2026-03-13T00:45:46.122010515Z" level=info msg="Container 8f7710b9eb9a1252e33a74f6843debbd8dbffa9b334174e606f801855827a821: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:45:46.132385 containerd[1633]: time="2026-03-13T00:45:46.132329268Z" level=info msg="CreateContainer within sandbox \"135d448c793ff1da378922312f71913c488b0cd614fc489c5b96c4f3ced0f858\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8f7710b9eb9a1252e33a74f6843debbd8dbffa9b334174e606f801855827a821\""
Mar 13 00:45:46.132785 containerd[1633]: time="2026-03-13T00:45:46.132757426Z" level=info msg="StartContainer for \"8f7710b9eb9a1252e33a74f6843debbd8dbffa9b334174e606f801855827a821\""
Mar 13 00:45:46.133549 containerd[1633]: time="2026-03-13T00:45:46.133514427Z" level=info msg="connecting to shim 8f7710b9eb9a1252e33a74f6843debbd8dbffa9b334174e606f801855827a821" address="unix:///run/containerd/s/121cbfff542d368b2d337fac7dfd221d0c04273ae9b18301aed15936e4c893e5" protocol=ttrpc version=3
Mar 13 00:45:46.153957 systemd[1]: Started cri-containerd-8f7710b9eb9a1252e33a74f6843debbd8dbffa9b334174e606f801855827a821.scope - libcontainer container 8f7710b9eb9a1252e33a74f6843debbd8dbffa9b334174e606f801855827a821.
Mar 13 00:45:46.199098 containerd[1633]: time="2026-03-13T00:45:46.199062352Z" level=info msg="StartContainer for \"8f7710b9eb9a1252e33a74f6843debbd8dbffa9b334174e606f801855827a821\" returns successfully"
Mar 13 00:45:49.951814 kubelet[2803]: E0313 00:45:49.951644 2803 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-8f702bd38e?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"