Jan 23 01:44:00.931379 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:44:00.931420 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:44:00.931433 kernel: BIOS-provided physical RAM map:
Jan 23 01:44:00.931443 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 01:44:00.931456 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 01:44:00.931472 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 01:44:00.931496 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 23 01:44:00.931518 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 23 01:44:00.931527 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 01:44:00.931537 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 01:44:00.931547 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:44:00.931557 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 01:44:00.931581 kernel: NX (Execute Disable) protection: active
Jan 23 01:44:00.931596 kernel: APIC: Static calls initialized
Jan 23 01:44:00.931608 kernel: SMBIOS 2.8 present.
Jan 23 01:44:00.931619 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 23 01:44:00.931630 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:44:00.931640 kernel: Hypervisor detected: KVM
Jan 23 01:44:00.931651 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 01:44:00.931665 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:44:00.931676 kernel: kvm-clock: using sched offset of 5841336006 cycles
Jan 23 01:44:00.931688 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:44:00.931699 kernel: tsc: Detected 2799.998 MHz processor
Jan 23 01:44:00.931709 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:44:00.931721 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:44:00.931731 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 01:44:00.931742 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 01:44:00.931753 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:44:00.931768 kernel: Using GB pages for direct mapping
Jan 23 01:44:00.931779 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:44:00.931789 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 23 01:44:00.931800 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:44:00.931811 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:44:00.931822 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:44:00.931833 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 23 01:44:00.931855 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:44:00.931868 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:44:00.931883 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:44:00.931894 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:44:00.931905 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 23 01:44:00.931921 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 23 01:44:00.931932 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 23 01:44:00.931944 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 23 01:44:00.931958 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 23 01:44:00.931970 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 23 01:44:00.931981 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 23 01:44:00.931992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 23 01:44:00.932003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 23 01:44:00.932014 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 23 01:44:00.932026 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Jan 23 01:44:00.932037 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Jan 23 01:44:00.932052 kernel: Zone ranges:
Jan 23 01:44:00.932080 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:44:00.932099 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 23 01:44:00.932116 kernel: Normal empty
Jan 23 01:44:00.932127 kernel: Device empty
Jan 23 01:44:00.932138 kernel: Movable zone start for each node
Jan 23 01:44:00.932150 kernel: Early memory node ranges
Jan 23 01:44:00.932161 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 01:44:00.932172 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 23 01:44:00.932188 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 23 01:44:00.932205 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:44:00.932216 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 01:44:00.932227 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 23 01:44:00.932239 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 01:44:00.932250 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:44:00.932262 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:44:00.932273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 01:44:00.932297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:44:00.932308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:44:00.932332 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:44:00.932355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:44:00.932366 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:44:00.932376 kernel: TSC deadline timer available
Jan 23 01:44:00.932387 kernel: CPU topo: Max. logical packages: 16
Jan 23 01:44:00.932397 kernel: CPU topo: Max. logical dies: 16
Jan 23 01:44:00.932408 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:44:00.932418 kernel: CPU topo: Max. threads per core: 1
Jan 23 01:44:00.932428 kernel: CPU topo: Num. cores per package: 1
Jan 23 01:44:00.932442 kernel: CPU topo: Num. threads per package: 1
Jan 23 01:44:00.932465 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Jan 23 01:44:00.932476 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:44:00.932487 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 01:44:00.932498 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:44:00.932530 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:44:00.932541 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 23 01:44:00.932553 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jan 23 01:44:00.932564 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jan 23 01:44:00.932579 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 23 01:44:00.932590 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:44:00.932601 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:44:00.932614 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:44:00.932626 kernel: random: crng init done
Jan 23 01:44:00.932637 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:44:00.932648 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 01:44:00.932665 kernel: Fallback order for Node 0: 0
Jan 23 01:44:00.932680 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Jan 23 01:44:00.932704 kernel: Policy zone: DMA32
Jan 23 01:44:00.932715 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:44:00.932726 kernel: software IO TLB: area num 16.
Jan 23 01:44:00.932737 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 23 01:44:00.932748 kernel: Kernel/User page tables isolation: enabled
Jan 23 01:44:00.932759 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:44:00.932769 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:44:00.932793 kernel: Dynamic Preempt: voluntary
Jan 23 01:44:00.932813 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:44:00.932825 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:44:00.932857 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 23 01:44:00.932871 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:44:00.932883 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:44:00.932894 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:44:00.932905 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:44:00.932916 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 23 01:44:00.932927 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 01:44:00.932939 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 01:44:00.932961 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 01:44:00.932972 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 23 01:44:00.932984 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:44:00.933004 kernel: Console: colour VGA+ 80x25
Jan 23 01:44:00.933023 kernel: printk: legacy console [tty0] enabled
Jan 23 01:44:00.933034 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:44:00.933046 kernel: ACPI: Core revision 20240827
Jan 23 01:44:00.933058 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:44:00.933088 kernel: x2apic enabled
Jan 23 01:44:00.933101 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:44:00.933118 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 23 01:44:00.933135 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 23 01:44:00.933147 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 01:44:00.933159 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 01:44:00.933171 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 01:44:00.933183 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:44:00.933198 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:44:00.933210 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:44:00.933222 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 01:44:00.933234 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 01:44:00.933248 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 01:44:00.933260 kernel: MDS: Mitigation: Clear CPU buffers
Jan 23 01:44:00.933272 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 23 01:44:00.933295 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 23 01:44:00.933307 kernel: active return thunk: its_return_thunk
Jan 23 01:44:00.933318 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 01:44:00.933330 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:44:00.933345 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:44:00.933369 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:44:00.933381 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:44:00.933392 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 23 01:44:00.933404 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:44:00.933416 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:44:00.933427 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:44:00.933439 kernel: landlock: Up and running.
Jan 23 01:44:00.933450 kernel: SELinux: Initializing.
Jan 23 01:44:00.933462 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:44:00.933474 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:44:00.933486 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 23 01:44:00.933501 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 23 01:44:00.933513 kernel: signal: max sigframe size: 1776
Jan 23 01:44:00.933526 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:44:00.933538 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:44:00.933550 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Jan 23 01:44:00.933562 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:44:00.933574 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:44:00.933585 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:44:00.933597 kernel: .... node #0, CPUs: #1
Jan 23 01:44:00.933619 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:44:00.933631 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 23 01:44:00.933643 kernel: Memory: 1887484K/2096616K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 203116K reserved, 0K cma-reserved)
Jan 23 01:44:00.933655 kernel: devtmpfs: initialized
Jan 23 01:44:00.933667 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:44:00.933679 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:44:00.933691 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 23 01:44:00.933703 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:44:00.933715 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:44:00.933731 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:44:00.933743 kernel: audit: type=2000 audit(1769132637.620:1): state=initialized audit_enabled=0 res=1
Jan 23 01:44:00.933754 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:44:00.933766 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:44:00.933778 kernel: cpuidle: using governor menu
Jan 23 01:44:00.933790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:44:00.933801 kernel: dca service started, version 1.12.1
Jan 23 01:44:00.933816 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 01:44:00.933828 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 01:44:00.933854 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:44:00.933868 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:44:00.933880 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:44:00.933892 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:44:00.933904 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:44:00.933915 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:44:00.933927 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:44:00.933939 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:44:00.933951 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:44:00.933967 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:44:00.933980 kernel: ACPI: Interpreter enabled
Jan 23 01:44:00.933991 kernel: ACPI: PM: (supports S0 S5)
Jan 23 01:44:00.934003 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:44:00.934015 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:44:00.934027 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:44:00.934039 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 01:44:00.934051 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:44:00.934361 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:44:00.934531 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 01:44:00.934686 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 01:44:00.934705 kernel: PCI host bridge to bus 0000:00
Jan 23 01:44:00.934902 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:44:00.935047 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:44:00.935229 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:44:00.935400 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 01:44:00.935545 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 01:44:00.935699 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 23 01:44:00.935838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:44:00.936048 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:44:00.936278 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:44:00.936462 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Jan 23 01:44:00.936615 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Jan 23 01:44:00.936766 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Jan 23 01:44:00.936955 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:44:00.937179 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:44:00.937336 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Jan 23 01:44:00.937521 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 01:44:00.937692 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 01:44:00.937857 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 01:44:00.938040 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:44:00.938231 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Jan 23 01:44:00.938401 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 01:44:00.938587 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 01:44:00.938737 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 01:44:00.938935 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:44:00.941136 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Jan 23 01:44:00.941315 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 01:44:00.941473 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 01:44:00.941627 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 01:44:00.941811 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:44:00.941989 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Jan 23 01:44:00.942183 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 01:44:00.942338 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 01:44:00.942490 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 01:44:00.942673 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:44:00.942872 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Jan 23 01:44:00.943026 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 01:44:00.943221 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 01:44:00.943385 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 01:44:00.943560 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:44:00.943714 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Jan 23 01:44:00.943881 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 01:44:00.944034 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 01:44:00.944204 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 01:44:00.944374 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:44:00.944535 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Jan 23 01:44:00.944688 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 01:44:00.944883 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 01:44:00.945036 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 01:44:00.947306 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:44:00.947473 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Jan 23 01:44:00.947659 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 01:44:00.947856 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 01:44:00.948016 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 01:44:00.948224 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 01:44:00.948385 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 01:44:00.948553 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Jan 23 01:44:00.948724 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 23 01:44:00.948903 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Jan 23 01:44:00.950547 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 01:44:00.950719 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 01:44:00.950910 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Jan 23 01:44:00.951085 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 23 01:44:00.951277 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:44:00.951434 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 01:44:00.951619 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 01:44:00.951784 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Jan 23 01:44:00.951952 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Jan 23 01:44:00.952155 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 01:44:00.952310 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 01:44:00.952515 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Jan 23 01:44:00.952704 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Jan 23 01:44:00.952898 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 01:44:00.953056 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 01:44:00.953252 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 01:44:00.953467 kernel: pci_bus 0000:02: extended config space not accessible
Jan 23 01:44:00.953657 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Jan 23 01:44:00.953823 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Jan 23 01:44:00.953996 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 01:44:00.954258 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 01:44:00.954418 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Jan 23 01:44:00.954584 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 01:44:00.954778 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 01:44:00.954961 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 23 01:44:00.955129 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 01:44:00.955290 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 01:44:00.955443 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 01:44:00.955600 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 01:44:00.955761 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 01:44:00.955938 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 01:44:00.955958 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:44:00.955970 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:44:00.955982 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:44:00.956001 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:44:00.956013 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 01:44:00.956025 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 01:44:00.956037 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 01:44:00.956049 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 01:44:00.956061 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 01:44:00.956087 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 01:44:00.956099 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 01:44:00.956111 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 01:44:00.956141 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 01:44:00.956153 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 01:44:00.956165 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 01:44:00.956176 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 01:44:00.956200 kernel: iommu: Default domain type: Translated
Jan 23 01:44:00.956220 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:44:00.956232 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:44:00.956243 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:44:00.956255 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 01:44:00.956272 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 23 01:44:00.956425 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 01:44:00.956598 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 01:44:00.956749 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:44:00.956769 kernel: vgaarb: loaded
Jan 23 01:44:00.956781 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:44:00.956793 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:44:00.956805 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:44:00.956823 kernel: pnp: PnP ACPI init
Jan 23 01:44:00.957021 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 01:44:00.957042 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 01:44:00.957054 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:44:00.958160 kernel: NET: Registered PF_INET protocol family
Jan 23 01:44:00.958177 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:44:00.958196 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 23 01:44:00.958209 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:44:00.958228 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:44:00.958241 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 23 01:44:00.958253 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 23 01:44:00.958270 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:44:00.958282 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:44:00.958294 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:44:00.958306 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:44:00.958498 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 23 01:44:00.958656 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 01:44:00.958817 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 01:44:00.958985 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 01:44:00.961191 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 01:44:00.961373 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 01:44:00.961542 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 01:44:00.961700 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 01:44:00.961870 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 01:44:00.962027 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 01:44:00.962632 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 01:44:00.962799 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 23 01:44:00.962977 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 23 01:44:00.965179 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 23 01:44:00.965367 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 23 01:44:00.965554 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 23 01:44:00.965725 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 01:44:00.965938 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 01:44:00.966113 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 01:44:00.966270 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 23 01:44:00.966435 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 01:44:00.966609 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 01:44:00.966763 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 01:44:00.966936 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 23 01:44:00.969128 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 01:44:00.969304 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 01:44:00.969467 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 01:44:00.969633 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 23 01:44:00.969792 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 01:44:00.969968 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 01:44:00.970152 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 01:44:00.970308 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 23 01:44:00.970485 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 01:44:00.970648 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 01:44:00.970801 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 01:44:00.970970 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 23 01:44:00.972216 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 01:44:00.972415 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 01:44:00.972602 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 01:44:00.972771 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 23 01:44:00.972941 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 01:44:00.973122 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 01:44:00.973288 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 01:44:00.973452 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 23 01:44:00.973607 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 01:44:00.973786 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 01:44:00.973957 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 01:44:00.974183 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 23 01:44:00.974376 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 01:44:00.974529 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 01:44:00.974684 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:44:00.974825 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:44:00.974980 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:44:00.975170 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 01:44:00.975312 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 01:44:00.975451 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 23 01:44:00.975636 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 23 01:44:00.975784 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 23 01:44:00.975945 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 01:44:00.976119 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 23 01:44:00.976422 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 23 01:44:00.976653 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 23 01:44:00.976816 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 01:44:00.977045 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 23 01:44:00.977231 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 23 01:44:00.977396 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 01:44:00.977564 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 23 01:44:00.977719 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 23 01:44:00.977899 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 01:44:00.978091 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 23 01:44:00.978247 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 23 01:44:00.978404 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 01:44:00.978572 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 23 01:44:00.978748 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 23
01:44:00.978914 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 23 01:44:00.979134 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 23 01:44:00.979380 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 23 01:44:00.979528 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 23 01:44:00.979697 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 23 01:44:00.979889 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 23 01:44:00.980035 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 23 01:44:00.980056 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 01:44:00.980096 kernel: PCI: CLS 0 bytes, default 64 Jan 23 01:44:00.980117 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 01:44:00.980142 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 23 01:44:00.980155 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 01:44:00.980167 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 23 01:44:00.980180 kernel: Initialise system trusted keyrings Jan 23 01:44:00.980207 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 23 01:44:00.980219 kernel: Key type asymmetric registered Jan 23 01:44:00.980231 kernel: Asymmetric key parser 'x509' registered Jan 23 01:44:00.980267 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 01:44:00.980282 kernel: io scheduler mq-deadline registered Jan 23 01:44:00.980294 kernel: io scheduler kyber registered Jan 23 01:44:00.980313 kernel: io scheduler bfq registered Jan 23 01:44:00.980488 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 01:44:00.980653 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 01:44:00.980838 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:44:00.981012 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 01:44:00.981425 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 01:44:00.981631 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:44:00.981819 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 01:44:00.982003 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 01:44:00.982205 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:44:00.982393 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 01:44:00.982569 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 01:44:00.982725 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:44:00.982918 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 01:44:00.983099 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 01:44:00.983268 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:44:00.983428 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 01:44:00.983612 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 23 01:44:00.983816 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:44:00.984008 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 01:44:00.984230 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 01:44:00.984402 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:44:00.984558 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 01:44:00.984736 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 01:44:00.984922 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 01:44:00.984943 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 01:44:00.984958 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 01:44:00.984971 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 01:44:00.984984 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 01:44:00.984997 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 01:44:00.985017 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 01:44:00.985030 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 01:44:00.985043 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 01:44:00.985056 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 01:44:00.985270 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 01:44:00.985441 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 01:44:00.985588 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T01:44:00 UTC (1769132640) Jan 23 01:44:00.985733 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 23 01:44:00.985768 kernel: intel_pstate: CPU model not supported Jan 23 01:44:00.985782 kernel: NET: Registered PF_INET6 protocol family Jan 23 01:44:00.985795 kernel: Segment Routing with IPv6 Jan 23 01:44:00.985808 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 01:44:00.985820 kernel: NET: Registered PF_PACKET protocol family Jan 23 01:44:00.985833 kernel: Key type dns_resolver registered Jan 23 01:44:00.985857 kernel: IPI shorthand broadcast: enabled Jan 23 01:44:00.985871 kernel: 
sched_clock: Marking stable (3361150783, 219056386)->(3701336163, -121128994) Jan 23 01:44:00.985883 kernel: registered taskstats version 1 Jan 23 01:44:00.985902 kernel: Loading compiled-in X.509 certificates Jan 23 01:44:00.985915 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a' Jan 23 01:44:00.985928 kernel: Demotion targets for Node 0: null Jan 23 01:44:00.985941 kernel: Key type .fscrypt registered Jan 23 01:44:00.985953 kernel: Key type fscrypt-provisioning registered Jan 23 01:44:00.985966 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 01:44:00.985979 kernel: ima: Allocated hash algorithm: sha1 Jan 23 01:44:00.985992 kernel: ima: No architecture policies found Jan 23 01:44:00.986005 kernel: clk: Disabling unused clocks Jan 23 01:44:00.986017 kernel: Warning: unable to open an initial console. Jan 23 01:44:00.986036 kernel: Freeing unused kernel image (initmem) memory: 46196K Jan 23 01:44:00.986049 kernel: Write protecting the kernel read-only data: 40960k Jan 23 01:44:00.986062 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 01:44:00.986093 kernel: Run /init as init process Jan 23 01:44:00.986106 kernel: with arguments: Jan 23 01:44:00.986130 kernel: /init Jan 23 01:44:00.986141 kernel: with environment: Jan 23 01:44:00.986153 kernel: HOME=/ Jan 23 01:44:00.986171 kernel: TERM=linux Jan 23 01:44:00.986209 systemd[1]: Successfully made /usr/ read-only. Jan 23 01:44:00.986228 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:44:00.986255 systemd[1]: Detected virtualization kvm. 
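At this point the kernel has handed control to /init in the initrd and the remaining entries come from PID 1. On a running host, this portion of the transcript can be pulled back out of the journal with standard systemd tooling (a generic sketch; not part of the captured log):

```shell
# Kernel ring buffer for the current boot (equivalent to dmesg output above)
journalctl -k -b 0 --no-pager

# Early-boot messages from PID 1 only, as they appear from here on
journalctl -b 0 _PID=1 --no-pager | head -n 40
```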
Jan 23 01:44:00.986269 systemd[1]: Detected architecture x86-64.
Jan 23 01:44:00.986282 systemd[1]: Running in initrd.
Jan 23 01:44:00.986295 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:44:00.986309 systemd[1]: Hostname set to .
Jan 23 01:44:00.986329 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 01:44:00.986348 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:44:00.986361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:44:00.986376 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:44:00.986390 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:44:00.986411 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:44:00.986424 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:44:00.986444 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:44:00.986459 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:44:00.986481 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:44:00.986495 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:44:00.986509 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:44:00.986522 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:44:00.986536 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:44:00.986555 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:44:00.986573 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:44:00.986587 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:44:00.986615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:44:00.986629 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:44:00.986642 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:44:00.986655 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:44:00.986678 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:44:00.986691 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:44:00.986709 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:44:00.986722 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:44:00.986743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:44:00.986756 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:44:00.986770 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:44:00.986783 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:44:00.986805 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:44:00.986831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:44:00.986853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:44:00.986885 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:44:00.986899 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:44:00.986912 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:44:00.986927 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:44:00.986998 systemd-journald[210]: Collecting audit messages is disabled.
Jan 23 01:44:00.987033 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:44:00.987047 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:44:00.987060 kernel: Bridge firewalling registered
Jan 23 01:44:00.987102 systemd-journald[210]: Journal started
Jan 23 01:44:00.987134 systemd-journald[210]: Runtime Journal (/run/log/journal/911696392f7c4700a8652f2ebd042bae) is 4.7M, max 37.8M, 33.1M free.
Jan 23 01:44:00.926217 systemd-modules-load[212]: Inserted module 'overlay'
Jan 23 01:44:01.050556 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:44:00.983735 systemd-modules-load[212]: Inserted module 'br_netfilter'
Jan 23 01:44:01.051681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:44:01.052885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:44:01.059216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:44:01.062227 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:44:01.065272 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:44:01.069240 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:44:01.093875 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 01:44:01.096372 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:44:01.103364 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:44:01.106577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:44:01.110251 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:44:01.111308 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:44:01.115203 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 01:44:01.143234 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:44:01.164103 systemd-resolved[251]: Positive Trust Anchors:
Jan 23 01:44:01.164135 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:44:01.164177 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:44:01.168792 systemd-resolved[251]: Defaulting to hostname 'linux'.
Jan 23 01:44:01.170509 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:44:01.172126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:44:01.261117 kernel: SCSI subsystem initialized
Jan 23 01:44:01.272093 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 01:44:01.285094 kernel: iscsi: registered transport (tcp)
Jan 23 01:44:01.310649 kernel: iscsi: registered transport (qla4xxx)
Jan 23 01:44:01.310687 kernel: QLogic iSCSI HBA Driver
Jan 23 01:44:01.335576 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:44:01.354878 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:44:01.356442 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:44:01.420910 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:44:01.423670 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 01:44:01.485123 kernel: raid6: sse2x4 gen() 13541 MB/s
Jan 23 01:44:01.503124 kernel: raid6: sse2x2 gen() 9268 MB/s
Jan 23 01:44:01.521589 kernel: raid6: sse2x1 gen() 9487 MB/s
Jan 23 01:44:01.521670 kernel: raid6: using algorithm sse2x4 gen() 13541 MB/s
Jan 23 01:44:01.540613 kernel: raid6: .... xor() 7420 MB/s, rmw enabled
Jan 23 01:44:01.540730 kernel: raid6: using ssse3x2 recovery algorithm
Jan 23 01:44:01.566135 kernel: xor: automatically using best checksumming function avx
Jan 23 01:44:01.756143 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 01:44:01.765150 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:44:01.769626 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:44:01.802032 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 23 01:44:01.811569 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:44:01.815843 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 01:44:01.845462 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jan 23 01:44:01.877881 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:44:01.880927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:44:02.001459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:44:02.005666 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 01:44:02.138159 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 23 01:44:02.151094 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 01:44:02.160129 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 23 01:44:02.188236 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 01:44:02.188276 kernel: GPT:17805311 != 125829119
Jan 23 01:44:02.188294 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 01:44:02.190355 kernel: GPT:17805311 != 125829119
Jan 23 01:44:02.190391 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 01:44:02.192149 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:44:02.197116 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 01:44:02.197843 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:44:02.202982 kernel: libata version 3.00 loaded.
Jan 23 01:44:02.198009 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:44:02.199735 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:44:02.202338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:44:02.203551 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
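The GPT warnings here (backup header at LBA 17805311 instead of 125829119) are the usual symptom of a cloud image written to a larger disk than it was built for; disk-uuid.service repairs this later in the same boot. Done by hand, the equivalent repair could be sketched with sgdisk (device name taken from this log; run against a live disk at your own risk):

```shell
# Inspect the mismatch the kernel reported
sudo sgdisk --print /dev/vda

# Relocate the backup GPT header and entries to the true end of the disk
sudo sgdisk --move-second-header /dev/vda
```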
Jan 23 01:44:02.218395 kernel: AES CTR mode by8 optimization enabled
Jan 23 01:44:02.227103 kernel: ACPI: bus type USB registered
Jan 23 01:44:02.230806 kernel: usbcore: registered new interface driver usbfs
Jan 23 01:44:02.230847 kernel: usbcore: registered new interface driver hub
Jan 23 01:44:02.232385 kernel: usbcore: registered new device driver usb
Jan 23 01:44:02.293298 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 23 01:44:02.293646 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 23 01:44:02.295089 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 23 01:44:02.296113 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 23 01:44:02.296328 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 23 01:44:02.296535 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 23 01:44:02.296749 kernel: hub 1-0:1.0: USB hub found
Jan 23 01:44:02.296962 kernel: hub 1-0:1.0: 4 ports detected
Jan 23 01:44:02.298131 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 23 01:44:02.298371 kernel: hub 2-0:1.0: USB hub found
Jan 23 01:44:02.298578 kernel: hub 2-0:1.0: 4 ports detected
Jan 23 01:44:02.322446 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 01:44:02.426043 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 01:44:02.426323 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 01:44:02.426347 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 01:44:02.426537 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 01:44:02.426721 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 01:44:02.426915 kernel: scsi host0: ahci
Jan 23 01:44:02.427127 kernel: scsi host1: ahci
Jan 23 01:44:02.427307 kernel: scsi host2: ahci
Jan 23 01:44:02.427491 kernel: scsi host3: ahci
Jan 23 01:44:02.427675 kernel: scsi host4: ahci
Jan 23 01:44:02.427865 kernel: scsi host5: ahci
Jan 23 01:44:02.428041 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 lpm-pol 1
Jan 23 01:44:02.428061 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 lpm-pol 1
Jan 23 01:44:02.428099 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 lpm-pol 1
Jan 23 01:44:02.428117 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 lpm-pol 1
Jan 23 01:44:02.428140 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 lpm-pol 1
Jan 23 01:44:02.428158 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 lpm-pol 1
Jan 23 01:44:02.425441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:44:02.438407 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 01:44:02.450506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 01:44:02.476704 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 01:44:02.477531 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 01:44:02.480224 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 01:44:02.500295 disk-uuid[615]: Primary Header is updated.
Jan 23 01:44:02.500295 disk-uuid[615]: Secondary Entries is updated.
Jan 23 01:44:02.500295 disk-uuid[615]: Secondary Header is updated.
Jan 23 01:44:02.507050 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:44:02.512105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:44:02.534119 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 23 01:44:02.653148 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 01:44:02.653251 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 01:44:02.660875 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 23 01:44:02.660907 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 01:44:02.660925 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 01:44:02.660947 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 01:44:02.687089 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 01:44:02.694170 kernel: usbcore: registered new interface driver usbhid
Jan 23 01:44:02.694207 kernel: usbhid: USB HID core driver
Jan 23 01:44:02.703842 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jan 23 01:44:02.703885 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 23 01:44:02.724499 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:44:02.726644 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:44:02.727513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:44:02.730054 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:44:02.732057 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 01:44:02.758241 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:44:03.513981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:44:03.518293 disk-uuid[616]: The operation has completed successfully.
Jan 23 01:44:03.569728 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 01:44:03.569916 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 01:44:03.616840 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 01:44:03.644867 sh[641]: Success
Jan 23 01:44:03.668273 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 01:44:03.668369 kernel: device-mapper: uevent: version 1.0.3
Jan 23 01:44:03.672102 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 01:44:03.683095 kernel: device-mapper: verity: sha256 using shash "sha256-avx"
Jan 23 01:44:03.734589 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 01:44:03.741199 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 01:44:03.751302 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 01:44:03.764116 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (653)
Jan 23 01:44:03.767141 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 01:44:03.767209 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:44:03.780838 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 01:44:03.780923 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 01:44:03.782527 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
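verity-setup.service has now created the dm-verity mapping /dev/mapper/usr from the USR-A partition, keyed by the verity.usrhash root hash on the kernel command line. The resulting device can be inspected after boot with standard cryptsetup tooling (a generic sketch; "usr" is the mapping name from this log):

```shell
# Show the dm-verity mapping backing the read-only /usr
sudo veritysetup status usr

# The root hash the kernel was booted with
grep -o 'verity.usrhash=[0-9a-f]*' /proc/cmdline
```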
Jan 23 01:44:03.784605 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:44:03.785458 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 01:44:03.786535 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 01:44:03.789709 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 01:44:03.821341 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (686)
Jan 23 01:44:03.825232 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:44:03.825290 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:44:03.833402 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 01:44:03.833488 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 01:44:03.841116 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:44:03.843646 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 01:44:03.847229 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 01:44:03.932148 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:44:03.937338 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:44:04.015416 systemd-networkd[823]: lo: Link UP
Jan 23 01:44:04.015430 systemd-networkd[823]: lo: Gained carrier
Jan 23 01:44:04.018712 systemd-networkd[823]: Enumeration completed
Jan 23 01:44:04.019833 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:44:04.020297 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:44:04.020316 systemd-networkd[823]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:44:04.021465 systemd-networkd[823]: eth0: Link UP Jan 23 01:44:04.021696 systemd-networkd[823]: eth0: Gained carrier Jan 23 01:44:04.021718 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:44:04.027617 systemd[1]: Reached target network.target - Network. Jan 23 01:44:04.040254 systemd-networkd[823]: eth0: DHCPv4 address 10.230.49.206/30, gateway 10.230.49.205 acquired from 10.230.49.205 Jan 23 01:44:04.043680 ignition[749]: Ignition 2.22.0 Jan 23 01:44:04.043710 ignition[749]: Stage: fetch-offline Jan 23 01:44:04.043794 ignition[749]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:44:04.047275 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:44:04.043812 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:44:04.043988 ignition[749]: parsed url from cmdline: "" Jan 23 01:44:04.043996 ignition[749]: no config URL provided Jan 23 01:44:04.044010 ignition[749]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:44:04.051238 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 01:44:04.044025 ignition[749]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:44:04.044041 ignition[749]: failed to fetch config: resource requires networking Jan 23 01:44:04.044394 ignition[749]: Ignition finished successfully Jan 23 01:44:04.086865 ignition[833]: Ignition 2.22.0 Jan 23 01:44:04.086887 ignition[833]: Stage: fetch Jan 23 01:44:04.087128 ignition[833]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:44:04.087148 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:44:04.087290 ignition[833]: parsed url from cmdline: "" Jan 23 01:44:04.087297 ignition[833]: no config URL provided Jan 23 01:44:04.087306 ignition[833]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:44:04.087322 ignition[833]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:44:04.087521 ignition[833]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 23 01:44:04.088150 ignition[833]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 23 01:44:04.088351 ignition[833]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 23 01:44:04.103269 ignition[833]: GET result: OK Jan 23 01:44:04.103401 ignition[833]: parsing config with SHA512: 68a1a28d796543d584d92b2ac688407f64cddd283baf5384c520c1d6e1b0bd6b5caa85c93ecc22b7c962cdb34a8176ff766d81f07950e47981f7539ed7a876e5 Jan 23 01:44:04.115107 unknown[833]: fetched base config from "system" Jan 23 01:44:04.115129 unknown[833]: fetched base config from "system" Jan 23 01:44:04.115786 ignition[833]: fetch: fetch complete Jan 23 01:44:04.115139 unknown[833]: fetched user config from "openstack" Jan 23 01:44:04.115804 ignition[833]: fetch: fetch passed Jan 23 01:44:04.120335 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 01:44:04.115884 ignition[833]: Ignition finished successfully Jan 23 01:44:04.124407 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 01:44:04.177509 ignition[840]: Ignition 2.22.0 Jan 23 01:44:04.177531 ignition[840]: Stage: kargs Jan 23 01:44:04.177718 ignition[840]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:44:04.177735 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:44:04.181138 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 01:44:04.178909 ignition[840]: kargs: kargs passed Jan 23 01:44:04.178977 ignition[840]: Ignition finished successfully Jan 23 01:44:04.185294 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 01:44:04.220260 ignition[846]: Ignition 2.22.0 Jan 23 01:44:04.220283 ignition[846]: Stage: disks Jan 23 01:44:04.220462 ignition[846]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:44:04.220480 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:44:04.221798 ignition[846]: disks: disks passed Jan 23 01:44:04.223533 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 01:44:04.221865 ignition[846]: Ignition finished successfully Jan 23 01:44:04.225790 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 01:44:04.226772 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 01:44:04.228171 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:44:04.229587 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:44:04.231203 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:44:04.233805 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 01:44:04.279178 systemd-fsck[854]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 01:44:04.282659 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:44:04.285481 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 23 01:44:04.407102 kernel: EXT4-fs (vda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:44:04.408749 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:44:04.409990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:44:04.413184 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:44:04.416209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:44:04.417272 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 01:44:04.419255 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 23 01:44:04.420017 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:44:04.420054 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:44:04.433881 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:44:04.438281 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 01:44:04.456793 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (862) Jan 23 01:44:04.462311 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:44:04.462376 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:44:04.470109 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:44:04.470185 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:44:04.472431 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 01:44:04.520220 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:44:04.538779 initrd-setup-root[890]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:44:04.545359 initrd-setup-root[897]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:44:04.551706 initrd-setup-root[904]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:44:04.556823 initrd-setup-root[911]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:44:04.666352 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:44:04.670147 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:44:04.672216 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:44:04.697150 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:44:04.716738 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 01:44:04.745274 ignition[980]: INFO : Ignition 2.22.0 Jan 23 01:44:04.745274 ignition[980]: INFO : Stage: mount Jan 23 01:44:04.747137 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:44:04.747137 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:44:04.748848 ignition[980]: INFO : mount: mount passed Jan 23 01:44:04.748848 ignition[980]: INFO : Ignition finished successfully Jan 23 01:44:04.748943 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:44:04.763015 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 23 01:44:05.307501 systemd-networkd[823]: eth0: Gained IPv6LL Jan 23 01:44:05.549114 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:44:07.557121 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:44:07.884924 systemd-networkd[823]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c73:24:19ff:fee6:31ce/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c73:24:19ff:fee6:31ce/64 assigned by NDisc. Jan 23 01:44:07.884941 systemd-networkd[823]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 23 01:44:11.565122 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:44:11.571923 coreos-metadata[864]: Jan 23 01:44:11.571 WARN failed to locate config-drive, using the metadata service API instead Jan 23 01:44:11.596248 coreos-metadata[864]: Jan 23 01:44:11.596 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 01:44:11.608490 coreos-metadata[864]: Jan 23 01:44:11.608 INFO Fetch successful Jan 23 01:44:11.609443 coreos-metadata[864]: Jan 23 01:44:11.609 INFO wrote hostname srv-idwud.gb1.brightbox.com to /sysroot/etc/hostname Jan 23 01:44:11.612234 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 23 01:44:11.613715 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 23 01:44:11.617717 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:44:11.639724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 23 01:44:11.677225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (995) Jan 23 01:44:11.677309 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:44:11.679389 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:44:11.685464 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:44:11.685496 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:44:11.688776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:44:11.729881 ignition[1012]: INFO : Ignition 2.22.0 Jan 23 01:44:11.729881 ignition[1012]: INFO : Stage: files Jan 23 01:44:11.731776 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:44:11.731776 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:44:11.731776 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:44:11.734637 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:44:11.734637 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:44:11.742818 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:44:11.742818 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:44:11.742818 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:44:11.742818 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 01:44:11.742818 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 01:44:11.739290 unknown[1012]: wrote ssh authorized keys file for user: core 
Jan 23 01:44:11.933902 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 01:44:12.201037 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 01:44:12.201037 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 01:44:12.203753 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 23 01:44:12.399019 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 01:44:12.802045 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 01:44:12.805490 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:44:12.805490 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 01:44:12.805490 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:44:12.811286 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:44:12.811286 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:44:12.811286 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:44:12.811286 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:44:12.811286 
ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:44:12.817828 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:44:12.817828 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:44:12.817828 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:44:12.817828 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:44:12.817828 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:44:12.817828 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 01:44:13.053943 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 01:44:14.428561 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:44:14.428561 ignition[1012]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 23 01:44:14.432371 ignition[1012]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:44:14.436120 ignition[1012]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:44:14.436120 ignition[1012]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 23 01:44:14.436120 ignition[1012]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 23 01:44:14.436120 ignition[1012]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 01:44:14.436120 ignition[1012]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:44:14.436120 ignition[1012]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:44:14.436120 ignition[1012]: INFO : files: files passed Jan 23 01:44:14.436120 ignition[1012]: INFO : Ignition finished successfully Jan 23 01:44:14.441131 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:44:14.445430 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 01:44:14.449354 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 01:44:14.466554 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 01:44:14.467538 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 01:44:14.475051 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:44:14.475051 initrd-setup-root-after-ignition[1042]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:44:14.478538 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:44:14.480044 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:44:14.481436 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 23 01:44:14.483879 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:44:14.537382 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:44:14.537608 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:44:14.539391 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:44:14.540665 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:44:14.542251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:44:14.544300 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:44:14.574563 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:44:14.577346 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 01:44:14.623034 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:44:14.624193 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:44:14.625893 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:44:14.628079 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 01:44:14.628333 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:44:14.630416 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:44:14.631223 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:44:14.632676 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 01:44:14.634015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:44:14.636363 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:44:14.637607 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jan 23 01:44:14.639477 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:44:14.641002 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:44:14.642599 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:44:14.643977 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:44:14.645637 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:44:14.646884 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:44:14.647119 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:44:14.648747 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:44:14.649730 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:44:14.651155 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:44:14.651484 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:44:14.659379 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:44:14.659670 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:44:14.661386 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:44:14.661681 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:44:14.663572 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:44:14.663804 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:44:14.667328 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:44:14.668696 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:44:14.668942 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:44:14.673918 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 23 01:44:14.676812 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:44:14.677062 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:44:14.682286 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:44:14.682463 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:44:14.696057 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:44:14.696279 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 01:44:14.719162 ignition[1066]: INFO : Ignition 2.22.0 Jan 23 01:44:14.719162 ignition[1066]: INFO : Stage: umount Jan 23 01:44:14.719162 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:44:14.719162 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 01:44:14.724121 ignition[1066]: INFO : umount: umount passed Jan 23 01:44:14.724121 ignition[1066]: INFO : Ignition finished successfully Jan 23 01:44:14.727596 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 01:44:14.727799 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:44:14.729557 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:44:14.729637 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:44:14.733566 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:44:14.733684 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:44:14.735209 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 01:44:14.735287 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 01:44:14.736691 systemd[1]: Stopped target network.target - Network. Jan 23 01:44:14.737990 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:44:14.738067 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 23 01:44:14.739556 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:44:14.740893 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:44:14.741428 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:44:14.742523 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:44:14.744218 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:44:14.746743 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:44:14.746810 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:44:14.750040 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:44:14.750155 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:44:14.750827 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:44:14.750922 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:44:14.751691 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:44:14.751751 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:44:14.753297 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:44:14.755714 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:44:14.757320 systemd-networkd[823]: eth0: DHCPv6 lease lost Jan 23 01:44:14.759517 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:44:14.764602 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:44:14.764772 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:44:14.769581 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:44:14.769980 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:44:14.770197 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 23 01:44:14.773051 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:44:14.773913 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:44:14.775203 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:44:14.775291 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:44:14.777794 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:44:14.780198 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:44:14.780311 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:44:14.784632 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:44:14.784700 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:44:14.786358 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:44:14.786448 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:44:14.788268 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:44:14.788343 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:44:14.791821 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:44:14.794609 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:44:14.794698 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:44:14.806091 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:44:14.807842 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:44:14.809400 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:44:14.809637 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 23 01:44:14.811885 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:44:14.812033 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:44:14.814667 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:44:14.814755 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:44:14.816365 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:44:14.816426 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:44:14.817910 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:44:14.817985 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:44:14.820146 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:44:14.820215 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:44:14.821563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:44:14.821634 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:44:14.823196 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:44:14.823272 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:44:14.825269 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:44:14.827620 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:44:14.827692 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:44:14.829942 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:44:14.830035 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:44:14.833222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 23 01:44:14.833322 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:44:14.841261 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:44:14.841343 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 01:44:14.841420 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:44:14.852115 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:44:14.852289 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:44:14.854270 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:44:14.856609 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:44:14.879180 systemd[1]: Switching root. Jan 23 01:44:14.912907 systemd-journald[210]: Journal stopped Jan 23 01:44:16.456408 systemd-journald[210]: Received SIGTERM from PID 1 (systemd). Jan 23 01:44:16.456514 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:44:16.456539 kernel: SELinux: policy capability open_perms=1 Jan 23 01:44:16.456556 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:44:16.456574 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:44:16.456597 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:44:16.456620 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:44:16.456638 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:44:16.456656 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:44:16.456685 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:44:16.456705 kernel: audit: type=1403 audit(1769132655.194:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:44:16.456731 systemd[1]: Successfully loaded SELinux policy in 79.380ms. 
Jan 23 01:44:16.456760 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.322ms. Jan 23 01:44:16.456780 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:44:16.456800 systemd[1]: Detected virtualization kvm. Jan 23 01:44:16.456818 systemd[1]: Detected architecture x86-64. Jan 23 01:44:16.456835 systemd[1]: Detected first boot. Jan 23 01:44:16.456865 systemd[1]: Hostname set to . Jan 23 01:44:16.456886 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:44:16.456905 zram_generator::config[1109]: No configuration found. Jan 23 01:44:16.456931 kernel: Guest personality initialized and is inactive Jan 23 01:44:16.456948 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 01:44:16.456967 kernel: Initialized host personality Jan 23 01:44:16.456983 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:44:16.457002 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:44:16.457021 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:44:16.457052 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:44:16.457099 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:44:16.457121 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:44:16.457144 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:44:16.457171 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:44:16.457191 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jan 23 01:44:16.457220 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:44:16.457262 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:44:16.457295 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 01:44:16.457316 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:44:16.457334 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:44:16.457353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:44:16.457372 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:44:16.457391 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:44:16.457421 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:44:16.457442 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:44:16.457479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:44:16.457499 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:44:16.457518 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:44:16.457536 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:44:16.457567 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:44:16.457587 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:44:16.457606 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:44:16.457624 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Jan 23 01:44:16.457648 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:44:16.457667 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:44:16.457685 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:44:16.457704 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:44:16.457723 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 01:44:16.457753 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:44:16.457774 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:44:16.457792 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:44:16.457811 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:44:16.457829 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:44:16.457848 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:44:16.457867 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:44:16.457885 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:44:16.457904 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:44:16.457934 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:44:16.457955 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:44:16.457974 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:44:16.457992 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 01:44:16.458012 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 23 01:44:16.458030 systemd[1]: Reached target machines.target - Containers. Jan 23 01:44:16.458049 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 01:44:16.460106 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:44:16.460150 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:44:16.460173 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:44:16.460192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:44:16.460216 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:44:16.460234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:44:16.460253 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:44:16.460278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:44:16.460297 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:44:16.460316 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:44:16.460354 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:44:16.460375 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:44:16.460393 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:44:16.460420 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:44:16.460440 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 23 01:44:16.460476 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:44:16.460497 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:44:16.460528 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:44:16.460566 kernel: fuse: init (API version 7.41) Jan 23 01:44:16.460598 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:44:16.460628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:44:16.460649 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:44:16.460668 systemd[1]: Stopped verity-setup.service. Jan 23 01:44:16.460688 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:44:16.460708 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:44:16.460727 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:44:16.460746 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:44:16.460765 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:44:16.460795 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:44:16.460816 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:44:16.460834 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:44:16.460853 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:44:16.460873 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:44:16.460892 kernel: ACPI: bus type drm_connector registered Jan 23 01:44:16.460910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 23 01:44:16.460929 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:44:16.460960 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:44:16.460981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:44:16.461004 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:44:16.461023 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:44:16.461041 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:44:16.461060 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:44:16.461107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:44:16.461135 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 01:44:16.461155 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:44:16.461186 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:44:16.461206 kernel: loop: module loaded Jan 23 01:44:16.461225 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:44:16.461256 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:44:16.461277 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:44:16.461296 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:44:16.461314 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:44:16.461333 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:44:16.461364 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 23 01:44:16.461422 systemd-journald[1203]: Collecting audit messages is disabled. Jan 23 01:44:16.461480 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:44:16.461503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:44:16.461523 systemd-journald[1203]: Journal started Jan 23 01:44:16.461553 systemd-journald[1203]: Runtime Journal (/run/log/journal/911696392f7c4700a8652f2ebd042bae) is 4.7M, max 37.8M, 33.1M free. Jan 23 01:44:15.987576 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:44:16.011879 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 01:44:16.012669 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:44:16.465097 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:44:16.476166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:44:16.485304 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:44:16.490137 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:44:16.495253 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:44:16.496338 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:44:16.499579 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:44:16.501627 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:44:16.504229 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:44:16.507483 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:44:16.508753 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 23 01:44:16.540385 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:44:16.553847 kernel: loop0: detected capacity change from 0 to 224512 Jan 23 01:44:16.550958 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:44:16.557380 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:44:16.558246 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:44:16.571337 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:44:16.575825 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:44:16.602516 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:44:16.603146 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:44:16.626815 systemd-journald[1203]: Time spent on flushing to /var/log/journal/911696392f7c4700a8652f2ebd042bae is 46.835ms for 1174 entries. Jan 23 01:44:16.626815 systemd-journald[1203]: System Journal (/var/log/journal/911696392f7c4700a8652f2ebd042bae) is 8M, max 584.8M, 576.8M free. Jan 23 01:44:16.719354 systemd-journald[1203]: Received client request to flush runtime journal. Jan 23 01:44:16.719426 kernel: loop1: detected capacity change from 0 to 110984 Jan 23 01:44:16.719464 kernel: loop2: detected capacity change from 0 to 8 Jan 23 01:44:16.719488 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 01:44:16.646053 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:44:16.699163 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:44:16.717209 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:44:16.722281 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 23 01:44:16.784616 kernel: loop4: detected capacity change from 0 to 224512 Jan 23 01:44:16.799685 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 23 01:44:16.799723 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 23 01:44:16.818432 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:44:16.828273 kernel: loop5: detected capacity change from 0 to 110984 Jan 23 01:44:16.867735 kernel: loop6: detected capacity change from 0 to 8 Jan 23 01:44:16.873118 kernel: loop7: detected capacity change from 0 to 128560 Jan 23 01:44:16.887084 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 23 01:44:16.888457 (sd-merge)[1272]: Merged extensions into '/usr'. Jan 23 01:44:16.898913 systemd[1]: Reload requested from client PID 1228 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:44:16.899061 systemd[1]: Reloading... Jan 23 01:44:17.050888 zram_generator::config[1299]: No configuration found. Jan 23 01:44:17.229938 ldconfig[1224]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:44:17.452539 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:44:17.452692 systemd[1]: Reloading finished in 552 ms. Jan 23 01:44:17.481352 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:44:17.482733 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:44:17.483942 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:44:17.494977 systemd[1]: Starting ensure-sysext.service... Jan 23 01:44:17.499340 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:44:17.502332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 23 01:44:17.526628 systemd[1]: Reload requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:44:17.526649 systemd[1]: Reloading... Jan 23 01:44:17.552049 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:44:17.552331 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:44:17.552781 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:44:17.555853 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:44:17.557242 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:44:17.557630 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Jan 23 01:44:17.557748 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Jan 23 01:44:17.565330 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:44:17.565345 systemd-tmpfiles[1357]: Skipping /boot Jan 23 01:44:17.571486 systemd-udevd[1358]: Using default interface naming scheme 'v255'. Jan 23 01:44:17.604909 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:44:17.604926 systemd-tmpfiles[1357]: Skipping /boot Jan 23 01:44:17.636101 zram_generator::config[1384]: No configuration found. Jan 23 01:44:17.992153 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:44:18.022097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 01:44:18.048146 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:44:18.065883 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:44:18.066873 systemd[1]: Reloading finished in 539 ms. 
Jan 23 01:44:18.077160 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:44:18.088598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:44:18.123928 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:44:18.125762 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:44:18.130515 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:44:18.131516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:44:18.134510 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:44:18.143087 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 01:44:18.147100 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 01:44:18.147994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:44:18.150487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:44:18.151400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:44:18.151574 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:44:18.153224 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:44:18.158577 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:44:18.165519 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 23 01:44:18.169442 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:44:18.173487 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:44:18.180782 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:44:18.181057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:44:18.181350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:44:18.181513 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:44:18.181648 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:44:18.189534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:44:18.191479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:44:18.214787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 01:44:18.218151 systemd[1]: Finished ensure-sysext.service. Jan 23 01:44:18.223686 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:44:18.223941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:44:18.231120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 23 01:44:18.232293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:44:18.241297 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:44:18.242632 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:44:18.243037 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:44:18.249183 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 01:44:18.250331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:44:18.257266 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:44:18.258398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:44:18.258722 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:44:18.269772 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:44:18.275547 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:44:18.285509 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:44:18.294726 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:44:18.308183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:44:18.309448 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 23 01:44:18.312609 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:44:18.314291 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:44:18.326577 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:44:18.327693 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:44:18.348247 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:44:18.357424 augenrules[1523]: No rules Jan 23 01:44:18.358659 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:44:18.359508 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:44:18.363868 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:44:18.437022 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:44:18.512473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:44:18.692644 systemd-networkd[1479]: lo: Link UP Jan 23 01:44:18.692657 systemd-networkd[1479]: lo: Gained carrier Jan 23 01:44:18.697878 systemd-networkd[1479]: Enumeration completed Jan 23 01:44:18.698050 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:44:18.698533 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:44:18.698547 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 01:44:18.700871 systemd-networkd[1479]: eth0: Link UP Jan 23 01:44:18.701181 systemd-networkd[1479]: eth0: Gained carrier Jan 23 01:44:18.701202 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:44:18.718180 systemd-networkd[1479]: eth0: DHCPv4 address 10.230.49.206/30, gateway 10.230.49.205 acquired from 10.230.49.205 Jan 23 01:44:18.791785 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:44:18.798339 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:44:18.799308 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 01:44:18.800587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:44:18.804646 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:44:18.848908 systemd-resolved[1480]: Positive Trust Anchors: Jan 23 01:44:18.849380 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:44:18.849446 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:44:18.856832 systemd-resolved[1480]: Using system hostname 'srv-idwud.gb1.brightbox.com'. Jan 23 01:44:18.859554 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 23 01:44:18.862355 systemd[1]: Reached target network.target - Network. Jan 23 01:44:18.863079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:44:18.864356 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:44:18.865323 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:44:18.866297 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:44:18.867380 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:44:18.868577 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:44:18.869820 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:44:18.870810 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:44:18.871842 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:44:18.872036 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:44:18.872797 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:44:18.874954 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:44:18.878238 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:44:18.882659 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:44:18.883773 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:44:18.884593 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:44:18.895816 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:44:18.896966 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jan 23 01:44:18.898861 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:44:18.899905 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:44:18.902071 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:44:18.903012 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:44:18.903833 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:44:18.903900 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:44:18.905469 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:44:18.908290 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:44:18.911632 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:44:18.917467 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:44:18.920167 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:44:18.924235 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:44:18.924935 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:44:18.929840 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:44:18.933861 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:44:18.938355 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 01:44:18.940100 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 01:44:18.941784 jq[1561]: false Jan 23 01:44:18.946401 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 23 01:44:18.951457 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:44:18.957539 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:44:18.960334 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:44:18.964617 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:44:18.970397 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:44:18.976286 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:44:18.992371 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:44:18.993239 extend-filesystems[1562]: Found /dev/vda6 Jan 23 01:44:18.993644 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:44:18.993959 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:44:19.016110 extend-filesystems[1562]: Found /dev/vda9 Jan 23 01:44:19.008942 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:44:19.009401 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:44:19.018107 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing passwd entry cache Jan 23 01:44:19.017732 oslogin_cache_refresh[1563]: Refreshing passwd entry cache Jan 23 01:44:19.024216 extend-filesystems[1562]: Checking size of /dev/vda9 Jan 23 01:44:19.043636 jq[1575]: true Jan 23 01:44:19.053737 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting users, quitting Jan 23 01:44:19.053737 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 23 01:44:19.053737 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing group entry cache
Jan 23 01:44:19.052097 oslogin_cache_refresh[1563]: Failure getting users, quitting
Jan 23 01:44:19.052132 oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 01:44:19.052203 oslogin_cache_refresh[1563]: Refreshing group entry cache
Jan 23 01:44:19.059142 oslogin_cache_refresh[1563]: Failure getting groups, quitting
Jan 23 01:44:19.062373 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting groups, quitting
Jan 23 01:44:19.062373 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 01:44:19.059162 oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 01:44:19.064320 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 01:44:19.064648 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 01:44:19.071697 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 01:44:19.074294 update_engine[1574]: I20260123 01:44:19.072532 1574 main.cc:92] Flatcar Update Engine starting
Jan 23 01:44:19.083720 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 01:44:19.085220 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 01:44:19.089111 tar[1583]: linux-amd64/LICENSE
Jan 23 01:44:19.089111 tar[1583]: linux-amd64/helm
Jan 23 01:44:19.811563 extend-filesystems[1562]: Resized partition /dev/vda9
Jan 23 01:44:19.812109 systemd-timesyncd[1495]: Contacted time server 194.213.3.203:123 (0.flatcar.pool.ntp.org).
Jan 23 01:44:19.812188 systemd-timesyncd[1495]: Initial clock synchronization to Fri 2026-01-23 01:44:19.811313 UTC.
Jan 23 01:44:19.812960 systemd-resolved[1480]: Clock change detected. Flushing caches.
Jan 23 01:44:19.817916 extend-filesystems[1606]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 01:44:19.826072 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 01:44:19.825396 dbus-daemon[1559]: [system] SELinux support is enabled
Jan 23 01:44:19.835823 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 23 01:44:19.831227 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 01:44:19.829624 dbus-daemon[1559]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1479 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 23 01:44:19.833453 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 01:44:19.833498 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 01:44:19.855959 update_engine[1574]: I20260123 01:44:19.836781 1574 update_check_scheduler.cc:74] Next update check in 3m29s
Jan 23 01:44:19.840615 dbus-daemon[1559]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 23 01:44:19.837108 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 01:44:19.837135 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 01:44:19.849901 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 01:44:19.860242 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 23 01:44:19.870240 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 01:44:19.892442 jq[1596]: true
Jan 23 01:44:19.995158 systemd-logind[1570]: Watching system buttons on /dev/input/event3 (Power Button)
Jan 23 01:44:19.995266 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 23 01:44:19.998801 systemd-logind[1570]: New seat seat0.
Jan 23 01:44:20.010025 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 01:44:20.109217 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 23 01:44:20.130043 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 23 01:44:20.130043 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 23 01:44:20.130043 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 23 01:44:20.144376 extend-filesystems[1562]: Resized filesystem in /dev/vda9
Jan 23 01:44:20.133449 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 01:44:20.151077 bash[1626]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 01:44:20.135371 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 01:44:20.141986 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 01:44:20.158318 systemd[1]: Starting sshkeys.service...
Jan 23 01:44:20.224091 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 23 01:44:20.232598 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 23 01:44:20.237639 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
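The grow-then-resize flow logged here (the partition is enlarged, then resize2fs extends /dev/vda9 online from 1617920 to 15121403 blocks) can be replayed unprivileged against a throwaway image file. This is an illustrative sketch, not the node's `extend-filesystems` script; the image path and sizes are made up.

```shell
# Replay of the grow-then-resize flow from the log, using a scratch image
# file instead of a real block device (no root required).
img=$(mktemp)
truncate -s 16M "$img"
mkfs.ext4 -q -F "$img"                        # create a small ext4 filesystem
before=$(dumpe2fs -h "$img" 2>/dev/null | awk '/^Block count:/ {print $3}')
truncate -s 32M "$img"                        # simulate the partition growing
e2fsck -f -p "$img" >/dev/null 2>&1           # resize2fs wants a checked fs
resize2fs "$img" >/dev/null 2>&1              # extend the fs to fill the device
after=$(dumpe2fs -h "$img" 2>/dev/null | awk '/^Block count:/ {print $3}')
echo "block count: $before -> $after"
rm -f "$img"
```

On the real host the same thing happens against the mounted root device, which is why the kernel reports "on-line resizing required" rather than an offline resize.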
Jan 23 01:44:20.256273 dbus-daemon[1559]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 23 01:44:20.263963 dbus-daemon[1559]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1607 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 23 01:44:20.274095 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 23 01:44:20.308856 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:44:20.368905 sshd_keygen[1603]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 01:44:20.407584 containerd[1598]: time="2026-01-23T01:44:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 01:44:20.409760 containerd[1598]: time="2026-01-23T01:44:20.408300289Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 01:44:20.424097 containerd[1598]: time="2026-01-23T01:44:20.424047891Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.562µs"
Jan 23 01:44:20.424097 containerd[1598]: time="2026-01-23T01:44:20.424092223Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 01:44:20.424234 containerd[1598]: time="2026-01-23T01:44:20.424138728Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 01:44:20.424430 containerd[1598]: time="2026-01-23T01:44:20.424402372Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 01:44:20.424504 containerd[1598]: time="2026-01-23T01:44:20.424436288Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 01:44:20.424504 containerd[1598]: time="2026-01-23T01:44:20.424496845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 01:44:20.425557 containerd[1598]: time="2026-01-23T01:44:20.424626706Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 01:44:20.425557 containerd[1598]: time="2026-01-23T01:44:20.424677692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 01:44:20.425557 containerd[1598]: time="2026-01-23T01:44:20.425305657Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 01:44:20.425557 containerd[1598]: time="2026-01-23T01:44:20.425330324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 01:44:20.425557 containerd[1598]: time="2026-01-23T01:44:20.425353278Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 01:44:20.425557 containerd[1598]: time="2026-01-23T01:44:20.425370842Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 01:44:20.425816 containerd[1598]: time="2026-01-23T01:44:20.425531963Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 01:44:20.427896 containerd[1598]: time="2026-01-23T01:44:20.426099270Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 01:44:20.427896 containerd[1598]: time="2026-01-23T01:44:20.426173060Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 01:44:20.427896 containerd[1598]: time="2026-01-23T01:44:20.426225283Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 01:44:20.427896 containerd[1598]: time="2026-01-23T01:44:20.426278508Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 01:44:20.427896 containerd[1598]: time="2026-01-23T01:44:20.426611714Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 01:44:20.427896 containerd[1598]: time="2026-01-23T01:44:20.426715499Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 01:44:20.430679 containerd[1598]: time="2026-01-23T01:44:20.430641269Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 01:44:20.430735 containerd[1598]: time="2026-01-23T01:44:20.430710044Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 01:44:20.430770 containerd[1598]: time="2026-01-23T01:44:20.430734891Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 01:44:20.430770 containerd[1598]: time="2026-01-23T01:44:20.430758779Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 01:44:20.430854 containerd[1598]: time="2026-01-23T01:44:20.430787608Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 01:44:20.430854 containerd[1598]: time="2026-01-23T01:44:20.430816444Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 01:44:20.430941 containerd[1598]: time="2026-01-23T01:44:20.430850548Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 01:44:20.430941 containerd[1598]: time="2026-01-23T01:44:20.430901808Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 01:44:20.430941 containerd[1598]: time="2026-01-23T01:44:20.430924649Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 01:44:20.431022 containerd[1598]: time="2026-01-23T01:44:20.430940850Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 01:44:20.431022 containerd[1598]: time="2026-01-23T01:44:20.430956602Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 01:44:20.431022 containerd[1598]: time="2026-01-23T01:44:20.430985097Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431150969Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431212507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431237021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431259530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431276783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431303348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431322745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431338427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431355850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431371900Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 01:44:20.431448 containerd[1598]: time="2026-01-23T01:44:20.431387423Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 01:44:20.431779 containerd[1598]: time="2026-01-23T01:44:20.431668588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 01:44:20.431779 containerd[1598]: time="2026-01-23T01:44:20.431697981Z" level=info msg="Start snapshots syncer"
Jan 23 01:44:20.431779 containerd[1598]: time="2026-01-23T01:44:20.431749053Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 01:44:20.432591 containerd[1598]: time="2026-01-23T01:44:20.432538365Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 01:44:20.432833 containerd[1598]: time="2026-01-23T01:44:20.432615338Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 01:44:20.432833 containerd[1598]: time="2026-01-23T01:44:20.432724415Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.432959260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.432995007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433013363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433044500Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433078269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433118026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433134751Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433188787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433229946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433251256Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433321207Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433346089Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 01:44:20.433648 containerd[1598]: time="2026-01-23T01:44:20.433461851Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 01:44:20.434117 containerd[1598]: time="2026-01-23T01:44:20.433490559Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 01:44:20.434117 containerd[1598]: time="2026-01-23T01:44:20.433503671Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 01:44:20.434117 containerd[1598]: time="2026-01-23T01:44:20.433550671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 01:44:20.434117 containerd[1598]: time="2026-01-23T01:44:20.433586398Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 01:44:20.434117 containerd[1598]: time="2026-01-23T01:44:20.433629549Z" level=info msg="runtime interface created"
Jan 23 01:44:20.434117 containerd[1598]: time="2026-01-23T01:44:20.433641487Z" level=info msg="created NRI interface"
Jan 23 01:44:20.434117 containerd[1598]: time="2026-01-23T01:44:20.433658526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 01:44:20.434117 containerd[1598]: time="2026-01-23T01:44:20.433685127Z" level=info msg="Connect containerd service"
Jan 23 01:44:20.434117 containerd[1598]: time="2026-01-23T01:44:20.433744672Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 01:44:20.436489 containerd[1598]: time="2026-01-23T01:44:20.435414589Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 01:44:20.469529 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 01:44:20.470114 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 01:44:20.477119 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 01:44:20.480655 systemd[1]: Started sshd@0-10.230.49.206:22-20.161.92.111:55156.service - OpenSSH per-connection server daemon (20.161.92.111:55156).
Jan 23 01:44:20.530028 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 01:44:20.530921 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 01:44:20.537019 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 01:44:20.586717 polkitd[1636]: Started polkitd version 126
Jan 23 01:44:20.606095 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 01:44:20.611459 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 01:44:20.617270 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 01:44:20.618403 systemd[1]: Reached target getty.target - Login Prompts.
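The containerd error "no network config found in /etc/cni/net.d" above is expected on a fresh node: the CRI plugin's CNI syncer finds the directory empty until a network add-on (or the operator) installs a conflist. For reference, a minimal bridge conflist of the kind the loader expects looks like the following; the name, bridge device, and subnet are illustrative values, not taken from this host.

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into /etc/cni/net.d (per the `confDir` shown in the cri plugin config above), a file like this would clear the error on the next CNI config sync.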
Jan 23 01:44:20.632014 polkitd[1636]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 01:44:20.632469 polkitd[1636]: Loading rules from directory /run/polkit-1/rules.d
Jan 23 01:44:20.632545 polkitd[1636]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jan 23 01:44:20.632917 polkitd[1636]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jan 23 01:44:20.632953 polkitd[1636]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jan 23 01:44:20.633003 polkitd[1636]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 01:44:20.642778 polkitd[1636]: Finished loading, compiling and executing 2 rules
Jan 23 01:44:20.643237 systemd[1]: Started polkit.service - Authorization Manager.
Jan 23 01:44:20.644703 dbus-daemon[1559]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 23 01:44:20.648493 polkitd[1636]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 23 01:44:20.661107 containerd[1598]: time="2026-01-23T01:44:20.661006149Z" level=info msg="Start subscribing containerd event"
Jan 23 01:44:20.661217 containerd[1598]: time="2026-01-23T01:44:20.661082397Z" level=info msg="Start recovering state"
Jan 23 01:44:20.661302 containerd[1598]: time="2026-01-23T01:44:20.661276762Z" level=info msg="Start event monitor"
Jan 23 01:44:20.661363 containerd[1598]: time="2026-01-23T01:44:20.661307083Z" level=info msg="Start cni network conf syncer for default"
Jan 23 01:44:20.661363 containerd[1598]: time="2026-01-23T01:44:20.661323923Z" level=info msg="Start streaming server"
Jan 23 01:44:20.661363 containerd[1598]: time="2026-01-23T01:44:20.661343949Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 23 01:44:20.661363 containerd[1598]: time="2026-01-23T01:44:20.661355720Z" level=info msg="runtime interface starting up..."
Jan 23 01:44:20.661521 containerd[1598]: time="2026-01-23T01:44:20.661369972Z" level=info msg="starting plugins..."
Jan 23 01:44:20.661521 containerd[1598]: time="2026-01-23T01:44:20.661392520Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 01:44:20.662003 containerd[1598]: time="2026-01-23T01:44:20.661962116Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 01:44:20.662426 containerd[1598]: time="2026-01-23T01:44:20.662394461Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 01:44:20.663750 containerd[1598]: time="2026-01-23T01:44:20.662526773Z" level=info msg="containerd successfully booted in 0.257337s"
Jan 23 01:44:20.663029 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 01:44:20.680245 systemd-hostnamed[1607]: Hostname set to (static)
Jan 23 01:44:20.749263 systemd-networkd[1479]: eth0: Gained IPv6LL
Jan 23 01:44:20.752899 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:44:20.755682 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 01:44:20.758640 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 01:44:20.765413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:44:20.769198 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 01:44:20.802648 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 01:44:20.856981 tar[1583]: linux-amd64/README.md
Jan 23 01:44:20.875290 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 01:44:21.112367 sshd[1658]: Accepted publickey for core from 20.161.92.111 port 55156 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:44:21.114338 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:44:21.136257 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 01:44:21.140770 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 01:44:21.149430 systemd-logind[1570]: New session 1 of user core.
Jan 23 01:44:21.169625 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 01:44:21.175753 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 01:44:21.193674 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 01:44:21.202041 systemd-logind[1570]: New session c1 of user core.
Jan 23 01:44:21.395062 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:44:21.397839 systemd[1703]: Queued start job for default target default.target.
Jan 23 01:44:21.405852 systemd[1703]: Created slice app.slice - User Application Slice.
Jan 23 01:44:21.406719 systemd[1703]: Reached target paths.target - Paths.
Jan 23 01:44:21.406804 systemd[1703]: Reached target timers.target - Timers.
Jan 23 01:44:21.409859 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 01:44:21.447778 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 01:44:21.448004 systemd[1703]: Reached target sockets.target - Sockets.
Jan 23 01:44:21.448084 systemd[1703]: Reached target basic.target - Basic System.
Jan 23 01:44:21.448176 systemd[1703]: Reached target default.target - Main User Target.
Jan 23 01:44:21.448241 systemd[1703]: Startup finished in 228ms.
Jan 23 01:44:21.448464 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 01:44:21.458598 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 01:44:21.819013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:44:21.832567 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:44:21.881068 systemd[1]: Started sshd@1-10.230.49.206:22-20.161.92.111:55168.service - OpenSSH per-connection server daemon (20.161.92.111:55168).
Jan 23 01:44:21.935172 systemd-networkd[1479]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c73:24:19ff:fee6:31ce/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c73:24:19ff:fee6:31ce/64 assigned by NDisc.
Jan 23 01:44:21.935185 systemd-networkd[1479]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 23 01:44:22.456752 kubelet[1719]: E0123 01:44:22.456614 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:44:22.459722 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:44:22.460013 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:44:22.460923 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 266.7M memory peak.
Jan 23 01:44:22.461095 sshd[1721]: Accepted publickey for core from 20.161.92.111 port 55168 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:44:22.462785 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:44:22.469747 systemd-logind[1570]: New session 2 of user core.
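The kubelet exit above (status 1, `open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal state of a node where `kubeadm init` or `kubeadm join` has not yet run: kubeadm writes that config file, and until it exists the unit will keep crash-looping on systemd's restart schedule. A hedged sketch of the same pre-flight check, using the path from the log:

```shell
# Probe the file whose absence makes kubelet exit with status 1 in the log.
# /var/lib/kubelet/config.yaml is normally written by kubeadm init/join.
cfg="/var/lib/kubelet/config.yaml"
if [ -r "$cfg" ]; then
  status="present"
else
  status="missing"   # kubelet will keep failing until kubeadm writes it
fi
echo "kubelet config $status: $cfg"
```

Running this on the node before joining it to a cluster would report the file as missing, matching the failure and the "Scheduled restart job" entry that follows later in the log.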
Jan 23 01:44:22.484684 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 01:44:22.767914 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:44:22.863910 sshd[1731]: Connection closed by 20.161.92.111 port 55168
Jan 23 01:44:22.864816 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
Jan 23 01:44:22.870186 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit.
Jan 23 01:44:22.871157 systemd[1]: sshd@1-10.230.49.206:22-20.161.92.111:55168.service: Deactivated successfully.
Jan 23 01:44:22.873935 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 01:44:22.876520 systemd-logind[1570]: Removed session 2.
Jan 23 01:44:22.968188 systemd[1]: Started sshd@2-10.230.49.206:22-20.161.92.111:41350.service - OpenSSH per-connection server daemon (20.161.92.111:41350).
Jan 23 01:44:23.423921 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:44:23.555764 sshd[1738]: Accepted publickey for core from 20.161.92.111 port 41350 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:44:23.557619 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:44:23.564535 systemd-logind[1570]: New session 3 of user core.
Jan 23 01:44:23.581228 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 01:44:23.966197 sshd[1742]: Connection closed by 20.161.92.111 port 41350
Jan 23 01:44:23.967233 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Jan 23 01:44:23.973549 systemd[1]: sshd@2-10.230.49.206:22-20.161.92.111:41350.service: Deactivated successfully.
Jan 23 01:44:23.976814 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 01:44:23.978429 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit.
Jan 23 01:44:23.980735 systemd-logind[1570]: Removed session 3.
Jan 23 01:44:25.711552 login[1680]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 23 01:44:25.720122 systemd-logind[1570]: New session 4 of user core.
Jan 23 01:44:25.723091 login[1678]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 23 01:44:25.735301 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 01:44:25.746479 systemd-logind[1570]: New session 5 of user core.
Jan 23 01:44:25.756230 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 01:44:26.780910 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:44:26.790533 coreos-metadata[1558]: Jan 23 01:44:26.790 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 01:44:26.815533 coreos-metadata[1558]: Jan 23 01:44:26.815 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 23 01:44:26.824632 coreos-metadata[1558]: Jan 23 01:44:26.824 INFO Fetch failed with 404: resource not found
Jan 23 01:44:26.824632 coreos-metadata[1558]: Jan 23 01:44:26.824 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 23 01:44:26.825074 coreos-metadata[1558]: Jan 23 01:44:26.825 INFO Fetch successful
Jan 23 01:44:26.825212 coreos-metadata[1558]: Jan 23 01:44:26.825 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 23 01:44:26.839585 coreos-metadata[1558]: Jan 23 01:44:26.839 INFO Fetch successful
Jan 23 01:44:26.840148 coreos-metadata[1558]: Jan 23 01:44:26.840 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 23 01:44:26.858145 coreos-metadata[1558]: Jan 23 01:44:26.858 INFO Fetch successful
Jan 23 01:44:26.858298 coreos-metadata[1558]: Jan 23 01:44:26.858 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 23 01:44:26.873320 coreos-metadata[1558]: Jan 23 01:44:26.873 INFO Fetch successful
Jan 23 01:44:26.873468 coreos-metadata[1558]: Jan 23 01:44:26.873 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 23 01:44:26.893117 coreos-metadata[1558]: Jan 23 01:44:26.892 INFO Fetch successful
Jan 23 01:44:26.937096 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 01:44:26.939042 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 01:44:27.441928 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 01:44:27.456523 coreos-metadata[1635]: Jan 23 01:44:27.456 WARN failed to locate config-drive, using the metadata service API instead
Jan 23 01:44:27.477918 coreos-metadata[1635]: Jan 23 01:44:27.477 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 23 01:44:27.504183 coreos-metadata[1635]: Jan 23 01:44:27.504 INFO Fetch successful
Jan 23 01:44:27.504410 coreos-metadata[1635]: Jan 23 01:44:27.504 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 23 01:44:27.533444 coreos-metadata[1635]: Jan 23 01:44:27.533 INFO Fetch successful
Jan 23 01:44:27.535861 unknown[1635]: wrote ssh authorized keys file for user: core
Jan 23 01:44:27.570120 update-ssh-keys[1782]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 01:44:27.572272 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 01:44:27.574756 systemd[1]: Finished sshkeys.service.
Jan 23 01:44:27.579037 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 01:44:27.579594 systemd[1]: Startup finished in 3.436s (kernel) + 14.526s (initrd) + 11.740s (userspace) = 29.702s.
Jan 23 01:44:32.679566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 01:44:32.683184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:44:32.962432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:44:32.974512 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:44:33.031815 kubelet[1793]: E0123 01:44:33.031734 1793 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:44:33.035683 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:44:33.035961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:44:33.036461 systemd[1]: kubelet.service: Consumed 221ms CPU time, 108.4M memory peak.
Jan 23 01:44:34.066616 systemd[1]: Started sshd@3-10.230.49.206:22-20.161.92.111:60986.service - OpenSSH per-connection server daemon (20.161.92.111:60986).
Jan 23 01:44:34.648265 sshd[1801]: Accepted publickey for core from 20.161.92.111 port 60986 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:44:34.649945 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:44:34.656922 systemd-logind[1570]: New session 6 of user core.
Jan 23 01:44:34.664070 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 01:44:35.049807 sshd[1804]: Connection closed by 20.161.92.111 port 60986
Jan 23 01:44:35.050640 sshd-session[1801]: pam_unix(sshd:session): session closed for user core
Jan 23 01:44:35.055460 systemd[1]: sshd@3-10.230.49.206:22-20.161.92.111:60986.service: Deactivated successfully.
Jan 23 01:44:35.057819 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 01:44:35.058926 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit.
Jan 23 01:44:35.060687 systemd-logind[1570]: Removed session 6.
Jan 23 01:44:35.154936 systemd[1]: Started sshd@4-10.230.49.206:22-20.161.92.111:60996.service - OpenSSH per-connection server daemon (20.161.92.111:60996).
Jan 23 01:44:35.735137 sshd[1810]: Accepted publickey for core from 20.161.92.111 port 60996 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:44:35.736731 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:44:35.743247 systemd-logind[1570]: New session 7 of user core.
Jan 23 01:44:35.751078 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 01:44:36.132262 sshd[1813]: Connection closed by 20.161.92.111 port 60996
Jan 23 01:44:36.133007 sshd-session[1810]: pam_unix(sshd:session): session closed for user core
Jan 23 01:44:36.138027 systemd[1]: sshd@4-10.230.49.206:22-20.161.92.111:60996.service: Deactivated successfully.
Jan 23 01:44:36.140246 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 01:44:36.143023 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit.
Jan 23 01:44:36.144583 systemd-logind[1570]: Removed session 7.
Jan 23 01:44:36.236157 systemd[1]: Started sshd@5-10.230.49.206:22-20.161.92.111:32772.service - OpenSSH per-connection server daemon (20.161.92.111:32772).
Jan 23 01:44:36.816365 sshd[1819]: Accepted publickey for core from 20.161.92.111 port 32772 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:44:36.817996 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:44:36.824204 systemd-logind[1570]: New session 8 of user core.
Jan 23 01:44:36.833110 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 01:44:37.217086 sshd[1822]: Connection closed by 20.161.92.111 port 32772
Jan 23 01:44:37.218132 sshd-session[1819]: pam_unix(sshd:session): session closed for user core
Jan 23 01:44:37.224650 systemd[1]: sshd@5-10.230.49.206:22-20.161.92.111:32772.service: Deactivated successfully.
Jan 23 01:44:37.227179 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 01:44:37.228443 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit.
Jan 23 01:44:37.230354 systemd-logind[1570]: Removed session 8.
Jan 23 01:44:37.322972 systemd[1]: Started sshd@6-10.230.49.206:22-20.161.92.111:32788.service - OpenSSH per-connection server daemon (20.161.92.111:32788).
Jan 23 01:44:37.916855 sshd[1828]: Accepted publickey for core from 20.161.92.111 port 32788 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:44:37.918821 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:44:37.925941 systemd-logind[1570]: New session 9 of user core.
Jan 23 01:44:37.933116 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 01:44:38.243096 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 01:44:38.244113 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:44:38.261269 sudo[1832]: pam_unix(sudo:session): session closed for user root
Jan 23 01:44:38.350138 sshd[1831]: Connection closed by 20.161.92.111 port 32788
Jan 23 01:44:38.351156 sshd-session[1828]: pam_unix(sshd:session): session closed for user core
Jan 23 01:44:38.356800 systemd[1]: sshd@6-10.230.49.206:22-20.161.92.111:32788.service: Deactivated successfully.
Jan 23 01:44:38.359113 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 01:44:38.360350 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit.
Jan 23 01:44:38.363036 systemd-logind[1570]: Removed session 9.
Jan 23 01:44:38.456355 systemd[1]: Started sshd@7-10.230.49.206:22-20.161.92.111:32792.service - OpenSSH per-connection server daemon (20.161.92.111:32792).
Jan 23 01:44:39.049357 sshd[1838]: Accepted publickey for core from 20.161.92.111 port 32792 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:44:39.051291 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:44:39.058499 systemd-logind[1570]: New session 10 of user core.
Jan 23 01:44:39.067098 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 01:44:39.364848 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 01:44:39.365301 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:44:39.372498 sudo[1843]: pam_unix(sudo:session): session closed for user root
Jan 23 01:44:39.380144 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 01:44:39.380575 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:44:39.394017 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 01:44:39.439356 augenrules[1865]: No rules
Jan 23 01:44:39.440176 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 01:44:39.440573 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 01:44:39.442019 sudo[1842]: pam_unix(sudo:session): session closed for user root
Jan 23 01:44:39.531605 sshd[1841]: Connection closed by 20.161.92.111 port 32792
Jan 23 01:44:39.532441 sshd-session[1838]: pam_unix(sshd:session): session closed for user core
Jan 23 01:44:39.538082 systemd[1]: sshd@7-10.230.49.206:22-20.161.92.111:32792.service: Deactivated successfully.
Jan 23 01:44:39.540715 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 01:44:39.541768 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit.
Jan 23 01:44:39.543624 systemd-logind[1570]: Removed session 10.
Jan 23 01:44:39.632513 systemd[1]: Started sshd@8-10.230.49.206:22-20.161.92.111:32796.service - OpenSSH per-connection server daemon (20.161.92.111:32796).
Jan 23 01:44:40.206639 sshd[1874]: Accepted publickey for core from 20.161.92.111 port 32796 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8
Jan 23 01:44:40.208255 sshd-session[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:44:40.214423 systemd-logind[1570]: New session 11 of user core.
Jan 23 01:44:40.226177 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 01:44:40.520236 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 01:44:40.520653 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:44:41.024040 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 01:44:41.046442 (dockerd)[1896]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 01:44:41.411106 dockerd[1896]: time="2026-01-23T01:44:41.410944656Z" level=info msg="Starting up"
Jan 23 01:44:41.414811 dockerd[1896]: time="2026-01-23T01:44:41.414193081Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 01:44:41.431541 dockerd[1896]: time="2026-01-23T01:44:41.431497511Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 01:44:41.449768 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport531479616-merged.mount: Deactivated successfully.
Jan 23 01:44:41.463225 systemd[1]: var-lib-docker-metacopy\x2dcheck3389194180-merged.mount: Deactivated successfully.
Jan 23 01:44:41.485571 dockerd[1896]: time="2026-01-23T01:44:41.485494056Z" level=info msg="Loading containers: start."
Jan 23 01:44:41.497907 kernel: Initializing XFRM netlink socket
Jan 23 01:44:41.825025 systemd-networkd[1479]: docker0: Link UP
Jan 23 01:44:41.833896 dockerd[1896]: time="2026-01-23T01:44:41.833198734Z" level=info msg="Loading containers: done."
Jan 23 01:44:41.854020 dockerd[1896]: time="2026-01-23T01:44:41.852257558Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 01:44:41.854020 dockerd[1896]: time="2026-01-23T01:44:41.852405874Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 01:44:41.854020 dockerd[1896]: time="2026-01-23T01:44:41.852516381Z" level=info msg="Initializing buildkit"
Jan 23 01:44:41.880710 dockerd[1896]: time="2026-01-23T01:44:41.880647172Z" level=info msg="Completed buildkit initialization"
Jan 23 01:44:41.889554 dockerd[1896]: time="2026-01-23T01:44:41.889512416Z" level=info msg="Daemon has completed initialization"
Jan 23 01:44:41.891446 dockerd[1896]: time="2026-01-23T01:44:41.889665697Z" level=info msg="API listen on /run/docker.sock"
Jan 23 01:44:41.890890 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 01:44:42.447849 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck720206654-merged.mount: Deactivated successfully.
Jan 23 01:44:42.979981 containerd[1598]: time="2026-01-23T01:44:42.979852250Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 23 01:44:43.179228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 01:44:43.181649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:44:43.358434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:44:43.368653 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:44:43.456285 kubelet[2120]: E0123 01:44:43.456230 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:44:43.458551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:44:43.458801 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:44:43.459494 systemd[1]: kubelet.service: Consumed 199ms CPU time, 108.2M memory peak.
Jan 23 01:44:43.853208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108145340.mount: Deactivated successfully.
Jan 23 01:44:46.944972 containerd[1598]: time="2026-01-23T01:44:46.944888734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:46.946806 containerd[1598]: time="2026-01-23T01:44:46.946510589Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655"
Jan 23 01:44:46.947630 containerd[1598]: time="2026-01-23T01:44:46.947583148Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:46.951385 containerd[1598]: time="2026-01-23T01:44:46.951349423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:46.953020 containerd[1598]: time="2026-01-23T01:44:46.952985574Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.972316035s"
Jan 23 01:44:46.953147 containerd[1598]: time="2026-01-23T01:44:46.953120388Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\""
Jan 23 01:44:46.954526 containerd[1598]: time="2026-01-23T01:44:46.954474409Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 23 01:44:49.177174 containerd[1598]: time="2026-01-23T01:44:49.177023353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:49.180290 containerd[1598]: time="2026-01-23T01:44:49.180257124Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362"
Jan 23 01:44:49.180923 containerd[1598]: time="2026-01-23T01:44:49.180869937Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:49.186929 containerd[1598]: time="2026-01-23T01:44:49.184747060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:49.186929 containerd[1598]: time="2026-01-23T01:44:49.186065096Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.231549013s"
Jan 23 01:44:49.186929 containerd[1598]: time="2026-01-23T01:44:49.186713057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 23 01:44:49.188097 containerd[1598]: time="2026-01-23T01:44:49.188041871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 23 01:44:51.034474 containerd[1598]: time="2026-01-23T01:44:51.034332481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:51.035901 containerd[1598]: time="2026-01-23T01:44:51.035651706Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084"
Jan 23 01:44:51.036612 containerd[1598]: time="2026-01-23T01:44:51.036569300Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:51.039769 containerd[1598]: time="2026-01-23T01:44:51.039737562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:51.041215 containerd[1598]: time="2026-01-23T01:44:51.041167387Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.852964101s"
Jan 23 01:44:51.041300 containerd[1598]: time="2026-01-23T01:44:51.041215827Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\""
Jan 23 01:44:51.041949 containerd[1598]: time="2026-01-23T01:44:51.041832255Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 23 01:44:51.976499 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 01:44:53.233479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533512045.mount: Deactivated successfully.
Jan 23 01:44:53.679905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 01:44:53.683850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:44:53.898329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:44:53.909309 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:44:53.988799 kubelet[2212]: E0123 01:44:53.988571 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:44:53.993606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:44:53.994361 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:44:53.995739 systemd[1]: kubelet.service: Consumed 207ms CPU time, 108.6M memory peak.
Jan 23 01:44:54.163451 containerd[1598]: time="2026-01-23T01:44:54.163370415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:54.164912 containerd[1598]: time="2026-01-23T01:44:54.164713650Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907"
Jan 23 01:44:54.166207 containerd[1598]: time="2026-01-23T01:44:54.165461868Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:54.167892 containerd[1598]: time="2026-01-23T01:44:54.167690957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:54.169075 containerd[1598]: time="2026-01-23T01:44:54.169026371Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.127142938s"
Jan 23 01:44:54.169215 containerd[1598]: time="2026-01-23T01:44:54.169189521Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\""
Jan 23 01:44:54.170059 containerd[1598]: time="2026-01-23T01:44:54.169922667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 23 01:44:54.721323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237437841.mount: Deactivated successfully.
Jan 23 01:44:55.938913 containerd[1598]: time="2026-01-23T01:44:55.938823004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:55.940209 containerd[1598]: time="2026-01-23T01:44:55.940082861Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jan 23 01:44:55.940921 containerd[1598]: time="2026-01-23T01:44:55.940887436Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:55.945266 containerd[1598]: time="2026-01-23T01:44:55.944444780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:44:55.946080 containerd[1598]: time="2026-01-23T01:44:55.946044232Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.776075214s"
Jan 23 01:44:55.946162 containerd[1598]: time="2026-01-23T01:44:55.946083111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 23 01:44:55.947351 containerd[1598]: time="2026-01-23T01:44:55.947324531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 01:44:56.438994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107137925.mount: Deactivated successfully.
Jan 23 01:44:56.444762 containerd[1598]: time="2026-01-23T01:44:56.443842581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 01:44:56.444762 containerd[1598]: time="2026-01-23T01:44:56.444727514Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 23 01:44:56.445248 containerd[1598]: time="2026-01-23T01:44:56.445213623Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 01:44:56.447716 containerd[1598]: time="2026-01-23T01:44:56.447667229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 01:44:56.448773 containerd[1598]: time="2026-01-23T01:44:56.448730899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 501.227116ms"
Jan 23 01:44:56.448918 containerd[1598]: time="2026-01-23T01:44:56.448894387Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 23 01:44:56.449975 containerd[1598]: time="2026-01-23T01:44:56.449940590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 23 01:44:57.008013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479445349.mount: Deactivated successfully.
Jan 23 01:45:01.499123 containerd[1598]: time="2026-01-23T01:45:01.499047770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:45:01.502425 containerd[1598]: time="2026-01-23T01:45:01.502390775Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064"
Jan 23 01:45:01.503249 containerd[1598]: time="2026-01-23T01:45:01.503194821Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:45:01.511917 containerd[1598]: time="2026-01-23T01:45:01.509958369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:45:01.512093 containerd[1598]: time="2026-01-23T01:45:01.511397766Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.061415403s"
Jan 23 01:45:01.512212 containerd[1598]: time="2026-01-23T01:45:01.512187488Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 23 01:45:04.179474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 23 01:45:04.185085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:45:04.449067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:45:04.460330 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 01:45:04.509311 kubelet[2357]: E0123 01:45:04.509247 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 01:45:04.512237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 01:45:04.512480 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 01:45:04.513370 systemd[1]: kubelet.service: Consumed 192ms CPU time, 110.4M memory peak.
Jan 23 01:45:04.890641 update_engine[1574]: I20260123 01:45:04.889070 1574 update_attempter.cc:509] Updating boot flags...
Jan 23 01:45:05.289894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:45:05.290631 systemd[1]: kubelet.service: Consumed 192ms CPU time, 110.4M memory peak.
Jan 23 01:45:05.295021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:45:05.335227 systemd[1]: Reload requested from client PID 2387 ('systemctl') (unit session-11.scope)... Jan 23 01:45:05.335274 systemd[1]: Reloading... Jan 23 01:45:05.554915 zram_generator::config[2432]: No configuration found. Jan 23 01:45:05.814241 systemd[1]: Reloading finished in 478 ms. Jan 23 01:45:05.885611 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:45:05.885740 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:45:05.886140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:45:05.886216 systemd[1]: kubelet.service: Consumed 133ms CPU time, 97.6M memory peak. Jan 23 01:45:05.888510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:45:06.062813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:45:06.081138 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:45:06.188579 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:45:06.189914 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:45:06.189914 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 01:45:06.189914 kubelet[2499]: I0123 01:45:06.189287 2499 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:45:06.850908 kubelet[2499]: I0123 01:45:06.850008 2499 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:45:06.850908 kubelet[2499]: I0123 01:45:06.850081 2499 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:45:06.850908 kubelet[2499]: I0123 01:45:06.850450 2499 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:45:06.883760 kubelet[2499]: I0123 01:45:06.883725 2499 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:45:06.884724 kubelet[2499]: E0123 01:45:06.884684 2499 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.49.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.49.206:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:45:06.905799 kubelet[2499]: I0123 01:45:06.905764 2499 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:45:06.913997 kubelet[2499]: I0123 01:45:06.913974 2499 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:45:06.917316 kubelet[2499]: I0123 01:45:06.917259 2499 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:45:06.917741 kubelet[2499]: I0123 01:45:06.917437 2499 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-idwud.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:45:06.919767 kubelet[2499]: I0123 01:45:06.919741 2499 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 23 01:45:06.919907 kubelet[2499]: I0123 01:45:06.919888 2499 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:45:06.921477 kubelet[2499]: I0123 01:45:06.921195 2499 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:45:06.924624 kubelet[2499]: I0123 01:45:06.924603 2499 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:45:06.924793 kubelet[2499]: I0123 01:45:06.924772 2499 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:45:06.926499 kubelet[2499]: I0123 01:45:06.926477 2499 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:45:06.926653 kubelet[2499]: I0123 01:45:06.926633 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:45:06.929917 kubelet[2499]: W0123 01:45:06.929844 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.49.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-idwud.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.49.206:6443: connect: connection refused Jan 23 01:45:06.929998 kubelet[2499]: E0123 01:45:06.929935 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.49.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-idwud.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.49.206:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:45:06.930462 kubelet[2499]: W0123 01:45:06.930400 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.49.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.49.206:6443: connect: connection refused Jan 23 01:45:06.930578 kubelet[2499]: E0123 01:45:06.930462 2499 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.49.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.49.206:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:45:06.932892 kubelet[2499]: I0123 01:45:06.932029 2499 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:45:06.935380 kubelet[2499]: I0123 01:45:06.935289 2499 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 01:45:06.936079 kubelet[2499]: W0123 01:45:06.936040 2499 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:45:06.938968 kubelet[2499]: I0123 01:45:06.938940 2499 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:45:06.939089 kubelet[2499]: I0123 01:45:06.939028 2499 server.go:1287] "Started kubelet" Jan 23 01:45:06.941133 kubelet[2499]: I0123 01:45:06.941065 2499 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:45:06.942547 kubelet[2499]: I0123 01:45:06.942501 2499 server.go:479] "Adding debug handlers to kubelet server" Jan 23 01:45:06.945430 kubelet[2499]: I0123 01:45:06.945370 2499 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:45:06.945778 kubelet[2499]: I0123 01:45:06.945752 2499 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:45:06.947333 kubelet[2499]: I0123 01:45:06.947309 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:45:06.955895 kubelet[2499]: E0123 01:45:06.947442 2499 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.49.206:6443/api/v1/namespaces/default/events\": dial 
tcp 10.230.49.206:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-idwud.gb1.brightbox.com.188d38d2dd9f4d58 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-idwud.gb1.brightbox.com,UID:srv-idwud.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-idwud.gb1.brightbox.com,},FirstTimestamp:2026-01-23 01:45:06.938965336 +0000 UTC m=+0.849685553,LastTimestamp:2026-01-23 01:45:06.938965336 +0000 UTC m=+0.849685553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-idwud.gb1.brightbox.com,}" Jan 23 01:45:06.955895 kubelet[2499]: I0123 01:45:06.953188 2499 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:45:06.963348 kubelet[2499]: I0123 01:45:06.963321 2499 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:45:06.963666 kubelet[2499]: E0123 01:45:06.963626 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-idwud.gb1.brightbox.com\" not found" Jan 23 01:45:06.964065 kubelet[2499]: I0123 01:45:06.964022 2499 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:45:06.964138 kubelet[2499]: I0123 01:45:06.964120 2499 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:45:06.967257 kubelet[2499]: I0123 01:45:06.967232 2499 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:45:06.967366 kubelet[2499]: I0123 01:45:06.967337 2499 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:45:06.991840 kubelet[2499]: W0123 
01:45:06.990920 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.49.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.49.206:6443: connect: connection refused Jan 23 01:45:06.991840 kubelet[2499]: E0123 01:45:06.990988 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.49.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.49.206:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:45:06.991840 kubelet[2499]: E0123 01:45:06.991095 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-idwud.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.206:6443: connect: connection refused" interval="200ms" Jan 23 01:45:06.994128 kubelet[2499]: E0123 01:45:06.994099 2499 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:45:06.997809 kubelet[2499]: I0123 01:45:06.997786 2499 factory.go:221] Registration of the containerd container factory successfully Jan 23 01:45:07.012348 kubelet[2499]: I0123 01:45:07.012263 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:45:07.013915 kubelet[2499]: I0123 01:45:07.013863 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:45:07.013995 kubelet[2499]: I0123 01:45:07.013920 2499 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:45:07.013995 kubelet[2499]: I0123 01:45:07.013956 2499 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 01:45:07.013995 kubelet[2499]: I0123 01:45:07.013969 2499 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:45:07.014137 kubelet[2499]: E0123 01:45:07.014047 2499 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:45:07.024890 kubelet[2499]: W0123 01:45:07.024816 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.49.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.49.206:6443: connect: connection refused Jan 23 01:45:07.025085 kubelet[2499]: E0123 01:45:07.024988 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.49.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.49.206:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:45:07.041977 kubelet[2499]: I0123 01:45:07.041941 2499 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:45:07.041977 kubelet[2499]: I0123 01:45:07.041973 2499 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:45:07.042137 kubelet[2499]: I0123 01:45:07.042002 2499 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:45:07.047089 kubelet[2499]: I0123 01:45:07.047054 2499 policy_none.go:49] "None policy: Start" Jan 23 01:45:07.047187 kubelet[2499]: I0123 01:45:07.047093 2499 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:45:07.047187 kubelet[2499]: I0123 01:45:07.047120 2499 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:45:07.055113 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 23 01:45:07.064193 kubelet[2499]: E0123 01:45:07.063910 2499 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-idwud.gb1.brightbox.com\" not found" Jan 23 01:45:07.066948 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:45:07.072466 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:45:07.085234 kubelet[2499]: I0123 01:45:07.085210 2499 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:45:07.085626 kubelet[2499]: I0123 01:45:07.085605 2499 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:45:07.086122 kubelet[2499]: I0123 01:45:07.086050 2499 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:45:07.086452 kubelet[2499]: I0123 01:45:07.086423 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:45:07.089252 kubelet[2499]: E0123 01:45:07.088823 2499 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:45:07.089252 kubelet[2499]: E0123 01:45:07.088941 2499 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-idwud.gb1.brightbox.com\" not found" Jan 23 01:45:07.129005 systemd[1]: Created slice kubepods-burstable-pod11dcf35efb3ed98397aaf4fb0ab4e51c.slice - libcontainer container kubepods-burstable-pod11dcf35efb3ed98397aaf4fb0ab4e51c.slice. 
Jan 23 01:45:07.145247 kubelet[2499]: E0123 01:45:07.145161 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.150075 systemd[1]: Created slice kubepods-burstable-pod1e5462fd7171293c77dd7edd53d68654.slice - libcontainer container kubepods-burstable-pod1e5462fd7171293c77dd7edd53d68654.slice. Jan 23 01:45:07.153832 kubelet[2499]: E0123 01:45:07.153514 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.158352 systemd[1]: Created slice kubepods-burstable-pod387db5b335fc290d750b21c37771898c.slice - libcontainer container kubepods-burstable-pod387db5b335fc290d750b21c37771898c.slice. Jan 23 01:45:07.160654 kubelet[2499]: E0123 01:45:07.160630 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.166245 kubelet[2499]: I0123 01:45:07.165930 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-ca-certs\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.166245 kubelet[2499]: I0123 01:45:07.165975 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-flexvolume-dir\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " 
pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.166245 kubelet[2499]: I0123 01:45:07.166023 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-k8s-certs\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.166245 kubelet[2499]: I0123 01:45:07.166069 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-kubeconfig\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.166245 kubelet[2499]: I0123 01:45:07.166103 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.166511 kubelet[2499]: I0123 01:45:07.166129 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/387db5b335fc290d750b21c37771898c-kubeconfig\") pod \"kube-scheduler-srv-idwud.gb1.brightbox.com\" (UID: \"387db5b335fc290d750b21c37771898c\") " pod="kube-system/kube-scheduler-srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.166511 kubelet[2499]: I0123 01:45:07.166152 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11dcf35efb3ed98397aaf4fb0ab4e51c-k8s-certs\") pod \"kube-apiserver-srv-idwud.gb1.brightbox.com\" (UID: \"11dcf35efb3ed98397aaf4fb0ab4e51c\") " pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.166511 kubelet[2499]: I0123 01:45:07.166178 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11dcf35efb3ed98397aaf4fb0ab4e51c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-idwud.gb1.brightbox.com\" (UID: \"11dcf35efb3ed98397aaf4fb0ab4e51c\") " pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.166511 kubelet[2499]: I0123 01:45:07.166202 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11dcf35efb3ed98397aaf4fb0ab4e51c-ca-certs\") pod \"kube-apiserver-srv-idwud.gb1.brightbox.com\" (UID: \"11dcf35efb3ed98397aaf4fb0ab4e51c\") " pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.188827 kubelet[2499]: I0123 01:45:07.188788 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.189412 kubelet[2499]: E0123 01:45:07.189381 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.206:6443/api/v1/nodes\": dial tcp 10.230.49.206:6443: connect: connection refused" node="srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.191835 kubelet[2499]: E0123 01:45:07.191793 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-idwud.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.206:6443: connect: connection refused" interval="400ms" Jan 23 01:45:07.393518 kubelet[2499]: I0123 01:45:07.393359 2499 kubelet_node_status.go:75] 
"Attempting to register node" node="srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.394118 kubelet[2499]: E0123 01:45:07.394012 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.206:6443/api/v1/nodes\": dial tcp 10.230.49.206:6443: connect: connection refused" node="srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.448963 containerd[1598]: time="2026-01-23T01:45:07.448659758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-idwud.gb1.brightbox.com,Uid:11dcf35efb3ed98397aaf4fb0ab4e51c,Namespace:kube-system,Attempt:0,}" Jan 23 01:45:07.455705 containerd[1598]: time="2026-01-23T01:45:07.455086289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-idwud.gb1.brightbox.com,Uid:1e5462fd7171293c77dd7edd53d68654,Namespace:kube-system,Attempt:0,}" Jan 23 01:45:07.461794 containerd[1598]: time="2026-01-23T01:45:07.461762255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-idwud.gb1.brightbox.com,Uid:387db5b335fc290d750b21c37771898c,Namespace:kube-system,Attempt:0,}" Jan 23 01:45:07.625962 kubelet[2499]: E0123 01:45:07.621692 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-idwud.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.206:6443: connect: connection refused" interval="800ms" Jan 23 01:45:07.626211 containerd[1598]: time="2026-01-23T01:45:07.623087412Z" level=info msg="connecting to shim 92352d6096f68253e7e443b083a20747ab16ac00ea501beb303a6d3d3c09d5d4" address="unix:///run/containerd/s/520fc2e490b6efd7baa98d93fa279b780d7ddff16fd650122a0e4740e63c134e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:45:07.629906 containerd[1598]: time="2026-01-23T01:45:07.629493804Z" level=info msg="connecting to shim 7d4eb338fd2839d814f2aa158a7d00973189fa21162cfe5d191b1bbd8d64d25a" 
address="unix:///run/containerd/s/4c45941512de222742db3e56d86478c84506a713d01f490fd039b52f3bc1baa2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:45:07.637525 containerd[1598]: time="2026-01-23T01:45:07.637480505Z" level=info msg="connecting to shim d4cf263c6c2ac483e89f93861bb54304415f41cce9923bbbdedd2affc7c53ead" address="unix:///run/containerd/s/3ae3cf1f897d6ffb80e8fc95aa0db534f12cb320f15eecd90575f53259b626d9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:45:07.758216 systemd[1]: Started cri-containerd-92352d6096f68253e7e443b083a20747ab16ac00ea501beb303a6d3d3c09d5d4.scope - libcontainer container 92352d6096f68253e7e443b083a20747ab16ac00ea501beb303a6d3d3c09d5d4. Jan 23 01:45:07.773751 systemd[1]: Started cri-containerd-7d4eb338fd2839d814f2aa158a7d00973189fa21162cfe5d191b1bbd8d64d25a.scope - libcontainer container 7d4eb338fd2839d814f2aa158a7d00973189fa21162cfe5d191b1bbd8d64d25a. Jan 23 01:45:07.776931 systemd[1]: Started cri-containerd-d4cf263c6c2ac483e89f93861bb54304415f41cce9923bbbdedd2affc7c53ead.scope - libcontainer container d4cf263c6c2ac483e89f93861bb54304415f41cce9923bbbdedd2affc7c53ead. 
Jan 23 01:45:07.798419 kubelet[2499]: I0123 01:45:07.798207 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.799665 kubelet[2499]: E0123 01:45:07.799454 2499 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.206:6443/api/v1/nodes\": dial tcp 10.230.49.206:6443: connect: connection refused" node="srv-idwud.gb1.brightbox.com" Jan 23 01:45:07.884184 containerd[1598]: time="2026-01-23T01:45:07.884124492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-idwud.gb1.brightbox.com,Uid:11dcf35efb3ed98397aaf4fb0ab4e51c,Namespace:kube-system,Attempt:0,} returns sandbox id \"92352d6096f68253e7e443b083a20747ab16ac00ea501beb303a6d3d3c09d5d4\"" Jan 23 01:45:07.891588 containerd[1598]: time="2026-01-23T01:45:07.890325870Z" level=info msg="CreateContainer within sandbox \"92352d6096f68253e7e443b083a20747ab16ac00ea501beb303a6d3d3c09d5d4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:45:07.923701 containerd[1598]: time="2026-01-23T01:45:07.923654367Z" level=info msg="Container 56f2f20ecfd618e00440b2d42f8b343152d0e4e3686f68069ce4c5792b3dfaeb: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:07.925179 containerd[1598]: time="2026-01-23T01:45:07.924213213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-idwud.gb1.brightbox.com,Uid:1e5462fd7171293c77dd7edd53d68654,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d4eb338fd2839d814f2aa158a7d00973189fa21162cfe5d191b1bbd8d64d25a\"" Jan 23 01:45:07.929158 containerd[1598]: time="2026-01-23T01:45:07.929118129Z" level=info msg="CreateContainer within sandbox \"7d4eb338fd2839d814f2aa158a7d00973189fa21162cfe5d191b1bbd8d64d25a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:45:07.939276 containerd[1598]: time="2026-01-23T01:45:07.939248834Z" level=info msg="Container 
773ae79a286a85159ffcc1e868e940673f57735177fc175f8890be6e1b6d8639: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:07.941524 containerd[1598]: time="2026-01-23T01:45:07.941404262Z" level=info msg="CreateContainer within sandbox \"92352d6096f68253e7e443b083a20747ab16ac00ea501beb303a6d3d3c09d5d4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"56f2f20ecfd618e00440b2d42f8b343152d0e4e3686f68069ce4c5792b3dfaeb\"" Jan 23 01:45:07.942475 kubelet[2499]: W0123 01:45:07.942340 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.49.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.49.206:6443: connect: connection refused Jan 23 01:45:07.942807 kubelet[2499]: E0123 01:45:07.942658 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.49.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.49.206:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:45:07.944070 containerd[1598]: time="2026-01-23T01:45:07.943963377Z" level=info msg="StartContainer for \"56f2f20ecfd618e00440b2d42f8b343152d0e4e3686f68069ce4c5792b3dfaeb\"" Jan 23 01:45:07.948592 containerd[1598]: time="2026-01-23T01:45:07.948495857Z" level=info msg="connecting to shim 56f2f20ecfd618e00440b2d42f8b343152d0e4e3686f68069ce4c5792b3dfaeb" address="unix:///run/containerd/s/520fc2e490b6efd7baa98d93fa279b780d7ddff16fd650122a0e4740e63c134e" protocol=ttrpc version=3 Jan 23 01:45:07.950773 containerd[1598]: time="2026-01-23T01:45:07.950730278Z" level=info msg="CreateContainer within sandbox \"7d4eb338fd2839d814f2aa158a7d00973189fa21162cfe5d191b1bbd8d64d25a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"773ae79a286a85159ffcc1e868e940673f57735177fc175f8890be6e1b6d8639\"" Jan 23 
01:45:07.951185 containerd[1598]: time="2026-01-23T01:45:07.951156550Z" level=info msg="StartContainer for \"773ae79a286a85159ffcc1e868e940673f57735177fc175f8890be6e1b6d8639\"" Jan 23 01:45:07.952385 containerd[1598]: time="2026-01-23T01:45:07.952350678Z" level=info msg="connecting to shim 773ae79a286a85159ffcc1e868e940673f57735177fc175f8890be6e1b6d8639" address="unix:///run/containerd/s/4c45941512de222742db3e56d86478c84506a713d01f490fd039b52f3bc1baa2" protocol=ttrpc version=3 Jan 23 01:45:07.953429 containerd[1598]: time="2026-01-23T01:45:07.953064128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-idwud.gb1.brightbox.com,Uid:387db5b335fc290d750b21c37771898c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4cf263c6c2ac483e89f93861bb54304415f41cce9923bbbdedd2affc7c53ead\"" Jan 23 01:45:07.958005 containerd[1598]: time="2026-01-23T01:45:07.957974620Z" level=info msg="CreateContainer within sandbox \"d4cf263c6c2ac483e89f93861bb54304415f41cce9923bbbdedd2affc7c53ead\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:45:07.967255 containerd[1598]: time="2026-01-23T01:45:07.967206467Z" level=info msg="Container 4a7dd9ec22fd8c5d129240bdd6ff0381907ad7fba6cb94c7a62f573e7ebdb0d4: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:07.982324 systemd[1]: Started cri-containerd-773ae79a286a85159ffcc1e868e940673f57735177fc175f8890be6e1b6d8639.scope - libcontainer container 773ae79a286a85159ffcc1e868e940673f57735177fc175f8890be6e1b6d8639. 
Jan 23 01:45:07.988736 containerd[1598]: time="2026-01-23T01:45:07.988628547Z" level=info msg="CreateContainer within sandbox \"d4cf263c6c2ac483e89f93861bb54304415f41cce9923bbbdedd2affc7c53ead\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a7dd9ec22fd8c5d129240bdd6ff0381907ad7fba6cb94c7a62f573e7ebdb0d4\"" Jan 23 01:45:07.991483 containerd[1598]: time="2026-01-23T01:45:07.989989154Z" level=info msg="StartContainer for \"4a7dd9ec22fd8c5d129240bdd6ff0381907ad7fba6cb94c7a62f573e7ebdb0d4\"" Jan 23 01:45:07.991483 containerd[1598]: time="2026-01-23T01:45:07.991273301Z" level=info msg="connecting to shim 4a7dd9ec22fd8c5d129240bdd6ff0381907ad7fba6cb94c7a62f573e7ebdb0d4" address="unix:///run/containerd/s/3ae3cf1f897d6ffb80e8fc95aa0db534f12cb320f15eecd90575f53259b626d9" protocol=ttrpc version=3 Jan 23 01:45:07.993219 systemd[1]: Started cri-containerd-56f2f20ecfd618e00440b2d42f8b343152d0e4e3686f68069ce4c5792b3dfaeb.scope - libcontainer container 56f2f20ecfd618e00440b2d42f8b343152d0e4e3686f68069ce4c5792b3dfaeb. Jan 23 01:45:08.025071 systemd[1]: Started cri-containerd-4a7dd9ec22fd8c5d129240bdd6ff0381907ad7fba6cb94c7a62f573e7ebdb0d4.scope - libcontainer container 4a7dd9ec22fd8c5d129240bdd6ff0381907ad7fba6cb94c7a62f573e7ebdb0d4. 
Jan 23 01:45:08.135962 containerd[1598]: time="2026-01-23T01:45:08.135878251Z" level=info msg="StartContainer for \"773ae79a286a85159ffcc1e868e940673f57735177fc175f8890be6e1b6d8639\" returns successfully"
Jan 23 01:45:08.136287 containerd[1598]: time="2026-01-23T01:45:08.136255615Z" level=info msg="StartContainer for \"56f2f20ecfd618e00440b2d42f8b343152d0e4e3686f68069ce4c5792b3dfaeb\" returns successfully"
Jan 23 01:45:08.173463 containerd[1598]: time="2026-01-23T01:45:08.173417534Z" level=info msg="StartContainer for \"4a7dd9ec22fd8c5d129240bdd6ff0381907ad7fba6cb94c7a62f573e7ebdb0d4\" returns successfully"
Jan 23 01:45:08.200497 kubelet[2499]: W0123 01:45:08.200198 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.49.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-idwud.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.49.206:6443: connect: connection refused
Jan 23 01:45:08.201588 kubelet[2499]: E0123 01:45:08.201117 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.49.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-idwud.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.49.206:6443: connect: connection refused" logger="UnhandledError"
Jan 23 01:45:08.302267 kubelet[2499]: W0123 01:45:08.301799 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.49.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.49.206:6443: connect: connection refused
Jan 23 01:45:08.302267 kubelet[2499]: E0123 01:45:08.301909 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.49.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.49.206:6443: connect: connection refused" logger="UnhandledError"
Jan 23 01:45:08.396278 kubelet[2499]: W0123 01:45:08.396208 2499 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.49.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.49.206:6443: connect: connection refused
Jan 23 01:45:08.396448 kubelet[2499]: E0123 01:45:08.396290 2499 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.49.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.49.206:6443: connect: connection refused" logger="UnhandledError"
Jan 23 01:45:08.423396 kubelet[2499]: E0123 01:45:08.423352 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-idwud.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.206:6443: connect: connection refused" interval="1.6s"
Jan 23 01:45:08.606047 kubelet[2499]: I0123 01:45:08.605931 2499 kubelet_node_status.go:75] "Attempting to register node" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:09.063484 kubelet[2499]: E0123 01:45:09.063433 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:09.068107 kubelet[2499]: E0123 01:45:09.068073 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:09.071269 kubelet[2499]: E0123 01:45:09.071233 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:10.078921 kubelet[2499]: E0123 01:45:10.077317 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:10.078921 kubelet[2499]: E0123 01:45:10.077538 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:10.079675 kubelet[2499]: E0123 01:45:10.078967 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.078758 kubelet[2499]: E0123 01:45:11.078541 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.079897 kubelet[2499]: E0123 01:45:11.079754 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.140710 kubelet[2499]: E0123 01:45:11.140649 2499 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.292737 kubelet[2499]: E0123 01:45:11.292548 2499 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-idwud.gb1.brightbox.com\" not found" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.371989 kubelet[2499]: I0123 01:45:11.371655 2499 kubelet_node_status.go:78] "Successfully registered node" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.371989 kubelet[2499]: E0123 01:45:11.371710 2499 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-idwud.gb1.brightbox.com\": node \"srv-idwud.gb1.brightbox.com\" not found"
Jan 23 01:45:11.464221 kubelet[2499]: I0123 01:45:11.464173 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.473651 kubelet[2499]: E0123 01:45:11.473606 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-idwud.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.473838 kubelet[2499]: I0123 01:45:11.473634 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.475758 kubelet[2499]: E0123 01:45:11.475732 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.475758 kubelet[2499]: I0123 01:45:11.475760 2499 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.477419 kubelet[2499]: E0123 01:45:11.477367 2499 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-idwud.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:11.933595 kubelet[2499]: I0123 01:45:11.933526 2499 apiserver.go:52] "Watching apiserver"
Jan 23 01:45:11.964534 kubelet[2499]: I0123 01:45:11.964416 2499 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 01:45:13.614412 systemd[1]: Reload requested from client PID 2770 ('systemctl') (unit session-11.scope)...
Jan 23 01:45:13.614436 systemd[1]: Reloading...
Jan 23 01:45:13.743686 zram_generator::config[2815]: No configuration found.
Jan 23 01:45:14.083867 systemd[1]: Reloading finished in 468 ms.
Jan 23 01:45:14.123729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:45:14.135527 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 01:45:14.135851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:45:14.135954 systemd[1]: kubelet.service: Consumed 1.348s CPU time, 127.1M memory peak.
Jan 23 01:45:14.139283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:45:14.413798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:45:14.422411 (kubelet)[2878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 01:45:14.498664 kubelet[2878]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:45:14.498664 kubelet[2878]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 01:45:14.498664 kubelet[2878]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:45:14.499240 kubelet[2878]: I0123 01:45:14.498746 2878 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 01:45:14.510026 kubelet[2878]: I0123 01:45:14.509972 2878 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 01:45:14.510026 kubelet[2878]: I0123 01:45:14.510005 2878 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 01:45:14.510329 kubelet[2878]: I0123 01:45:14.510299 2878 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 01:45:14.514279 kubelet[2878]: I0123 01:45:14.514121 2878 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 01:45:14.517020 kubelet[2878]: I0123 01:45:14.516994 2878 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 01:45:14.522930 kubelet[2878]: I0123 01:45:14.522809 2878 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 01:45:14.528579 kubelet[2878]: I0123 01:45:14.528557 2878 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 01:45:14.528928 kubelet[2878]: I0123 01:45:14.528869 2878 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 01:45:14.529113 kubelet[2878]: I0123 01:45:14.528931 2878 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-idwud.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 01:45:14.529265 kubelet[2878]: I0123 01:45:14.529129 2878 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 01:45:14.529265 kubelet[2878]: I0123 01:45:14.529144 2878 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 01:45:14.529265 kubelet[2878]: I0123 01:45:14.529223 2878 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:45:14.529437 kubelet[2878]: I0123 01:45:14.529418 2878 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 01:45:14.529936 kubelet[2878]: I0123 01:45:14.529454 2878 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 01:45:14.529936 kubelet[2878]: I0123 01:45:14.529492 2878 kubelet.go:352] "Adding apiserver pod source"
Jan 23 01:45:14.529936 kubelet[2878]: I0123 01:45:14.529545 2878 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 01:45:14.531703 kubelet[2878]: I0123 01:45:14.531591 2878 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 01:45:14.532888 kubelet[2878]: I0123 01:45:14.532350 2878 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 01:45:14.534221 kubelet[2878]: I0123 01:45:14.533624 2878 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 01:45:14.534445 kubelet[2878]: I0123 01:45:14.534426 2878 server.go:1287] "Started kubelet"
Jan 23 01:45:14.545733 kubelet[2878]: I0123 01:45:14.543687 2878 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 01:45:14.546592 kubelet[2878]: I0123 01:45:14.546484 2878 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 01:45:14.551069 kubelet[2878]: I0123 01:45:14.550980 2878 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 01:45:14.551304 kubelet[2878]: I0123 01:45:14.551279 2878 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 01:45:14.551903 kubelet[2878]: I0123 01:45:14.551861 2878 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 01:45:14.561186 kubelet[2878]: I0123 01:45:14.555134 2878 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 01:45:14.566313 kubelet[2878]: I0123 01:45:14.561528 2878 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 01:45:14.580946 kubelet[2878]: I0123 01:45:14.561546 2878 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 01:45:14.581253 kubelet[2878]: E0123 01:45:14.561694 2878 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-idwud.gb1.brightbox.com\" not found"
Jan 23 01:45:14.581253 kubelet[2878]: I0123 01:45:14.566745 2878 factory.go:221] Registration of the systemd container factory successfully
Jan 23 01:45:14.582093 kubelet[2878]: I0123 01:45:14.581453 2878 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 01:45:14.584562 kubelet[2878]: I0123 01:45:14.584317 2878 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 01:45:14.587945 kubelet[2878]: E0123 01:45:14.585540 2878 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 01:45:14.596270 kubelet[2878]: I0123 01:45:14.595774 2878 factory.go:221] Registration of the containerd container factory successfully
Jan 23 01:45:14.600315 kubelet[2878]: I0123 01:45:14.599519 2878 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 01:45:14.602524 kubelet[2878]: I0123 01:45:14.602101 2878 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 01:45:14.602524 kubelet[2878]: I0123 01:45:14.602145 2878 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 01:45:14.602524 kubelet[2878]: I0123 01:45:14.602171 2878 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 01:45:14.602524 kubelet[2878]: I0123 01:45:14.602186 2878 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 01:45:14.602524 kubelet[2878]: E0123 01:45:14.602243 2878 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 01:45:14.646738 sudo[2907]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 23 01:45:14.647305 sudo[2907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 23 01:45:14.692718 kubelet[2878]: I0123 01:45:14.692577 2878 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 01:45:14.693326 kubelet[2878]: I0123 01:45:14.693302 2878 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 01:45:14.693668 kubelet[2878]: I0123 01:45:14.693651 2878 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:45:14.694625 kubelet[2878]: I0123 01:45:14.694288 2878 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 01:45:14.694768 kubelet[2878]: I0123 01:45:14.694729 2878 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 01:45:14.695214 kubelet[2878]: I0123 01:45:14.695194 2878 policy_none.go:49] "None policy: Start"
Jan 23 01:45:14.695406 kubelet[2878]: I0123 01:45:14.695300 2878 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 01:45:14.695784 kubelet[2878]: I0123 01:45:14.695763 2878 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 01:45:14.696371 kubelet[2878]: I0123 01:45:14.696136 2878 state_mem.go:75] "Updated machine memory state"
Jan 23 01:45:14.703759 kubelet[2878]: E0123 01:45:14.702336 2878 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 01:45:14.708835 kubelet[2878]: I0123 01:45:14.708459 2878 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 01:45:14.708958 kubelet[2878]: I0123 01:45:14.708900 2878 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 01:45:14.709012 kubelet[2878]: I0123 01:45:14.708919 2878 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 01:45:14.710007 kubelet[2878]: I0123 01:45:14.709481 2878 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 01:45:14.720344 kubelet[2878]: E0123 01:45:14.719268 2878 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 01:45:14.835665 kubelet[2878]: I0123 01:45:14.835219 2878 kubelet_node_status.go:75] "Attempting to register node" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.844139 kubelet[2878]: I0123 01:45:14.844021 2878 kubelet_node_status.go:124] "Node was previously registered" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.844400 kubelet[2878]: I0123 01:45:14.844379 2878 kubelet_node_status.go:78] "Successfully registered node" node="srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.903933 kubelet[2878]: I0123 01:45:14.903202 2878 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.905008 kubelet[2878]: I0123 01:45:14.904850 2878 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.905351 kubelet[2878]: I0123 01:45:14.905310 2878 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.917518 kubelet[2878]: W0123 01:45:14.917475 2878 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 01:45:14.918243 kubelet[2878]: W0123 01:45:14.918049 2878 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 01:45:14.924097 kubelet[2878]: W0123 01:45:14.923957 2878 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 01:45:14.989784 kubelet[2878]: I0123 01:45:14.989067 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-ca-certs\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.989784 kubelet[2878]: I0123 01:45:14.989215 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/387db5b335fc290d750b21c37771898c-kubeconfig\") pod \"kube-scheduler-srv-idwud.gb1.brightbox.com\" (UID: \"387db5b335fc290d750b21c37771898c\") " pod="kube-system/kube-scheduler-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.989784 kubelet[2878]: I0123 01:45:14.989247 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11dcf35efb3ed98397aaf4fb0ab4e51c-ca-certs\") pod \"kube-apiserver-srv-idwud.gb1.brightbox.com\" (UID: \"11dcf35efb3ed98397aaf4fb0ab4e51c\") " pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.989784 kubelet[2878]: I0123 01:45:14.989282 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11dcf35efb3ed98397aaf4fb0ab4e51c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-idwud.gb1.brightbox.com\" (UID: \"11dcf35efb3ed98397aaf4fb0ab4e51c\") " pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.989784 kubelet[2878]: I0123 01:45:14.989309 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-flexvolume-dir\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.990169 kubelet[2878]: I0123 01:45:14.989347 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-k8s-certs\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.990169 kubelet[2878]: I0123 01:45:14.989374 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-kubeconfig\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.990169 kubelet[2878]: I0123 01:45:14.989410 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e5462fd7171293c77dd7edd53d68654-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-idwud.gb1.brightbox.com\" (UID: \"1e5462fd7171293c77dd7edd53d68654\") " pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:14.990169 kubelet[2878]: I0123 01:45:14.989438 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11dcf35efb3ed98397aaf4fb0ab4e51c-k8s-certs\") pod \"kube-apiserver-srv-idwud.gb1.brightbox.com\" (UID: \"11dcf35efb3ed98397aaf4fb0ab4e51c\") " pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:15.160769 sudo[2907]: pam_unix(sudo:session): session closed for user root
Jan 23 01:45:15.541680 kubelet[2878]: I0123 01:45:15.541623 2878 apiserver.go:52] "Watching apiserver"
Jan 23 01:45:15.582514 kubelet[2878]: I0123 01:45:15.582421 2878 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 01:45:15.657649 kubelet[2878]: I0123 01:45:15.657615 2878 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:15.670065 kubelet[2878]: W0123 01:45:15.670031 2878 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 23 01:45:15.671953 kubelet[2878]: E0123 01:45:15.670090 2878 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-idwud.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com"
Jan 23 01:45:15.686321 kubelet[2878]: I0123 01:45:15.686227 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-idwud.gb1.brightbox.com" podStartSLOduration=1.686197773 podStartE2EDuration="1.686197773s" podCreationTimestamp="2026-01-23 01:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:45:15.685199716 +0000 UTC m=+1.255462919" watchObservedRunningTime="2026-01-23 01:45:15.686197773 +0000 UTC m=+1.256460972"
Jan 23 01:45:15.709630 kubelet[2878]: I0123 01:45:15.708858 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-idwud.gb1.brightbox.com" podStartSLOduration=1.708836813 podStartE2EDuration="1.708836813s" podCreationTimestamp="2026-01-23 01:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:45:15.696764975 +0000 UTC m=+1.267028174" watchObservedRunningTime="2026-01-23 01:45:15.708836813 +0000 UTC m=+1.279100010"
Jan 23 01:45:16.898693 sudo[1878]: pam_unix(sudo:session): session closed for user root
Jan 23 01:45:16.987922 sshd[1877]: Connection closed by 20.161.92.111 port 32796
Jan 23 01:45:16.998174 sshd-session[1874]: pam_unix(sshd:session): session closed for user core
Jan 23 01:45:17.005064 systemd[1]: sshd@8-10.230.49.206:22-20.161.92.111:32796.service: Deactivated successfully.
Jan 23 01:45:17.008627 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 01:45:17.009290 systemd[1]: session-11.scope: Consumed 5.603s CPU time, 213.8M memory peak.
Jan 23 01:45:17.012299 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit.
Jan 23 01:45:17.015037 systemd-logind[1570]: Removed session 11.
Jan 23 01:45:20.077334 kubelet[2878]: I0123 01:45:20.077269 2878 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 01:45:20.078779 containerd[1598]: time="2026-01-23T01:45:20.078455055Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 01:45:20.079868 kubelet[2878]: I0123 01:45:20.078679 2878 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 01:45:20.340605 kubelet[2878]: I0123 01:45:20.340447 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-idwud.gb1.brightbox.com" podStartSLOduration=6.3403993849999996 podStartE2EDuration="6.340399385s" podCreationTimestamp="2026-01-23 01:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:45:15.709767724 +0000 UTC m=+1.280030908" watchObservedRunningTime="2026-01-23 01:45:20.340399385 +0000 UTC m=+5.910662562"
Jan 23 01:45:20.355047 kubelet[2878]: W0123 01:45:20.354517 2878 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-idwud.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-idwud.gb1.brightbox.com' and this object
Jan 23 01:45:20.355047 kubelet[2878]: E0123 01:45:20.354586 2878 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-idwud.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-idwud.gb1.brightbox.com' and this object" logger="UnhandledError"
Jan 23 01:45:20.355047 kubelet[2878]: I0123 01:45:20.354657 2878 status_manager.go:890] "Failed to get status for pod" podUID="fc03ae61-13a1-4b97-bc3a-80289bf1a293" pod="kube-system/kube-proxy-rgvb9" err="pods \"kube-proxy-rgvb9\" is forbidden: User \"system:node:srv-idwud.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-idwud.gb1.brightbox.com' and this object"
Jan 23 01:45:20.355047 kubelet[2878]: W0123 01:45:20.354741 2878 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-idwud.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-idwud.gb1.brightbox.com' and this object
Jan 23 01:45:20.355272 kubelet[2878]: E0123 01:45:20.354766 2878 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:srv-idwud.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-idwud.gb1.brightbox.com' and this object" logger="UnhandledError"
Jan 23 01:45:20.361061 systemd[1]: Created slice kubepods-besteffort-podfc03ae61_13a1_4b97_bc3a_80289bf1a293.slice - libcontainer container kubepods-besteffort-podfc03ae61_13a1_4b97_bc3a_80289bf1a293.slice.
Jan 23 01:45:20.386362 systemd[1]: Created slice kubepods-burstable-pod52d6e21d_0f28_4e55_b197_8ac55e09b9ac.slice - libcontainer container kubepods-burstable-pod52d6e21d_0f28_4e55_b197_8ac55e09b9ac.slice.
Jan 23 01:45:20.426123 kubelet[2878]: I0123 01:45:20.426057 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc03ae61-13a1-4b97-bc3a-80289bf1a293-kube-proxy\") pod \"kube-proxy-rgvb9\" (UID: \"fc03ae61-13a1-4b97-bc3a-80289bf1a293\") " pod="kube-system/kube-proxy-rgvb9" Jan 23 01:45:20.426123 kubelet[2878]: I0123 01:45:20.426117 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cni-path\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426477 kubelet[2878]: I0123 01:45:20.426151 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc03ae61-13a1-4b97-bc3a-80289bf1a293-lib-modules\") pod \"kube-proxy-rgvb9\" (UID: \"fc03ae61-13a1-4b97-bc3a-80289bf1a293\") " pod="kube-system/kube-proxy-rgvb9" Jan 23 01:45:20.426477 kubelet[2878]: I0123 01:45:20.426184 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-run\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426477 kubelet[2878]: I0123 01:45:20.426210 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-clustermesh-secrets\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426477 kubelet[2878]: I0123 01:45:20.426233 2878 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-hostproc\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426477 kubelet[2878]: I0123 01:45:20.426263 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-host-proc-sys-kernel\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426477 kubelet[2878]: I0123 01:45:20.426287 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-bpf-maps\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426720 kubelet[2878]: I0123 01:45:20.426312 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-etc-cni-netd\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426720 kubelet[2878]: I0123 01:45:20.426349 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-lib-modules\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426720 kubelet[2878]: I0123 01:45:20.426373 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-host-proc-sys-net\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426720 kubelet[2878]: I0123 01:45:20.426425 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-hubble-tls\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426720 kubelet[2878]: I0123 01:45:20.426454 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-cgroup\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.426720 kubelet[2878]: I0123 01:45:20.426503 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs22m\" (UniqueName: \"kubernetes.io/projected/fc03ae61-13a1-4b97-bc3a-80289bf1a293-kube-api-access-gs22m\") pod \"kube-proxy-rgvb9\" (UID: \"fc03ae61-13a1-4b97-bc3a-80289bf1a293\") " pod="kube-system/kube-proxy-rgvb9" Jan 23 01:45:20.427031 kubelet[2878]: I0123 01:45:20.426531 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-config-path\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.427031 kubelet[2878]: I0123 01:45:20.426555 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4rbc\" (UniqueName: \"kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-kube-api-access-n4rbc\") pod 
\"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:20.427031 kubelet[2878]: I0123 01:45:20.426579 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc03ae61-13a1-4b97-bc3a-80289bf1a293-xtables-lock\") pod \"kube-proxy-rgvb9\" (UID: \"fc03ae61-13a1-4b97-bc3a-80289bf1a293\") " pod="kube-system/kube-proxy-rgvb9" Jan 23 01:45:20.427031 kubelet[2878]: I0123 01:45:20.426609 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-xtables-lock\") pod \"cilium-6m6sk\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") " pod="kube-system/cilium-6m6sk" Jan 23 01:45:21.017076 systemd[1]: Created slice kubepods-besteffort-podc0afec68_e636_4b58_b93e_84e1fa6c7559.slice - libcontainer container kubepods-besteffort-podc0afec68_e636_4b58_b93e_84e1fa6c7559.slice. 
Jan 23 01:45:21.030323 kubelet[2878]: I0123 01:45:21.030277 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj9rw\" (UniqueName: \"kubernetes.io/projected/c0afec68-e636-4b58-b93e-84e1fa6c7559-kube-api-access-kj9rw\") pod \"cilium-operator-6c4d7847fc-s2z6t\" (UID: \"c0afec68-e636-4b58-b93e-84e1fa6c7559\") " pod="kube-system/cilium-operator-6c4d7847fc-s2z6t" Jan 23 01:45:21.030524 kubelet[2878]: I0123 01:45:21.030460 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0afec68-e636-4b58-b93e-84e1fa6c7559-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s2z6t\" (UID: \"c0afec68-e636-4b58-b93e-84e1fa6c7559\") " pod="kube-system/cilium-operator-6c4d7847fc-s2z6t" Jan 23 01:45:21.529143 kubelet[2878]: E0123 01:45:21.528615 2878 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 23 01:45:21.529143 kubelet[2878]: E0123 01:45:21.528795 2878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc03ae61-13a1-4b97-bc3a-80289bf1a293-kube-proxy podName:fc03ae61-13a1-4b97-bc3a-80289bf1a293 nodeName:}" failed. No retries permitted until 2026-01-23 01:45:22.028747247 +0000 UTC m=+7.599010433 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/fc03ae61-13a1-4b97-bc3a-80289bf1a293-kube-proxy") pod "kube-proxy-rgvb9" (UID: "fc03ae61-13a1-4b97-bc3a-80289bf1a293") : failed to sync configmap cache: timed out waiting for the condition Jan 23 01:45:21.551756 kubelet[2878]: E0123 01:45:21.550941 2878 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 01:45:21.551756 kubelet[2878]: E0123 01:45:21.550994 2878 projected.go:194] Error preparing data for projected volume kube-api-access-n4rbc for pod kube-system/cilium-6m6sk: failed to sync configmap cache: timed out waiting for the condition Jan 23 01:45:21.551756 kubelet[2878]: E0123 01:45:21.551086 2878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-kube-api-access-n4rbc podName:52d6e21d-0f28-4e55-b197-8ac55e09b9ac nodeName:}" failed. No retries permitted until 2026-01-23 01:45:22.051052417 +0000 UTC m=+7.621315612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n4rbc" (UniqueName: "kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-kube-api-access-n4rbc") pod "cilium-6m6sk" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac") : failed to sync configmap cache: timed out waiting for the condition Jan 23 01:45:21.551756 kubelet[2878]: E0123 01:45:21.551213 2878 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 01:45:21.551756 kubelet[2878]: E0123 01:45:21.551240 2878 projected.go:194] Error preparing data for projected volume kube-api-access-gs22m for pod kube-system/kube-proxy-rgvb9: failed to sync configmap cache: timed out waiting for the condition Jan 23 01:45:21.552346 kubelet[2878]: E0123 01:45:21.551291 2878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc03ae61-13a1-4b97-bc3a-80289bf1a293-kube-api-access-gs22m podName:fc03ae61-13a1-4b97-bc3a-80289bf1a293 nodeName:}" failed. No retries permitted until 2026-01-23 01:45:22.051277192 +0000 UTC m=+7.621540369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gs22m" (UniqueName: "kubernetes.io/projected/fc03ae61-13a1-4b97-bc3a-80289bf1a293-kube-api-access-gs22m") pod "kube-proxy-rgvb9" (UID: "fc03ae61-13a1-4b97-bc3a-80289bf1a293") : failed to sync configmap cache: timed out waiting for the condition Jan 23 01:45:21.923846 containerd[1598]: time="2026-01-23T01:45:21.923226863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s2z6t,Uid:c0afec68-e636-4b58-b93e-84e1fa6c7559,Namespace:kube-system,Attempt:0,}" Jan 23 01:45:21.951036 containerd[1598]: time="2026-01-23T01:45:21.950957804Z" level=info msg="connecting to shim 5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5" address="unix:///run/containerd/s/0de5b04462920bb57aad1303d63306a3bf9c887927f5a26086f6d57dc7e10d92" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:45:21.991133 systemd[1]: Started cri-containerd-5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5.scope - libcontainer container 5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5. 
Jan 23 01:45:22.060804 containerd[1598]: time="2026-01-23T01:45:22.060752729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s2z6t,Uid:c0afec68-e636-4b58-b93e-84e1fa6c7559,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5\"" Jan 23 01:45:22.064129 containerd[1598]: time="2026-01-23T01:45:22.064082721Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 01:45:22.180563 containerd[1598]: time="2026-01-23T01:45:22.180520846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgvb9,Uid:fc03ae61-13a1-4b97-bc3a-80289bf1a293,Namespace:kube-system,Attempt:0,}" Jan 23 01:45:22.195784 containerd[1598]: time="2026-01-23T01:45:22.195727109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6m6sk,Uid:52d6e21d-0f28-4e55-b197-8ac55e09b9ac,Namespace:kube-system,Attempt:0,}" Jan 23 01:45:22.201758 containerd[1598]: time="2026-01-23T01:45:22.201446364Z" level=info msg="connecting to shim 7c353d53781fd3d1f29df7a282140625d33283e738ae77c12a3d419448153ab6" address="unix:///run/containerd/s/6053bf36e1c7e123d810a231b8f081d3b38d5f8af6145234467b2c616eb7ad06" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:45:22.225551 containerd[1598]: time="2026-01-23T01:45:22.225446547Z" level=info msg="connecting to shim a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0" address="unix:///run/containerd/s/c73a14ad162cc5653f869fa09c5ca6e1e2768027f3523a9ea39d436bb620d375" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:45:22.241067 systemd[1]: Started cri-containerd-7c353d53781fd3d1f29df7a282140625d33283e738ae77c12a3d419448153ab6.scope - libcontainer container 7c353d53781fd3d1f29df7a282140625d33283e738ae77c12a3d419448153ab6. 
Jan 23 01:45:22.284115 systemd[1]: Started cri-containerd-a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0.scope - libcontainer container a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0. Jan 23 01:45:22.289524 containerd[1598]: time="2026-01-23T01:45:22.289278165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgvb9,Uid:fc03ae61-13a1-4b97-bc3a-80289bf1a293,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c353d53781fd3d1f29df7a282140625d33283e738ae77c12a3d419448153ab6\"" Jan 23 01:45:22.300165 containerd[1598]: time="2026-01-23T01:45:22.299834643Z" level=info msg="CreateContainer within sandbox \"7c353d53781fd3d1f29df7a282140625d33283e738ae77c12a3d419448153ab6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:45:22.320279 containerd[1598]: time="2026-01-23T01:45:22.320050748Z" level=info msg="Container 47f9046c542fc5519cbff3f5082b01572810ec3e7a4df97f9e39fa75f1df0f27: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:22.339383 containerd[1598]: time="2026-01-23T01:45:22.339259051Z" level=info msg="CreateContainer within sandbox \"7c353d53781fd3d1f29df7a282140625d33283e738ae77c12a3d419448153ab6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"47f9046c542fc5519cbff3f5082b01572810ec3e7a4df97f9e39fa75f1df0f27\"" Jan 23 01:45:22.340602 containerd[1598]: time="2026-01-23T01:45:22.340526344Z" level=info msg="StartContainer for \"47f9046c542fc5519cbff3f5082b01572810ec3e7a4df97f9e39fa75f1df0f27\"" Jan 23 01:45:22.345423 containerd[1598]: time="2026-01-23T01:45:22.345342986Z" level=info msg="connecting to shim 47f9046c542fc5519cbff3f5082b01572810ec3e7a4df97f9e39fa75f1df0f27" address="unix:///run/containerd/s/6053bf36e1c7e123d810a231b8f081d3b38d5f8af6145234467b2c616eb7ad06" protocol=ttrpc version=3 Jan 23 01:45:22.370828 containerd[1598]: time="2026-01-23T01:45:22.370743820Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-6m6sk,Uid:52d6e21d-0f28-4e55-b197-8ac55e09b9ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\"" Jan 23 01:45:22.382165 systemd[1]: Started cri-containerd-47f9046c542fc5519cbff3f5082b01572810ec3e7a4df97f9e39fa75f1df0f27.scope - libcontainer container 47f9046c542fc5519cbff3f5082b01572810ec3e7a4df97f9e39fa75f1df0f27. Jan 23 01:45:22.472188 containerd[1598]: time="2026-01-23T01:45:22.471951107Z" level=info msg="StartContainer for \"47f9046c542fc5519cbff3f5082b01572810ec3e7a4df97f9e39fa75f1df0f27\" returns successfully" Jan 23 01:45:22.712950 kubelet[2878]: I0123 01:45:22.712606 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rgvb9" podStartSLOduration=2.712581215 podStartE2EDuration="2.712581215s" podCreationTimestamp="2026-01-23 01:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:45:22.710124175 +0000 UTC m=+8.280387374" watchObservedRunningTime="2026-01-23 01:45:22.712581215 +0000 UTC m=+8.282844408" Jan 23 01:45:23.339642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067381947.mount: Deactivated successfully. 
Jan 23 01:45:24.434414 containerd[1598]: time="2026-01-23T01:45:24.434356778Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:45:24.435345 containerd[1598]: time="2026-01-23T01:45:24.435301868Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 01:45:24.436412 containerd[1598]: time="2026-01-23T01:45:24.436377238Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:45:24.438656 containerd[1598]: time="2026-01-23T01:45:24.438620808Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.374475756s" Jan 23 01:45:24.438832 containerd[1598]: time="2026-01-23T01:45:24.438772797Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 01:45:24.441264 containerd[1598]: time="2026-01-23T01:45:24.441207860Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 01:45:24.444189 containerd[1598]: time="2026-01-23T01:45:24.444144097Z" level=info msg="CreateContainer within sandbox 
\"5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 01:45:24.456822 containerd[1598]: time="2026-01-23T01:45:24.456754267Z" level=info msg="Container 572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:24.466233 containerd[1598]: time="2026-01-23T01:45:24.466120585Z" level=info msg="CreateContainer within sandbox \"5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\"" Jan 23 01:45:24.467059 containerd[1598]: time="2026-01-23T01:45:24.467026247Z" level=info msg="StartContainer for \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\"" Jan 23 01:45:24.468145 containerd[1598]: time="2026-01-23T01:45:24.468104016Z" level=info msg="connecting to shim 572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1" address="unix:///run/containerd/s/0de5b04462920bb57aad1303d63306a3bf9c887927f5a26086f6d57dc7e10d92" protocol=ttrpc version=3 Jan 23 01:45:24.501352 systemd[1]: Started cri-containerd-572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1.scope - libcontainer container 572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1. 
Jan 23 01:45:24.579127 containerd[1598]: time="2026-01-23T01:45:24.578960650Z" level=info msg="StartContainer for \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\" returns successfully" Jan 23 01:45:24.774039 kubelet[2878]: I0123 01:45:24.773837 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s2z6t" podStartSLOduration=2.396735942 podStartE2EDuration="4.773815154s" podCreationTimestamp="2026-01-23 01:45:20 +0000 UTC" firstStartedPulling="2026-01-23 01:45:22.06301586 +0000 UTC m=+7.633279036" lastFinishedPulling="2026-01-23 01:45:24.440095058 +0000 UTC m=+10.010358248" observedRunningTime="2026-01-23 01:45:24.739156941 +0000 UTC m=+10.309420149" watchObservedRunningTime="2026-01-23 01:45:24.773815154 +0000 UTC m=+10.344078339" Jan 23 01:45:30.995128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678441473.mount: Deactivated successfully. Jan 23 01:45:34.476994 containerd[1598]: time="2026-01-23T01:45:34.476868389Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:45:34.478933 containerd[1598]: time="2026-01-23T01:45:34.478817076Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 01:45:34.479733 containerd[1598]: time="2026-01-23T01:45:34.479617298Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:45:34.482342 containerd[1598]: time="2026-01-23T01:45:34.482244485Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.040994746s" Jan 23 01:45:34.482342 containerd[1598]: time="2026-01-23T01:45:34.482285071Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 01:45:34.486079 containerd[1598]: time="2026-01-23T01:45:34.485984126Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:45:34.524771 containerd[1598]: time="2026-01-23T01:45:34.524056839Z" level=info msg="Container 8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:34.529675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146002070.mount: Deactivated successfully. 
Jan 23 01:45:34.537479 containerd[1598]: time="2026-01-23T01:45:34.537331434Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\"" Jan 23 01:45:34.540180 containerd[1598]: time="2026-01-23T01:45:34.539673526Z" level=info msg="StartContainer for \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\"" Jan 23 01:45:34.543629 containerd[1598]: time="2026-01-23T01:45:34.543561749Z" level=info msg="connecting to shim 8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7" address="unix:///run/containerd/s/c73a14ad162cc5653f869fa09c5ca6e1e2768027f3523a9ea39d436bb620d375" protocol=ttrpc version=3 Jan 23 01:45:34.578327 systemd[1]: Started cri-containerd-8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7.scope - libcontainer container 8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7. Jan 23 01:45:34.629223 containerd[1598]: time="2026-01-23T01:45:34.629122449Z" level=info msg="StartContainer for \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\" returns successfully" Jan 23 01:45:34.653233 systemd[1]: cri-containerd-8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7.scope: Deactivated successfully. Jan 23 01:45:34.689531 containerd[1598]: time="2026-01-23T01:45:34.689443712Z" level=info msg="received container exit event container_id:\"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\" id:\"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\" pid:3349 exited_at:{seconds:1769132734 nanos:657455354}" Jan 23 01:45:34.736661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7-rootfs.mount: Deactivated successfully. 
Jan 23 01:45:35.752170 containerd[1598]: time="2026-01-23T01:45:35.752081036Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:45:35.764986 containerd[1598]: time="2026-01-23T01:45:35.764913214Z" level=info msg="Container 7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:35.772260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749945051.mount: Deactivated successfully. Jan 23 01:45:35.778524 containerd[1598]: time="2026-01-23T01:45:35.778361566Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\"" Jan 23 01:45:35.780274 containerd[1598]: time="2026-01-23T01:45:35.779645688Z" level=info msg="StartContainer for \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\"" Jan 23 01:45:35.783638 containerd[1598]: time="2026-01-23T01:45:35.783601477Z" level=info msg="connecting to shim 7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e" address="unix:///run/containerd/s/c73a14ad162cc5653f869fa09c5ca6e1e2768027f3523a9ea39d436bb620d375" protocol=ttrpc version=3 Jan 23 01:45:35.823232 systemd[1]: Started cri-containerd-7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e.scope - libcontainer container 7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e. Jan 23 01:45:35.875318 containerd[1598]: time="2026-01-23T01:45:35.875240771Z" level=info msg="StartContainer for \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\" returns successfully" Jan 23 01:45:35.893713 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 23 01:45:35.894860 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:45:35.897353 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:45:35.902482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:45:35.905629 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:45:35.906501 systemd[1]: cri-containerd-7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e.scope: Deactivated successfully. Jan 23 01:45:35.909185 containerd[1598]: time="2026-01-23T01:45:35.909079992Z" level=info msg="received container exit event container_id:\"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\" id:\"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\" pid:3394 exited_at:{seconds:1769132735 nanos:907149373}" Jan 23 01:45:35.956394 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:45:36.758969 containerd[1598]: time="2026-01-23T01:45:36.758761975Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:45:36.768722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e-rootfs.mount: Deactivated successfully. 
Jan 23 01:45:36.829906 containerd[1598]: time="2026-01-23T01:45:36.828243511Z" level=info msg="Container ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:36.838242 containerd[1598]: time="2026-01-23T01:45:36.838117854Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\"" Jan 23 01:45:36.840910 containerd[1598]: time="2026-01-23T01:45:36.839809056Z" level=info msg="StartContainer for \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\"" Jan 23 01:45:36.841824 containerd[1598]: time="2026-01-23T01:45:36.841793535Z" level=info msg="connecting to shim ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef" address="unix:///run/containerd/s/c73a14ad162cc5653f869fa09c5ca6e1e2768027f3523a9ea39d436bb620d375" protocol=ttrpc version=3 Jan 23 01:45:36.877207 systemd[1]: Started cri-containerd-ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef.scope - libcontainer container ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef. Jan 23 01:45:36.978907 containerd[1598]: time="2026-01-23T01:45:36.978704805Z" level=info msg="StartContainer for \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\" returns successfully" Jan 23 01:45:36.987213 systemd[1]: cri-containerd-ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef.scope: Deactivated successfully. Jan 23 01:45:36.988047 systemd[1]: cri-containerd-ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef.scope: Consumed 44ms CPU time, 6.1M memory peak, 1.2M read from disk. 
Jan 23 01:45:36.991665 containerd[1598]: time="2026-01-23T01:45:36.991596819Z" level=info msg="received container exit event container_id:\"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\" id:\"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\" pid:3443 exited_at:{seconds:1769132736 nanos:990781334}" Jan 23 01:45:37.018708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef-rootfs.mount: Deactivated successfully. Jan 23 01:45:37.764955 containerd[1598]: time="2026-01-23T01:45:37.764811485Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:45:37.785342 containerd[1598]: time="2026-01-23T01:45:37.785282897Z" level=info msg="Container 80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:37.795676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount940920867.mount: Deactivated successfully. 
Jan 23 01:45:37.801808 containerd[1598]: time="2026-01-23T01:45:37.801752130Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\"" Jan 23 01:45:37.816966 containerd[1598]: time="2026-01-23T01:45:37.816915385Z" level=info msg="StartContainer for \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\"" Jan 23 01:45:37.818717 containerd[1598]: time="2026-01-23T01:45:37.818220238Z" level=info msg="connecting to shim 80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471" address="unix:///run/containerd/s/c73a14ad162cc5653f869fa09c5ca6e1e2768027f3523a9ea39d436bb620d375" protocol=ttrpc version=3 Jan 23 01:45:37.856182 systemd[1]: Started cri-containerd-80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471.scope - libcontainer container 80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471. Jan 23 01:45:37.904439 systemd[1]: cri-containerd-80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471.scope: Deactivated successfully. Jan 23 01:45:37.907596 containerd[1598]: time="2026-01-23T01:45:37.907528360Z" level=info msg="StartContainer for \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\" returns successfully" Jan 23 01:45:37.908453 containerd[1598]: time="2026-01-23T01:45:37.908120511Z" level=info msg="received container exit event container_id:\"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\" id:\"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\" pid:3481 exited_at:{seconds:1769132737 nanos:906365097}" Jan 23 01:45:37.953710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471-rootfs.mount: Deactivated successfully. 
Jan 23 01:45:38.774856 containerd[1598]: time="2026-01-23T01:45:38.774685211Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 01:45:38.795546 containerd[1598]: time="2026-01-23T01:45:38.795474267Z" level=info msg="Container 139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:38.811261 containerd[1598]: time="2026-01-23T01:45:38.811131252Z" level=info msg="CreateContainer within sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\"" Jan 23 01:45:38.812970 containerd[1598]: time="2026-01-23T01:45:38.812330410Z" level=info msg="StartContainer for \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\"" Jan 23 01:45:38.813833 containerd[1598]: time="2026-01-23T01:45:38.813748625Z" level=info msg="connecting to shim 139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878" address="unix:///run/containerd/s/c73a14ad162cc5653f869fa09c5ca6e1e2768027f3523a9ea39d436bb620d375" protocol=ttrpc version=3 Jan 23 01:45:38.857212 systemd[1]: Started cri-containerd-139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878.scope - libcontainer container 139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878. 
Jan 23 01:45:38.922260 containerd[1598]: time="2026-01-23T01:45:38.922211845Z" level=info msg="StartContainer for \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\" returns successfully" Jan 23 01:45:39.098608 kubelet[2878]: I0123 01:45:39.098258 2878 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:45:39.152629 systemd[1]: Created slice kubepods-burstable-pode9d05a38_7ca9_4247_a22c_a14651d71267.slice - libcontainer container kubepods-burstable-pode9d05a38_7ca9_4247_a22c_a14651d71267.slice. Jan 23 01:45:39.168622 systemd[1]: Created slice kubepods-burstable-pod250bfe8e_6699_43d2_bb69_ac13e2a8f624.slice - libcontainer container kubepods-burstable-pod250bfe8e_6699_43d2_bb69_ac13e2a8f624.slice. Jan 23 01:45:39.174362 kubelet[2878]: I0123 01:45:39.174242 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9d05a38-7ca9-4247-a22c-a14651d71267-config-volume\") pod \"coredns-668d6bf9bc-j6kf4\" (UID: \"e9d05a38-7ca9-4247-a22c-a14651d71267\") " pod="kube-system/coredns-668d6bf9bc-j6kf4" Jan 23 01:45:39.174362 kubelet[2878]: I0123 01:45:39.174299 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghggm\" (UniqueName: \"kubernetes.io/projected/e9d05a38-7ca9-4247-a22c-a14651d71267-kube-api-access-ghggm\") pod \"coredns-668d6bf9bc-j6kf4\" (UID: \"e9d05a38-7ca9-4247-a22c-a14651d71267\") " pod="kube-system/coredns-668d6bf9bc-j6kf4" Jan 23 01:45:39.174635 kubelet[2878]: I0123 01:45:39.174371 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/250bfe8e-6699-43d2-bb69-ac13e2a8f624-config-volume\") pod \"coredns-668d6bf9bc-9jh24\" (UID: \"250bfe8e-6699-43d2-bb69-ac13e2a8f624\") " pod="kube-system/coredns-668d6bf9bc-9jh24" Jan 23 01:45:39.174635 
kubelet[2878]: I0123 01:45:39.174431 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmh44\" (UniqueName: \"kubernetes.io/projected/250bfe8e-6699-43d2-bb69-ac13e2a8f624-kube-api-access-pmh44\") pod \"coredns-668d6bf9bc-9jh24\" (UID: \"250bfe8e-6699-43d2-bb69-ac13e2a8f624\") " pod="kube-system/coredns-668d6bf9bc-9jh24" Jan 23 01:45:39.465860 containerd[1598]: time="2026-01-23T01:45:39.465678185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6kf4,Uid:e9d05a38-7ca9-4247-a22c-a14651d71267,Namespace:kube-system,Attempt:0,}" Jan 23 01:45:39.477595 containerd[1598]: time="2026-01-23T01:45:39.477135457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9jh24,Uid:250bfe8e-6699-43d2-bb69-ac13e2a8f624,Namespace:kube-system,Attempt:0,}" Jan 23 01:45:39.813645 kubelet[2878]: I0123 01:45:39.813469 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6m6sk" podStartSLOduration=7.70329063 podStartE2EDuration="19.813446535s" podCreationTimestamp="2026-01-23 01:45:20 +0000 UTC" firstStartedPulling="2026-01-23 01:45:22.373059591 +0000 UTC m=+7.943322769" lastFinishedPulling="2026-01-23 01:45:34.483215489 +0000 UTC m=+20.053478674" observedRunningTime="2026-01-23 01:45:39.811250187 +0000 UTC m=+25.381513396" watchObservedRunningTime="2026-01-23 01:45:39.813446535 +0000 UTC m=+25.383709721" Jan 23 01:45:41.595753 systemd-networkd[1479]: cilium_host: Link UP Jan 23 01:45:41.599533 systemd-networkd[1479]: cilium_net: Link UP Jan 23 01:45:41.600216 systemd-networkd[1479]: cilium_net: Gained carrier Jan 23 01:45:41.601126 systemd-networkd[1479]: cilium_host: Gained carrier Jan 23 01:45:41.669133 systemd-networkd[1479]: cilium_host: Gained IPv6LL Jan 23 01:45:41.777760 systemd-networkd[1479]: cilium_vxlan: Link UP Jan 23 01:45:41.777770 systemd-networkd[1479]: cilium_vxlan: Gained carrier Jan 23 01:45:42.013068 
systemd-networkd[1479]: cilium_net: Gained IPv6LL Jan 23 01:45:42.329435 kernel: NET: Registered PF_ALG protocol family Jan 23 01:45:42.925241 systemd-networkd[1479]: cilium_vxlan: Gained IPv6LL Jan 23 01:45:43.423260 systemd-networkd[1479]: lxc_health: Link UP Jan 23 01:45:43.430153 systemd-networkd[1479]: lxc_health: Gained carrier Jan 23 01:45:44.045457 systemd-networkd[1479]: lxc73af33e1c8cf: Link UP Jan 23 01:45:44.046592 kernel: eth0: renamed from tmp319a5 Jan 23 01:45:44.056083 systemd-networkd[1479]: lxc73af33e1c8cf: Gained carrier Jan 23 01:45:44.079965 systemd-networkd[1479]: lxc9c556973a6af: Link UP Jan 23 01:45:44.083909 kernel: eth0: renamed from tmp78c50 Jan 23 01:45:44.095418 systemd-networkd[1479]: lxc9c556973a6af: Gained carrier Jan 23 01:45:44.590007 systemd-networkd[1479]: lxc_health: Gained IPv6LL Jan 23 01:45:45.229726 systemd-networkd[1479]: lxc73af33e1c8cf: Gained IPv6LL Jan 23 01:45:45.933111 systemd-networkd[1479]: lxc9c556973a6af: Gained IPv6LL Jan 23 01:45:49.779004 containerd[1598]: time="2026-01-23T01:45:49.778563762Z" level=info msg="connecting to shim 319a5e2daa1a8fd7f07d81e50e5fc58ff331a3b0f1ddd66566c4a86a7058958e" address="unix:///run/containerd/s/5bb15b3103430c6fb55688bd780721eb3d73690de58e81633b79745103ea1819" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:45:49.826901 containerd[1598]: time="2026-01-23T01:45:49.826738951Z" level=info msg="connecting to shim 78c5061b3fc898d1113ea9f00ec0d9a1985883d74f49dbcab4d2e99af97aabaa" address="unix:///run/containerd/s/b9ac77e252163f7f1ad1dd602c00bc61dc648c660e4d4356d5de1fd4c12d1591" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:45:49.876424 systemd[1]: Started cri-containerd-78c5061b3fc898d1113ea9f00ec0d9a1985883d74f49dbcab4d2e99af97aabaa.scope - libcontainer container 78c5061b3fc898d1113ea9f00ec0d9a1985883d74f49dbcab4d2e99af97aabaa. 
Jan 23 01:45:49.885917 systemd[1]: Started cri-containerd-319a5e2daa1a8fd7f07d81e50e5fc58ff331a3b0f1ddd66566c4a86a7058958e.scope - libcontainer container 319a5e2daa1a8fd7f07d81e50e5fc58ff331a3b0f1ddd66566c4a86a7058958e. Jan 23 01:45:50.003983 containerd[1598]: time="2026-01-23T01:45:50.003063623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6kf4,Uid:e9d05a38-7ca9-4247-a22c-a14651d71267,Namespace:kube-system,Attempt:0,} returns sandbox id \"319a5e2daa1a8fd7f07d81e50e5fc58ff331a3b0f1ddd66566c4a86a7058958e\"" Jan 23 01:45:50.008896 containerd[1598]: time="2026-01-23T01:45:50.008737892Z" level=info msg="CreateContainer within sandbox \"319a5e2daa1a8fd7f07d81e50e5fc58ff331a3b0f1ddd66566c4a86a7058958e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:45:50.039729 containerd[1598]: time="2026-01-23T01:45:50.039309310Z" level=info msg="Container c84628ff0fc8aac5b6a818c04abe4a883166689a24cc77f6786ca199b8f0df87: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:50.044971 containerd[1598]: time="2026-01-23T01:45:50.044911970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9jh24,Uid:250bfe8e-6699-43d2-bb69-ac13e2a8f624,Namespace:kube-system,Attempt:0,} returns sandbox id \"78c5061b3fc898d1113ea9f00ec0d9a1985883d74f49dbcab4d2e99af97aabaa\"" Jan 23 01:45:50.054279 containerd[1598]: time="2026-01-23T01:45:50.054238412Z" level=info msg="CreateContainer within sandbox \"319a5e2daa1a8fd7f07d81e50e5fc58ff331a3b0f1ddd66566c4a86a7058958e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c84628ff0fc8aac5b6a818c04abe4a883166689a24cc77f6786ca199b8f0df87\"" Jan 23 01:45:50.054705 containerd[1598]: time="2026-01-23T01:45:50.054546659Z" level=info msg="CreateContainer within sandbox \"78c5061b3fc898d1113ea9f00ec0d9a1985883d74f49dbcab4d2e99af97aabaa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:45:50.055501 containerd[1598]: 
time="2026-01-23T01:45:50.055470866Z" level=info msg="StartContainer for \"c84628ff0fc8aac5b6a818c04abe4a883166689a24cc77f6786ca199b8f0df87\"" Jan 23 01:45:50.058808 containerd[1598]: time="2026-01-23T01:45:50.058740114Z" level=info msg="connecting to shim c84628ff0fc8aac5b6a818c04abe4a883166689a24cc77f6786ca199b8f0df87" address="unix:///run/containerd/s/5bb15b3103430c6fb55688bd780721eb3d73690de58e81633b79745103ea1819" protocol=ttrpc version=3 Jan 23 01:45:50.071248 containerd[1598]: time="2026-01-23T01:45:50.070963705Z" level=info msg="Container 7fcd2de518d63c2862f541416fd60f1f91adcc5fd2587081662f017a8c81958b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:45:50.081440 containerd[1598]: time="2026-01-23T01:45:50.080784290Z" level=info msg="CreateContainer within sandbox \"78c5061b3fc898d1113ea9f00ec0d9a1985883d74f49dbcab4d2e99af97aabaa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7fcd2de518d63c2862f541416fd60f1f91adcc5fd2587081662f017a8c81958b\"" Jan 23 01:45:50.083243 containerd[1598]: time="2026-01-23T01:45:50.083074363Z" level=info msg="StartContainer for \"7fcd2de518d63c2862f541416fd60f1f91adcc5fd2587081662f017a8c81958b\"" Jan 23 01:45:50.085499 containerd[1598]: time="2026-01-23T01:45:50.085450050Z" level=info msg="connecting to shim 7fcd2de518d63c2862f541416fd60f1f91adcc5fd2587081662f017a8c81958b" address="unix:///run/containerd/s/b9ac77e252163f7f1ad1dd602c00bc61dc648c660e4d4356d5de1fd4c12d1591" protocol=ttrpc version=3 Jan 23 01:45:50.122116 systemd[1]: Started cri-containerd-c84628ff0fc8aac5b6a818c04abe4a883166689a24cc77f6786ca199b8f0df87.scope - libcontainer container c84628ff0fc8aac5b6a818c04abe4a883166689a24cc77f6786ca199b8f0df87. Jan 23 01:45:50.140083 systemd[1]: Started cri-containerd-7fcd2de518d63c2862f541416fd60f1f91adcc5fd2587081662f017a8c81958b.scope - libcontainer container 7fcd2de518d63c2862f541416fd60f1f91adcc5fd2587081662f017a8c81958b. 
Jan 23 01:45:50.213098 containerd[1598]: time="2026-01-23T01:45:50.213055888Z" level=info msg="StartContainer for \"c84628ff0fc8aac5b6a818c04abe4a883166689a24cc77f6786ca199b8f0df87\" returns successfully" Jan 23 01:45:50.223348 containerd[1598]: time="2026-01-23T01:45:50.223316120Z" level=info msg="StartContainer for \"7fcd2de518d63c2862f541416fd60f1f91adcc5fd2587081662f017a8c81958b\" returns successfully" Jan 23 01:45:50.760334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453466343.mount: Deactivated successfully. Jan 23 01:45:50.859750 kubelet[2878]: I0123 01:45:50.858832 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9jh24" podStartSLOduration=30.858793753 podStartE2EDuration="30.858793753s" podCreationTimestamp="2026-01-23 01:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:45:50.857775992 +0000 UTC m=+36.428039191" watchObservedRunningTime="2026-01-23 01:45:50.858793753 +0000 UTC m=+36.429056952" Jan 23 01:45:50.906911 kubelet[2878]: I0123 01:45:50.905712 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j6kf4" podStartSLOduration=30.90568899 podStartE2EDuration="30.90568899s" podCreationTimestamp="2026-01-23 01:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:45:50.883580889 +0000 UTC m=+36.453844097" watchObservedRunningTime="2026-01-23 01:45:50.90568899 +0000 UTC m=+36.475952181" Jan 23 01:46:26.442268 systemd[1]: Started sshd@9-10.230.49.206:22-20.161.92.111:34126.service - OpenSSH per-connection server daemon (20.161.92.111:34126). 
Jan 23 01:46:27.083488 sshd[4215]: Accepted publickey for core from 20.161.92.111 port 34126 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:46:27.087074 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:46:27.107957 systemd-logind[1570]: New session 12 of user core. Jan 23 01:46:27.113083 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:46:28.025239 sshd[4221]: Connection closed by 20.161.92.111 port 34126 Jan 23 01:46:28.026270 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Jan 23 01:46:28.035935 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:46:28.036598 systemd[1]: sshd@9-10.230.49.206:22-20.161.92.111:34126.service: Deactivated successfully. Jan 23 01:46:28.041420 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:46:28.045680 systemd-logind[1570]: Removed session 12. Jan 23 01:46:33.132419 systemd[1]: Started sshd@10-10.230.49.206:22-20.161.92.111:35212.service - OpenSSH per-connection server daemon (20.161.92.111:35212). Jan 23 01:46:33.719949 sshd[4234]: Accepted publickey for core from 20.161.92.111 port 35212 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:46:33.721896 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:46:33.730947 systemd-logind[1570]: New session 13 of user core. Jan 23 01:46:33.739079 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:46:34.219520 sshd[4237]: Connection closed by 20.161.92.111 port 35212 Jan 23 01:46:34.219161 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Jan 23 01:46:34.225496 systemd[1]: sshd@10-10.230.49.206:22-20.161.92.111:35212.service: Deactivated successfully. Jan 23 01:46:34.229808 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 23 01:46:34.232075 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:46:34.234623 systemd-logind[1570]: Removed session 13. Jan 23 01:46:39.326453 systemd[1]: Started sshd@11-10.230.49.206:22-20.161.92.111:35220.service - OpenSSH per-connection server daemon (20.161.92.111:35220). Jan 23 01:46:39.906519 sshd[4250]: Accepted publickey for core from 20.161.92.111 port 35220 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:46:39.908460 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:46:39.915941 systemd-logind[1570]: New session 14 of user core. Jan 23 01:46:39.921050 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 01:46:40.396577 sshd[4253]: Connection closed by 20.161.92.111 port 35220 Jan 23 01:46:40.396305 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Jan 23 01:46:40.403838 systemd[1]: sshd@11-10.230.49.206:22-20.161.92.111:35220.service: Deactivated successfully. Jan 23 01:46:40.406752 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:46:40.408336 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:46:40.411162 systemd-logind[1570]: Removed session 14. Jan 23 01:46:45.510363 systemd[1]: Started sshd@12-10.230.49.206:22-20.161.92.111:33770.service - OpenSSH per-connection server daemon (20.161.92.111:33770). Jan 23 01:46:46.138408 sshd[4267]: Accepted publickey for core from 20.161.92.111 port 33770 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:46:46.140654 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:46:46.146948 systemd-logind[1570]: New session 15 of user core. Jan 23 01:46:46.166118 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 23 01:46:46.662966 sshd[4270]: Connection closed by 20.161.92.111 port 33770 Jan 23 01:46:46.665221 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Jan 23 01:46:46.672691 systemd[1]: sshd@12-10.230.49.206:22-20.161.92.111:33770.service: Deactivated successfully. Jan 23 01:46:46.678363 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:46:46.682030 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:46:46.685326 systemd-logind[1570]: Removed session 15. Jan 23 01:46:46.767851 systemd[1]: Started sshd@13-10.230.49.206:22-20.161.92.111:33784.service - OpenSSH per-connection server daemon (20.161.92.111:33784). Jan 23 01:46:47.369120 sshd[4283]: Accepted publickey for core from 20.161.92.111 port 33784 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:46:47.371278 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:46:47.379569 systemd-logind[1570]: New session 16 of user core. Jan 23 01:46:47.396215 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 01:46:47.959104 sshd[4286]: Connection closed by 20.161.92.111 port 33784 Jan 23 01:46:47.960337 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Jan 23 01:46:47.969971 systemd[1]: sshd@13-10.230.49.206:22-20.161.92.111:33784.service: Deactivated successfully. Jan 23 01:46:47.975158 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:46:47.977833 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:46:47.980065 systemd-logind[1570]: Removed session 16. Jan 23 01:46:48.063136 systemd[1]: Started sshd@14-10.230.49.206:22-20.161.92.111:33800.service - OpenSSH per-connection server daemon (20.161.92.111:33800). 
Jan 23 01:46:48.643645 sshd[4296]: Accepted publickey for core from 20.161.92.111 port 33800 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:46:48.645857 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:46:48.654236 systemd-logind[1570]: New session 17 of user core. Jan 23 01:46:48.662114 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 01:46:49.149005 sshd[4299]: Connection closed by 20.161.92.111 port 33800 Jan 23 01:46:49.150126 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Jan 23 01:46:49.161525 systemd[1]: sshd@14-10.230.49.206:22-20.161.92.111:33800.service: Deactivated successfully. Jan 23 01:46:49.165487 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:46:49.167523 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:46:49.169358 systemd-logind[1570]: Removed session 17. Jan 23 01:46:54.262270 systemd[1]: Started sshd@15-10.230.49.206:22-20.161.92.111:37300.service - OpenSSH per-connection server daemon (20.161.92.111:37300). Jan 23 01:46:54.860909 sshd[4313]: Accepted publickey for core from 20.161.92.111 port 37300 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:46:54.863045 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:46:54.870551 systemd-logind[1570]: New session 18 of user core. Jan 23 01:46:54.877152 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:46:55.351033 sshd[4316]: Connection closed by 20.161.92.111 port 37300 Jan 23 01:46:55.352521 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Jan 23 01:46:55.357425 systemd[1]: sshd@15-10.230.49.206:22-20.161.92.111:37300.service: Deactivated successfully. Jan 23 01:46:55.361123 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 23 01:46:55.365005 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:46:55.366623 systemd-logind[1570]: Removed session 18. Jan 23 01:47:00.454735 systemd[1]: Started sshd@16-10.230.49.206:22-20.161.92.111:37308.service - OpenSSH per-connection server daemon (20.161.92.111:37308). Jan 23 01:47:01.038594 sshd[4327]: Accepted publickey for core from 20.161.92.111 port 37308 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:01.041014 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:01.048681 systemd-logind[1570]: New session 19 of user core. Jan 23 01:47:01.054426 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:47:01.529519 sshd[4330]: Connection closed by 20.161.92.111 port 37308 Jan 23 01:47:01.530590 sshd-session[4327]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:01.535231 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:47:01.536419 systemd[1]: sshd@16-10.230.49.206:22-20.161.92.111:37308.service: Deactivated successfully. Jan 23 01:47:01.538834 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:47:01.542292 systemd-logind[1570]: Removed session 19. Jan 23 01:47:01.635197 systemd[1]: Started sshd@17-10.230.49.206:22-20.161.92.111:37316.service - OpenSSH per-connection server daemon (20.161.92.111:37316). Jan 23 01:47:02.219850 sshd[4342]: Accepted publickey for core from 20.161.92.111 port 37316 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:02.221657 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:02.229038 systemd-logind[1570]: New session 20 of user core. Jan 23 01:47:02.237159 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 01:47:03.066937 sshd[4345]: Connection closed by 20.161.92.111 port 37316 Jan 23 01:47:03.071323 sshd-session[4342]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:03.091159 systemd[1]: sshd@17-10.230.49.206:22-20.161.92.111:37316.service: Deactivated successfully. Jan 23 01:47:03.095604 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:47:03.098398 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:47:03.100373 systemd-logind[1570]: Removed session 20. Jan 23 01:47:03.167814 systemd[1]: Started sshd@18-10.230.49.206:22-20.161.92.111:37842.service - OpenSSH per-connection server daemon (20.161.92.111:37842). Jan 23 01:47:03.761169 sshd[4355]: Accepted publickey for core from 20.161.92.111 port 37842 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:03.762963 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:03.769716 systemd-logind[1570]: New session 21 of user core. Jan 23 01:47:03.782089 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 01:47:04.984452 sshd[4358]: Connection closed by 20.161.92.111 port 37842 Jan 23 01:47:04.985091 sshd-session[4355]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:04.993522 systemd[1]: sshd@18-10.230.49.206:22-20.161.92.111:37842.service: Deactivated successfully. Jan 23 01:47:04.997753 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:47:05.000371 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:47:05.002666 systemd-logind[1570]: Removed session 21. Jan 23 01:47:05.086471 systemd[1]: Started sshd@19-10.230.49.206:22-20.161.92.111:37852.service - OpenSSH per-connection server daemon (20.161.92.111:37852). 
Jan 23 01:47:05.678032 sshd[4375]: Accepted publickey for core from 20.161.92.111 port 37852 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:05.680574 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:05.687662 systemd-logind[1570]: New session 22 of user core. Jan 23 01:47:05.699074 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:47:06.351627 sshd[4378]: Connection closed by 20.161.92.111 port 37852 Jan 23 01:47:06.353204 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:06.360284 systemd[1]: sshd@19-10.230.49.206:22-20.161.92.111:37852.service: Deactivated successfully. Jan 23 01:47:06.363819 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:47:06.365695 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:47:06.367768 systemd-logind[1570]: Removed session 22. Jan 23 01:47:06.452190 systemd[1]: Started sshd@20-10.230.49.206:22-20.161.92.111:37858.service - OpenSSH per-connection server daemon (20.161.92.111:37858). Jan 23 01:47:07.033183 sshd[4388]: Accepted publickey for core from 20.161.92.111 port 37858 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:07.034832 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:07.042497 systemd-logind[1570]: New session 23 of user core. Jan 23 01:47:07.051093 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 01:47:07.516247 sshd[4391]: Connection closed by 20.161.92.111 port 37858 Jan 23 01:47:07.517197 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:07.522193 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:47:07.523831 systemd[1]: sshd@20-10.230.49.206:22-20.161.92.111:37858.service: Deactivated successfully. 
Jan 23 01:47:07.526299 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 01:47:07.528705 systemd-logind[1570]: Removed session 23. Jan 23 01:47:12.619793 systemd[1]: Started sshd@21-10.230.49.206:22-20.161.92.111:51756.service - OpenSSH per-connection server daemon (20.161.92.111:51756). Jan 23 01:47:13.202361 sshd[4402]: Accepted publickey for core from 20.161.92.111 port 51756 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:13.205271 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:13.212118 systemd-logind[1570]: New session 24 of user core. Jan 23 01:47:13.221125 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 01:47:13.693904 sshd[4407]: Connection closed by 20.161.92.111 port 51756 Jan 23 01:47:13.693236 sshd-session[4402]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:13.698468 systemd[1]: sshd@21-10.230.49.206:22-20.161.92.111:51756.service: Deactivated successfully. Jan 23 01:47:13.704424 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 01:47:13.709293 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit. Jan 23 01:47:13.711492 systemd-logind[1570]: Removed session 24. Jan 23 01:47:18.796213 systemd[1]: Started sshd@22-10.230.49.206:22-20.161.92.111:51764.service - OpenSSH per-connection server daemon (20.161.92.111:51764). Jan 23 01:47:19.369793 sshd[4421]: Accepted publickey for core from 20.161.92.111 port 51764 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:19.371509 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:19.379221 systemd-logind[1570]: New session 25 of user core. Jan 23 01:47:19.385062 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 01:47:19.849807 sshd[4424]: Connection closed by 20.161.92.111 port 51764 Jan 23 01:47:19.850657 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:19.855692 systemd[1]: sshd@22-10.230.49.206:22-20.161.92.111:51764.service: Deactivated successfully. Jan 23 01:47:19.859016 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 01:47:19.860409 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit. Jan 23 01:47:19.862399 systemd-logind[1570]: Removed session 25. Jan 23 01:47:24.956649 systemd[1]: Started sshd@23-10.230.49.206:22-20.161.92.111:46336.service - OpenSSH per-connection server daemon (20.161.92.111:46336). Jan 23 01:47:25.549820 sshd[4437]: Accepted publickey for core from 20.161.92.111 port 46336 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:25.551634 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:25.558220 systemd-logind[1570]: New session 26 of user core. Jan 23 01:47:25.567141 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 01:47:26.036531 sshd[4440]: Connection closed by 20.161.92.111 port 46336 Jan 23 01:47:26.037924 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:26.046366 systemd[1]: sshd@23-10.230.49.206:22-20.161.92.111:46336.service: Deactivated successfully. Jan 23 01:47:26.050674 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 01:47:26.052372 systemd-logind[1570]: Session 26 logged out. Waiting for processes to exit. Jan 23 01:47:26.054539 systemd-logind[1570]: Removed session 26. Jan 23 01:47:26.139211 systemd[1]: Started sshd@24-10.230.49.206:22-20.161.92.111:46344.service - OpenSSH per-connection server daemon (20.161.92.111:46344). 
Jan 23 01:47:26.722171 sshd[4452]: Accepted publickey for core from 20.161.92.111 port 46344 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:26.724027 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:26.732107 systemd-logind[1570]: New session 27 of user core. Jan 23 01:47:26.743175 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 01:47:28.679504 containerd[1598]: time="2026-01-23T01:47:28.678853249Z" level=info msg="StopContainer for \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\" with timeout 30 (s)" Jan 23 01:47:28.705006 containerd[1598]: time="2026-01-23T01:47:28.704930719Z" level=info msg="Stop container \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\" with signal terminated" Jan 23 01:47:28.758072 systemd[1]: cri-containerd-572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1.scope: Deactivated successfully. Jan 23 01:47:28.758576 systemd[1]: cri-containerd-572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1.scope: Consumed 518ms CPU time, 40.1M memory peak, 17M read from disk, 4K written to disk. 
Jan 23 01:47:28.765898 containerd[1598]: time="2026-01-23T01:47:28.765707445Z" level=info msg="received container exit event container_id:\"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\" id:\"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\" pid:3287 exited_at:{seconds:1769132848 nanos:764810519}" Jan 23 01:47:28.780289 containerd[1598]: time="2026-01-23T01:47:28.780208117Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:47:28.800400 containerd[1598]: time="2026-01-23T01:47:28.800331010Z" level=info msg="StopContainer for \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\" with timeout 2 (s)" Jan 23 01:47:28.803286 containerd[1598]: time="2026-01-23T01:47:28.802938587Z" level=info msg="Stop container \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\" with signal terminated" Jan 23 01:47:28.825901 systemd-networkd[1479]: lxc_health: Link DOWN Jan 23 01:47:28.829570 systemd-networkd[1479]: lxc_health: Lost carrier Jan 23 01:47:28.849113 systemd[1]: cri-containerd-139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878.scope: Deactivated successfully. Jan 23 01:47:28.849576 systemd[1]: cri-containerd-139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878.scope: Consumed 9.971s CPU time, 195.7M memory peak, 75.2M read from disk, 13.3M written to disk. 
Jan 23 01:47:28.853183 containerd[1598]: time="2026-01-23T01:47:28.853121269Z" level=info msg="received container exit event container_id:\"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\" id:\"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\" pid:3521 exited_at:{seconds:1769132848 nanos:851530037}"
Jan 23 01:47:28.859162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1-rootfs.mount: Deactivated successfully.
Jan 23 01:47:28.870650 containerd[1598]: time="2026-01-23T01:47:28.870603962Z" level=info msg="StopContainer for \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\" returns successfully"
Jan 23 01:47:28.872160 containerd[1598]: time="2026-01-23T01:47:28.872109666Z" level=info msg="StopPodSandbox for \"5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5\""
Jan 23 01:47:28.880452 containerd[1598]: time="2026-01-23T01:47:28.880415747Z" level=info msg="Container to stop \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:47:28.895806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878-rootfs.mount: Deactivated successfully.
Jan 23 01:47:28.904647 containerd[1598]: time="2026-01-23T01:47:28.904556285Z" level=info msg="StopContainer for \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\" returns successfully"
Jan 23 01:47:28.906145 containerd[1598]: time="2026-01-23T01:47:28.905857991Z" level=info msg="StopPodSandbox for \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\""
Jan 23 01:47:28.906145 containerd[1598]: time="2026-01-23T01:47:28.905985872Z" level=info msg="Container to stop \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:47:28.906145 containerd[1598]: time="2026-01-23T01:47:28.906009179Z" level=info msg="Container to stop \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:47:28.906145 containerd[1598]: time="2026-01-23T01:47:28.906023128Z" level=info msg="Container to stop \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:47:28.906145 containerd[1598]: time="2026-01-23T01:47:28.906036617Z" level=info msg="Container to stop \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:47:28.906145 containerd[1598]: time="2026-01-23T01:47:28.906050469Z" level=info msg="Container to stop \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 01:47:28.909058 systemd[1]: cri-containerd-5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5.scope: Deactivated successfully.
Jan 23 01:47:28.916820 containerd[1598]: time="2026-01-23T01:47:28.916717170Z" level=info msg="received sandbox exit event container_id:\"5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5\" id:\"5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5\" exit_status:137 exited_at:{seconds:1769132848 nanos:915927604}" monitor_name=podsandbox
Jan 23 01:47:28.920674 systemd[1]: cri-containerd-a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0.scope: Deactivated successfully.
Jan 23 01:47:28.928156 containerd[1598]: time="2026-01-23T01:47:28.928110580Z" level=info msg="received sandbox exit event container_id:\"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" id:\"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" exit_status:137 exited_at:{seconds:1769132848 nanos:927326391}" monitor_name=podsandbox
Jan 23 01:47:28.961049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5-rootfs.mount: Deactivated successfully.
Jan 23 01:47:28.966584 containerd[1598]: time="2026-01-23T01:47:28.966209705Z" level=info msg="shim disconnected" id=5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5 namespace=k8s.io
Jan 23 01:47:28.966584 containerd[1598]: time="2026-01-23T01:47:28.966258839Z" level=warning msg="cleaning up after shim disconnected" id=5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5 namespace=k8s.io
Jan 23 01:47:28.977989 containerd[1598]: time="2026-01-23T01:47:28.966281644Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 01:47:28.977456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0-rootfs.mount: Deactivated successfully.
Jan 23 01:47:28.981226 containerd[1598]: time="2026-01-23T01:47:28.981010169Z" level=info msg="shim disconnected" id=a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0 namespace=k8s.io
Jan 23 01:47:28.981226 containerd[1598]: time="2026-01-23T01:47:28.981044550Z" level=warning msg="cleaning up after shim disconnected" id=a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0 namespace=k8s.io
Jan 23 01:47:28.981226 containerd[1598]: time="2026-01-23T01:47:28.981057259Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 01:47:29.033936 containerd[1598]: time="2026-01-23T01:47:29.032048992Z" level=info msg="received sandbox container exit event sandbox_id:\"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" exit_status:137 exited_at:{seconds:1769132848 nanos:927326391}" monitor_name=criService
Jan 23 01:47:29.033936 containerd[1598]: time="2026-01-23T01:47:29.032979810Z" level=info msg="received sandbox container exit event sandbox_id:\"5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5\" exit_status:137 exited_at:{seconds:1769132848 nanos:915927604}" monitor_name=criService
Jan 23 01:47:29.035086 containerd[1598]: time="2026-01-23T01:47:29.035056639Z" level=info msg="TearDown network for sandbox \"5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5\" successfully"
Jan 23 01:47:29.035559 containerd[1598]: time="2026-01-23T01:47:29.035241081Z" level=info msg="StopPodSandbox for \"5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5\" returns successfully"
Jan 23 01:47:29.038354 containerd[1598]: time="2026-01-23T01:47:29.038303930Z" level=info msg="TearDown network for sandbox \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" successfully"
Jan 23 01:47:29.038692 containerd[1598]: time="2026-01-23T01:47:29.038614359Z" level=info msg="StopPodSandbox for \"a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0\" returns successfully"
Jan 23 01:47:29.038904 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fe54fdba69f64a9c716e16ae5be9e22b5e6d566579ccd801e455edd59df18a5-shm.mount: Deactivated successfully.
Jan 23 01:47:29.111173 kubelet[2878]: I0123 01:47:29.111108 2878 scope.go:117] "RemoveContainer" containerID="572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1"
Jan 23 01:47:29.115068 containerd[1598]: time="2026-01-23T01:47:29.113814063Z" level=info msg="RemoveContainer for \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\""
Jan 23 01:47:29.128401 containerd[1598]: time="2026-01-23T01:47:29.128354527Z" level=info msg="RemoveContainer for \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\" returns successfully"
Jan 23 01:47:29.128725 kubelet[2878]: I0123 01:47:29.128694 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0afec68-e636-4b58-b93e-84e1fa6c7559-cilium-config-path\") pod \"c0afec68-e636-4b58-b93e-84e1fa6c7559\" (UID: \"c0afec68-e636-4b58-b93e-84e1fa6c7559\") "
Jan 23 01:47:29.128930 kubelet[2878]: I0123 01:47:29.128753 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj9rw\" (UniqueName: \"kubernetes.io/projected/c0afec68-e636-4b58-b93e-84e1fa6c7559-kube-api-access-kj9rw\") pod \"c0afec68-e636-4b58-b93e-84e1fa6c7559\" (UID: \"c0afec68-e636-4b58-b93e-84e1fa6c7559\") "
Jan 23 01:47:29.129121 kubelet[2878]: I0123 01:47:29.129038 2878 scope.go:117] "RemoveContainer" containerID="572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1"
Jan 23 01:47:29.129908 containerd[1598]: time="2026-01-23T01:47:29.129703613Z" level=error msg="ContainerStatus for \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\": not found"
Jan 23 01:47:29.135938 kubelet[2878]: I0123 01:47:29.135161 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0afec68-e636-4b58-b93e-84e1fa6c7559-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0afec68-e636-4b58-b93e-84e1fa6c7559" (UID: "c0afec68-e636-4b58-b93e-84e1fa6c7559"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 01:47:29.135938 kubelet[2878]: E0123 01:47:29.135344 2878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\": not found" containerID="572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1"
Jan 23 01:47:29.135938 kubelet[2878]: I0123 01:47:29.135396 2878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1"} err="failed to get container status \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\": rpc error: code = NotFound desc = an error occurred when try to find container \"572da35c2dfad3aeec378c6d1677febf0c15b356d54f7dbeb1aa2c8aa326aae1\": not found"
Jan 23 01:47:29.135938 kubelet[2878]: I0123 01:47:29.135520 2878 scope.go:117] "RemoveContainer" containerID="139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878"
Jan 23 01:47:29.142428 kubelet[2878]: I0123 01:47:29.142362 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0afec68-e636-4b58-b93e-84e1fa6c7559-kube-api-access-kj9rw" (OuterVolumeSpecName: "kube-api-access-kj9rw") pod "c0afec68-e636-4b58-b93e-84e1fa6c7559" (UID: "c0afec68-e636-4b58-b93e-84e1fa6c7559"). InnerVolumeSpecName "kube-api-access-kj9rw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:47:29.142811 containerd[1598]: time="2026-01-23T01:47:29.142769022Z" level=info msg="RemoveContainer for \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\""
Jan 23 01:47:29.150328 containerd[1598]: time="2026-01-23T01:47:29.150286490Z" level=info msg="RemoveContainer for \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\" returns successfully"
Jan 23 01:47:29.150864 kubelet[2878]: I0123 01:47:29.150825 2878 scope.go:117] "RemoveContainer" containerID="80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471"
Jan 23 01:47:29.152917 containerd[1598]: time="2026-01-23T01:47:29.152850321Z" level=info msg="RemoveContainer for \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\""
Jan 23 01:47:29.157518 containerd[1598]: time="2026-01-23T01:47:29.157477619Z" level=info msg="RemoveContainer for \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\" returns successfully"
Jan 23 01:47:29.157814 kubelet[2878]: I0123 01:47:29.157739 2878 scope.go:117] "RemoveContainer" containerID="ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef"
Jan 23 01:47:29.161382 containerd[1598]: time="2026-01-23T01:47:29.161329857Z" level=info msg="RemoveContainer for \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\""
Jan 23 01:47:29.167028 containerd[1598]: time="2026-01-23T01:47:29.166992373Z" level=info msg="RemoveContainer for \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\" returns successfully"
Jan 23 01:47:29.167355 kubelet[2878]: I0123 01:47:29.167203 2878 scope.go:117] "RemoveContainer" containerID="7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e"
Jan 23 01:47:29.170402 containerd[1598]: time="2026-01-23T01:47:29.170351118Z" level=info msg="RemoveContainer for \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\""
Jan 23 01:47:29.176231 containerd[1598]: time="2026-01-23T01:47:29.176190864Z" level=info msg="RemoveContainer for \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\" returns successfully"
Jan 23 01:47:29.179177 kubelet[2878]: I0123 01:47:29.178856 2878 scope.go:117] "RemoveContainer" containerID="8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7"
Jan 23 01:47:29.182836 containerd[1598]: time="2026-01-23T01:47:29.182772686Z" level=info msg="RemoveContainer for \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\""
Jan 23 01:47:29.188354 containerd[1598]: time="2026-01-23T01:47:29.188310886Z" level=info msg="RemoveContainer for \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\" returns successfully"
Jan 23 01:47:29.189035 kubelet[2878]: I0123 01:47:29.188999 2878 scope.go:117] "RemoveContainer" containerID="139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878"
Jan 23 01:47:29.190045 containerd[1598]: time="2026-01-23T01:47:29.189493976Z" level=error msg="ContainerStatus for \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\": not found"
Jan 23 01:47:29.190741 kubelet[2878]: E0123 01:47:29.190450 2878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\": not found" containerID="139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878"
Jan 23 01:47:29.190741 kubelet[2878]: I0123 01:47:29.190490 2878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878"} err="failed to get container status \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\": rpc error: code = NotFound desc = an error occurred when try to find container \"139eabe557484457083d685450b0890526dfc623aa6054dda31c73c1e39fb878\": not found"
Jan 23 01:47:29.190741 kubelet[2878]: I0123 01:47:29.190523 2878 scope.go:117] "RemoveContainer" containerID="80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471"
Jan 23 01:47:29.191505 containerd[1598]: time="2026-01-23T01:47:29.191470127Z" level=error msg="ContainerStatus for \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\": not found"
Jan 23 01:47:29.192310 kubelet[2878]: E0123 01:47:29.192157 2878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\": not found" containerID="80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471"
Jan 23 01:47:29.192675 kubelet[2878]: I0123 01:47:29.192528 2878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471"} err="failed to get container status \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\": rpc error: code = NotFound desc = an error occurred when try to find container \"80f651ea66b4bd4e26a97ff9efddd8fd28470e0e7d1f46fa06dfcb2c6606d471\": not found"
Jan 23 01:47:29.192675 kubelet[2878]: I0123 01:47:29.192562 2878 scope.go:117] "RemoveContainer" containerID="ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef"
Jan 23 01:47:29.193208 containerd[1598]: time="2026-01-23T01:47:29.193153408Z" level=error msg="ContainerStatus for \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\": not found"
Jan 23 01:47:29.193603 kubelet[2878]: E0123 01:47:29.193446 2878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\": not found" containerID="ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef"
Jan 23 01:47:29.193603 kubelet[2878]: I0123 01:47:29.193501 2878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef"} err="failed to get container status \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac1aaccec33f849936de2c169dc1898093a5715477c468cce4f7643edd50bcef\": not found"
Jan 23 01:47:29.193603 kubelet[2878]: I0123 01:47:29.193523 2878 scope.go:117] "RemoveContainer" containerID="7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e"
Jan 23 01:47:29.194054 containerd[1598]: time="2026-01-23T01:47:29.193851800Z" level=error msg="ContainerStatus for \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\": not found"
Jan 23 01:47:29.194351 kubelet[2878]: E0123 01:47:29.194318 2878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\": not found" containerID="7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e"
Jan 23 01:47:29.194569 kubelet[2878]: I0123 01:47:29.194461 2878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e"} err="failed to get container status \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a13e430be24cd40c459bca5fd9979998bf90f5ce2b6fba78a99d890a1a9a13e\": not found"
Jan 23 01:47:29.194569 kubelet[2878]: I0123 01:47:29.194488 2878 scope.go:117] "RemoveContainer" containerID="8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7"
Jan 23 01:47:29.194811 containerd[1598]: time="2026-01-23T01:47:29.194777033Z" level=error msg="ContainerStatus for \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\": not found"
Jan 23 01:47:29.195220 kubelet[2878]: E0123 01:47:29.195175 2878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\": not found" containerID="8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7"
Jan 23 01:47:29.195401 kubelet[2878]: I0123 01:47:29.195361 2878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7"} err="failed to get container status \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d55038090190e5c9434be257198d233c8dbccdef4d413b71e3a1bec685a9af7\": not found"
Jan 23 01:47:29.233034 kubelet[2878]: I0123 01:47:29.229769 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-xtables-lock\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233034 kubelet[2878]: I0123 01:47:29.231325 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cni-path\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233034 kubelet[2878]: I0123 01:47:29.230052 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.233034 kubelet[2878]: I0123 01:47:29.231371 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-host-proc-sys-kernel\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233034 kubelet[2878]: I0123 01:47:29.231416 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-host-proc-sys-net\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233034 kubelet[2878]: I0123 01:47:29.231463 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-config-path\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233431 kubelet[2878]: I0123 01:47:29.231475 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cni-path" (OuterVolumeSpecName: "cni-path") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.233431 kubelet[2878]: I0123 01:47:29.231498 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-clustermesh-secrets\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233431 kubelet[2878]: I0123 01:47:29.231515 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.233431 kubelet[2878]: I0123 01:47:29.231568 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.233431 kubelet[2878]: I0123 01:47:29.231534 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-etc-cni-netd\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233660 kubelet[2878]: I0123 01:47:29.231616 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.233660 kubelet[2878]: I0123 01:47:29.231631 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-run\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233660 kubelet[2878]: I0123 01:47:29.231668 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-lib-modules\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233660 kubelet[2878]: I0123 01:47:29.231694 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.233660 kubelet[2878]: I0123 01:47:29.231731 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-hubble-tls\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233888 kubelet[2878]: I0123 01:47:29.231810 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4rbc\" (UniqueName: \"kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-kube-api-access-n4rbc\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233888 kubelet[2878]: I0123 01:47:29.231999 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-hostproc\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233888 kubelet[2878]: I0123 01:47:29.232032 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-bpf-maps\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233888 kubelet[2878]: I0123 01:47:29.232092 2878 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-cgroup\") pod \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\" (UID: \"52d6e21d-0f28-4e55-b197-8ac55e09b9ac\") "
Jan 23 01:47:29.233888 kubelet[2878]: I0123 01:47:29.232199 2878 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-host-proc-sys-kernel\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.233888 kubelet[2878]: I0123 01:47:29.232269 2878 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kj9rw\" (UniqueName: \"kubernetes.io/projected/c0afec68-e636-4b58-b93e-84e1fa6c7559-kube-api-access-kj9rw\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.233888 kubelet[2878]: I0123 01:47:29.232302 2878 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-etc-cni-netd\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.234192 kubelet[2878]: I0123 01:47:29.232320 2878 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-host-proc-sys-net\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.234192 kubelet[2878]: I0123 01:47:29.232366 2878 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-lib-modules\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.234192 kubelet[2878]: I0123 01:47:29.232385 2878 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0afec68-e636-4b58-b93e-84e1fa6c7559-cilium-config-path\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.234192 kubelet[2878]: I0123 01:47:29.232398 2878 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-xtables-lock\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.234192 kubelet[2878]: I0123 01:47:29.232424 2878 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cni-path\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.234192 kubelet[2878]: I0123 01:47:29.232504 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.235076 kubelet[2878]: I0123 01:47:29.235050 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.237557 kubelet[2878]: I0123 01:47:29.237489 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-hostproc" (OuterVolumeSpecName: "hostproc") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.237654 kubelet[2878]: I0123 01:47:29.237576 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 01:47:29.239899 kubelet[2878]: I0123 01:47:29.239808 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:47:29.240847 kubelet[2878]: I0123 01:47:29.240821 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 01:47:29.242765 kubelet[2878]: I0123 01:47:29.242738 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 01:47:29.251743 kubelet[2878]: I0123 01:47:29.251676 2878 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-kube-api-access-n4rbc" (OuterVolumeSpecName: "kube-api-access-n4rbc") pod "52d6e21d-0f28-4e55-b197-8ac55e09b9ac" (UID: "52d6e21d-0f28-4e55-b197-8ac55e09b9ac"). InnerVolumeSpecName "kube-api-access-n4rbc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 01:47:29.333403 kubelet[2878]: I0123 01:47:29.333096 2878 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-clustermesh-secrets\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.333403 kubelet[2878]: I0123 01:47:29.333145 2878 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-config-path\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.333403 kubelet[2878]: I0123 01:47:29.333162 2878 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-run\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.333403 kubelet[2878]: I0123 01:47:29.333176 2878 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-hostproc\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.333403 kubelet[2878]: I0123 01:47:29.333191 2878 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-bpf-maps\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.333403 kubelet[2878]: I0123 01:47:29.333205 2878 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-hubble-tls\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\""
Jan 23 01:47:29.333403 kubelet[2878]: I0123 01:47:29.333227 2878 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4rbc\" (UniqueName:
\"kubernetes.io/projected/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-kube-api-access-n4rbc\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\"" Jan 23 01:47:29.333403 kubelet[2878]: I0123 01:47:29.333242 2878 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52d6e21d-0f28-4e55-b197-8ac55e09b9ac-cilium-cgroup\") on node \"srv-idwud.gb1.brightbox.com\" DevicePath \"\"" Jan 23 01:47:29.417293 systemd[1]: Removed slice kubepods-besteffort-podc0afec68_e636_4b58_b93e_84e1fa6c7559.slice - libcontainer container kubepods-besteffort-podc0afec68_e636_4b58_b93e_84e1fa6c7559.slice. Jan 23 01:47:29.417957 systemd[1]: kubepods-besteffort-podc0afec68_e636_4b58_b93e_84e1fa6c7559.slice: Consumed 559ms CPU time, 40.4M memory peak, 17M read from disk, 4K written to disk. Jan 23 01:47:29.429496 systemd[1]: Removed slice kubepods-burstable-pod52d6e21d_0f28_4e55_b197_8ac55e09b9ac.slice - libcontainer container kubepods-burstable-pod52d6e21d_0f28_4e55_b197_8ac55e09b9ac.slice. Jan 23 01:47:29.429961 systemd[1]: kubepods-burstable-pod52d6e21d_0f28_4e55_b197_8ac55e09b9ac.slice: Consumed 10.123s CPU time, 196.2M memory peak, 77.5M read from disk, 13.3M written to disk. Jan 23 01:47:29.753522 kubelet[2878]: E0123 01:47:29.753450 2878 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 01:47:29.857504 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a75f476b7cdf49f02fe142a901a0bddb8259a125abe5a57db7a651e2e13d43d0-shm.mount: Deactivated successfully. Jan 23 01:47:29.857651 systemd[1]: var-lib-kubelet-pods-52d6e21d\x2d0f28\x2d4e55\x2db197\x2d8ac55e09b9ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn4rbc.mount: Deactivated successfully. 
Jan 23 01:47:29.857775 systemd[1]: var-lib-kubelet-pods-c0afec68\x2de636\x2d4b58\x2db93e\x2d84e1fa6c7559-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkj9rw.mount: Deactivated successfully. Jan 23 01:47:29.857886 systemd[1]: var-lib-kubelet-pods-52d6e21d\x2d0f28\x2d4e55\x2db197\x2d8ac55e09b9ac-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 01:47:29.858034 systemd[1]: var-lib-kubelet-pods-52d6e21d\x2d0f28\x2d4e55\x2db197\x2d8ac55e09b9ac-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 01:47:30.616729 kubelet[2878]: I0123 01:47:30.616639 2878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d6e21d-0f28-4e55-b197-8ac55e09b9ac" path="/var/lib/kubelet/pods/52d6e21d-0f28-4e55-b197-8ac55e09b9ac/volumes" Jan 23 01:47:30.618936 kubelet[2878]: I0123 01:47:30.617915 2878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0afec68-e636-4b58-b93e-84e1fa6c7559" path="/var/lib/kubelet/pods/c0afec68-e636-4b58-b93e-84e1fa6c7559/volumes" Jan 23 01:47:30.679350 sshd[4455]: Connection closed by 20.161.92.111 port 46344 Jan 23 01:47:30.679842 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:30.686836 systemd[1]: sshd@24-10.230.49.206:22-20.161.92.111:46344.service: Deactivated successfully. Jan 23 01:47:30.690511 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 01:47:30.693027 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit. Jan 23 01:47:30.695903 systemd-logind[1570]: Removed session 27. Jan 23 01:47:30.785487 systemd[1]: Started sshd@25-10.230.49.206:22-20.161.92.111:46358.service - OpenSSH per-connection server daemon (20.161.92.111:46358). 
Jan 23 01:47:31.375260 sshd[4599]: Accepted publickey for core from 20.161.92.111 port 46358 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:31.377630 sshd-session[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:31.386260 systemd-logind[1570]: New session 28 of user core. Jan 23 01:47:31.390070 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 23 01:47:32.246527 kubelet[2878]: I0123 01:47:32.246446 2878 memory_manager.go:355] "RemoveStaleState removing state" podUID="c0afec68-e636-4b58-b93e-84e1fa6c7559" containerName="cilium-operator" Jan 23 01:47:32.246527 kubelet[2878]: I0123 01:47:32.246505 2878 memory_manager.go:355] "RemoveStaleState removing state" podUID="52d6e21d-0f28-4e55-b197-8ac55e09b9ac" containerName="cilium-agent" Jan 23 01:47:32.252959 sshd[4602]: Connection closed by 20.161.92.111 port 46358 Jan 23 01:47:32.253874 sshd-session[4599]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:32.265803 systemd[1]: sshd@25-10.230.49.206:22-20.161.92.111:46358.service: Deactivated successfully. Jan 23 01:47:32.271046 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 01:47:32.277017 systemd-logind[1570]: Session 28 logged out. Waiting for processes to exit. Jan 23 01:47:32.278661 systemd[1]: Created slice kubepods-burstable-pod85fbd155_2333_4a39_ae69_dc4c1ab85fca.slice - libcontainer container kubepods-burstable-pod85fbd155_2333_4a39_ae69_dc4c1ab85fca.slice. Jan 23 01:47:32.285506 systemd-logind[1570]: Removed session 28. 
Jan 23 01:47:32.355203 kubelet[2878]: I0123 01:47:32.354137 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-cilium-run\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.355203 kubelet[2878]: I0123 01:47:32.354227 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85fbd155-2333-4a39-ae69-dc4c1ab85fca-clustermesh-secrets\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.355203 kubelet[2878]: I0123 01:47:32.354256 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/85fbd155-2333-4a39-ae69-dc4c1ab85fca-cilium-ipsec-secrets\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.355203 kubelet[2878]: I0123 01:47:32.354321 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpq5z\" (UniqueName: \"kubernetes.io/projected/85fbd155-2333-4a39-ae69-dc4c1ab85fca-kube-api-access-lpq5z\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.355203 kubelet[2878]: I0123 01:47:32.354355 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85fbd155-2333-4a39-ae69-dc4c1ab85fca-hubble-tls\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.355203 kubelet[2878]: I0123 01:47:32.354415 2878 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-etc-cni-netd\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.356101 kubelet[2878]: I0123 01:47:32.354441 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-xtables-lock\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.356101 kubelet[2878]: I0123 01:47:32.354502 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-bpf-maps\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.356101 kubelet[2878]: I0123 01:47:32.354528 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85fbd155-2333-4a39-ae69-dc4c1ab85fca-cilium-config-path\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.356101 kubelet[2878]: I0123 01:47:32.354713 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-cni-path\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.356101 kubelet[2878]: I0123 01:47:32.355077 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-lib-modules\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.357095 kubelet[2878]: I0123 01:47:32.355150 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-host-proc-sys-kernel\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.357095 kubelet[2878]: I0123 01:47:32.356663 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-hostproc\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.357095 kubelet[2878]: I0123 01:47:32.356691 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-cilium-cgroup\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.357095 kubelet[2878]: I0123 01:47:32.356719 2878 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85fbd155-2333-4a39-ae69-dc4c1ab85fca-host-proc-sys-net\") pod \"cilium-695nl\" (UID: \"85fbd155-2333-4a39-ae69-dc4c1ab85fca\") " pod="kube-system/cilium-695nl" Jan 23 01:47:32.358196 systemd[1]: Started sshd@26-10.230.49.206:22-20.161.92.111:46370.service - OpenSSH per-connection server daemon (20.161.92.111:46370). 
Jan 23 01:47:32.589544 containerd[1598]: time="2026-01-23T01:47:32.589356433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-695nl,Uid:85fbd155-2333-4a39-ae69-dc4c1ab85fca,Namespace:kube-system,Attempt:0,}" Jan 23 01:47:32.616164 containerd[1598]: time="2026-01-23T01:47:32.616083124Z" level=info msg="connecting to shim c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b" address="unix:///run/containerd/s/bdc0669f4e9320e49f77a34ca7606981c4dfcd1d56a55e254d93fcfc1c23ecb4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:47:32.663172 systemd[1]: Started cri-containerd-c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b.scope - libcontainer container c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b. Jan 23 01:47:32.715991 containerd[1598]: time="2026-01-23T01:47:32.715930838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-695nl,Uid:85fbd155-2333-4a39-ae69-dc4c1ab85fca,Namespace:kube-system,Attempt:0,} returns sandbox id \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\"" Jan 23 01:47:32.721484 containerd[1598]: time="2026-01-23T01:47:32.721372621Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:47:32.733380 containerd[1598]: time="2026-01-23T01:47:32.733334176Z" level=info msg="Container efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:47:32.741157 containerd[1598]: time="2026-01-23T01:47:32.741124231Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242\"" Jan 23 01:47:32.742483 containerd[1598]: time="2026-01-23T01:47:32.742450833Z" level=info 
msg="StartContainer for \"efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242\"" Jan 23 01:47:32.745375 containerd[1598]: time="2026-01-23T01:47:32.745339735Z" level=info msg="connecting to shim efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242" address="unix:///run/containerd/s/bdc0669f4e9320e49f77a34ca7606981c4dfcd1d56a55e254d93fcfc1c23ecb4" protocol=ttrpc version=3 Jan 23 01:47:32.780179 systemd[1]: Started cri-containerd-efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242.scope - libcontainer container efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242. Jan 23 01:47:32.830499 containerd[1598]: time="2026-01-23T01:47:32.830400754Z" level=info msg="StartContainer for \"efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242\" returns successfully" Jan 23 01:47:32.846041 systemd[1]: cri-containerd-efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242.scope: Deactivated successfully. Jan 23 01:47:32.846483 systemd[1]: cri-containerd-efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242.scope: Consumed 33ms CPU time, 9.8M memory peak, 3.3M read from disk. Jan 23 01:47:32.848659 containerd[1598]: time="2026-01-23T01:47:32.848606384Z" level=info msg="received container exit event container_id:\"efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242\" id:\"efbf88946536d4dd5411ac0eff3e058dd52fe583bfafa4fb9d3ba1b0746b9242\" pid:4677 exited_at:{seconds:1769132852 nanos:848186663}" Jan 23 01:47:32.952527 sshd[4612]: Accepted publickey for core from 20.161.92.111 port 46370 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:32.954531 sshd-session[4612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:32.963480 systemd-logind[1570]: New session 29 of user core. Jan 23 01:47:32.978163 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 23 01:47:33.147010 containerd[1598]: time="2026-01-23T01:47:33.145930985Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:47:33.155485 containerd[1598]: time="2026-01-23T01:47:33.155295184Z" level=info msg="Container 4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:47:33.164967 containerd[1598]: time="2026-01-23T01:47:33.164925903Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c\"" Jan 23 01:47:33.167258 containerd[1598]: time="2026-01-23T01:47:33.167211419Z" level=info msg="StartContainer for \"4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c\"" Jan 23 01:47:33.173803 containerd[1598]: time="2026-01-23T01:47:33.173681973Z" level=info msg="connecting to shim 4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c" address="unix:///run/containerd/s/bdc0669f4e9320e49f77a34ca7606981c4dfcd1d56a55e254d93fcfc1c23ecb4" protocol=ttrpc version=3 Jan 23 01:47:33.205160 systemd[1]: Started cri-containerd-4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c.scope - libcontainer container 4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c. Jan 23 01:47:33.253227 containerd[1598]: time="2026-01-23T01:47:33.253175743Z" level=info msg="StartContainer for \"4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c\" returns successfully" Jan 23 01:47:33.267648 systemd[1]: cri-containerd-4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c.scope: Deactivated successfully. 
Jan 23 01:47:33.268820 systemd[1]: cri-containerd-4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c.scope: Consumed 30ms CPU time, 7.5M memory peak, 2.2M read from disk. Jan 23 01:47:33.270207 containerd[1598]: time="2026-01-23T01:47:33.269840417Z" level=info msg="received container exit event container_id:\"4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c\" id:\"4d3a2ba2fb96d0dadbadc633a6f8ed1f7d16da3e36083f98520be5bc8a88aa8c\" pid:4723 exited_at:{seconds:1769132853 nanos:268346362}" Jan 23 01:47:33.354939 sshd[4709]: Connection closed by 20.161.92.111 port 46370 Jan 23 01:47:33.355906 sshd-session[4612]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:33.361702 systemd[1]: sshd@26-10.230.49.206:22-20.161.92.111:46370.service: Deactivated successfully. Jan 23 01:47:33.364319 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 01:47:33.366369 systemd-logind[1570]: Session 29 logged out. Waiting for processes to exit. Jan 23 01:47:33.368123 systemd-logind[1570]: Removed session 29. Jan 23 01:47:33.455386 systemd[1]: Started sshd@27-10.230.49.206:22-20.161.92.111:35854.service - OpenSSH per-connection server daemon (20.161.92.111:35854). Jan 23 01:47:34.031805 sshd[4759]: Accepted publickey for core from 20.161.92.111 port 35854 ssh2: RSA SHA256:xmrNBcMUQ4BtQS9j1UUOUiqyKYQLzqyjjm/JHkHV2R8 Jan 23 01:47:34.033631 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:47:34.041751 systemd-logind[1570]: New session 30 of user core. Jan 23 01:47:34.051097 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 23 01:47:34.146216 containerd[1598]: time="2026-01-23T01:47:34.146146057Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:47:34.167169 containerd[1598]: time="2026-01-23T01:47:34.167020996Z" level=info msg="Container 6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:47:34.176793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220373052.mount: Deactivated successfully. Jan 23 01:47:34.187338 containerd[1598]: time="2026-01-23T01:47:34.187225942Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06\"" Jan 23 01:47:34.189003 containerd[1598]: time="2026-01-23T01:47:34.188480058Z" level=info msg="StartContainer for \"6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06\"" Jan 23 01:47:34.191761 containerd[1598]: time="2026-01-23T01:47:34.191728648Z" level=info msg="connecting to shim 6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06" address="unix:///run/containerd/s/bdc0669f4e9320e49f77a34ca7606981c4dfcd1d56a55e254d93fcfc1c23ecb4" protocol=ttrpc version=3 Jan 23 01:47:34.223151 systemd[1]: Started cri-containerd-6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06.scope - libcontainer container 6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06. 
Jan 23 01:47:34.339930 containerd[1598]: time="2026-01-23T01:47:34.339035819Z" level=info msg="StartContainer for \"6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06\" returns successfully" Jan 23 01:47:34.347715 systemd[1]: cri-containerd-6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06.scope: Deactivated successfully. Jan 23 01:47:34.351534 containerd[1598]: time="2026-01-23T01:47:34.351233123Z" level=info msg="received container exit event container_id:\"6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06\" id:\"6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06\" pid:4775 exited_at:{seconds:1769132854 nanos:351015340}" Jan 23 01:47:34.391924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d6f4776a7773f19e5ca63d03793714ab8e9837e06e4f753056f92836787cd06-rootfs.mount: Deactivated successfully. Jan 23 01:47:34.755440 kubelet[2878]: E0123 01:47:34.755376 2878 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 01:47:35.155763 containerd[1598]: time="2026-01-23T01:47:35.155315595Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:47:35.170575 containerd[1598]: time="2026-01-23T01:47:35.170520790Z" level=info msg="Container 09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:47:35.182278 containerd[1598]: time="2026-01-23T01:47:35.182159368Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0\"" Jan 23 01:47:35.183060 containerd[1598]: 
time="2026-01-23T01:47:35.183013897Z" level=info msg="StartContainer for \"09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0\"" Jan 23 01:47:35.186080 containerd[1598]: time="2026-01-23T01:47:35.186019356Z" level=info msg="connecting to shim 09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0" address="unix:///run/containerd/s/bdc0669f4e9320e49f77a34ca7606981c4dfcd1d56a55e254d93fcfc1c23ecb4" protocol=ttrpc version=3 Jan 23 01:47:35.225132 systemd[1]: Started cri-containerd-09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0.scope - libcontainer container 09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0. Jan 23 01:47:35.280666 systemd[1]: cri-containerd-09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0.scope: Deactivated successfully. Jan 23 01:47:35.283744 containerd[1598]: time="2026-01-23T01:47:35.283502248Z" level=info msg="received container exit event container_id:\"09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0\" id:\"09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0\" pid:4823 exited_at:{seconds:1769132855 nanos:282752433}" Jan 23 01:47:35.301265 containerd[1598]: time="2026-01-23T01:47:35.301210308Z" level=info msg="StartContainer for \"09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0\" returns successfully" Jan 23 01:47:36.165330 containerd[1598]: time="2026-01-23T01:47:36.165213537Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 01:47:36.170122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09a12ca8f5eab173598175fd78be066a73add511aec7a6846653deedb7d07fb0-rootfs.mount: Deactivated successfully. 
Jan 23 01:47:36.183509 containerd[1598]: time="2026-01-23T01:47:36.183434350Z" level=info msg="Container f209aecc2c7ad698ff34f973482516ecb3ccfee9a3021de826e80878bfdd5863: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:47:36.195454 containerd[1598]: time="2026-01-23T01:47:36.195321899Z" level=info msg="CreateContainer within sandbox \"c95c3e7da89b24b9e1e26e52e7309b454d77c5978623eaa0f8573dbbb01f209b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f209aecc2c7ad698ff34f973482516ecb3ccfee9a3021de826e80878bfdd5863\"" Jan 23 01:47:36.197369 containerd[1598]: time="2026-01-23T01:47:36.197338507Z" level=info msg="StartContainer for \"f209aecc2c7ad698ff34f973482516ecb3ccfee9a3021de826e80878bfdd5863\"" Jan 23 01:47:36.201904 containerd[1598]: time="2026-01-23T01:47:36.201753685Z" level=info msg="connecting to shim f209aecc2c7ad698ff34f973482516ecb3ccfee9a3021de826e80878bfdd5863" address="unix:///run/containerd/s/bdc0669f4e9320e49f77a34ca7606981c4dfcd1d56a55e254d93fcfc1c23ecb4" protocol=ttrpc version=3 Jan 23 01:47:36.232092 systemd[1]: Started cri-containerd-f209aecc2c7ad698ff34f973482516ecb3ccfee9a3021de826e80878bfdd5863.scope - libcontainer container f209aecc2c7ad698ff34f973482516ecb3ccfee9a3021de826e80878bfdd5863. 
Jan 23 01:47:36.315851 containerd[1598]: time="2026-01-23T01:47:36.315689745Z" level=info msg="StartContainer for \"f209aecc2c7ad698ff34f973482516ecb3ccfee9a3021de826e80878bfdd5863\" returns successfully" Jan 23 01:47:37.069487 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jan 23 01:47:37.193176 kubelet[2878]: I0123 01:47:37.193088 2878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-695nl" podStartSLOduration=5.193052112 podStartE2EDuration="5.193052112s" podCreationTimestamp="2026-01-23 01:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:47:37.191320008 +0000 UTC m=+142.761583210" watchObservedRunningTime="2026-01-23 01:47:37.193052112 +0000 UTC m=+142.763315303" Jan 23 01:47:37.715928 kubelet[2878]: I0123 01:47:37.715762 2878 setters.go:602] "Node became not ready" node="srv-idwud.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T01:47:37Z","lastTransitionTime":"2026-01-23T01:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 01:47:40.769846 systemd-networkd[1479]: lxc_health: Link UP Jan 23 01:47:40.772127 systemd-networkd[1479]: lxc_health: Gained carrier Jan 23 01:47:42.349061 systemd-networkd[1479]: lxc_health: Gained IPv6LL Jan 23 01:47:43.568917 kubelet[2878]: E0123 01:47:43.568457 2878 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54294->127.0.0.1:45533: write tcp 127.0.0.1:54294->127.0.0.1:45533: write: connection reset by peer Jan 23 01:47:45.877948 kubelet[2878]: E0123 01:47:45.877618 2878 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54300->127.0.0.1:45533: write tcp 
127.0.0.1:54300->127.0.0.1:45533: write: broken pipe Jan 23 01:47:48.146965 sshd[4762]: Connection closed by 20.161.92.111 port 35854 Jan 23 01:47:48.148691 sshd-session[4759]: pam_unix(sshd:session): session closed for user core Jan 23 01:47:48.155759 systemd[1]: sshd@27-10.230.49.206:22-20.161.92.111:35854.service: Deactivated successfully. Jan 23 01:47:48.158820 systemd[1]: session-30.scope: Deactivated successfully. Jan 23 01:47:48.160832 systemd-logind[1570]: Session 30 logged out. Waiting for processes to exit. Jan 23 01:47:48.163059 systemd-logind[1570]: Removed session 30. Jan 23 01:47:48.924727 update_engine[1574]: I20260123 01:47:48.924423 1574 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 01:47:48.924727 update_engine[1574]: I20260123 01:47:48.924590 1574 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 01:47:48.931909 update_engine[1574]: I20260123 01:47:48.931593 1574 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 01:47:48.932915 update_engine[1574]: I20260123 01:47:48.932855 1574 omaha_request_params.cc:62] Current group set to stable Jan 23 01:47:48.933312 update_engine[1574]: I20260123 01:47:48.933284 1574 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 01:47:48.934946 update_engine[1574]: I20260123 01:47:48.933671 1574 update_attempter.cc:643] Scheduling an action processor start. 
Jan 23 01:47:48.934946 update_engine[1574]: I20260123 01:47:48.933729 1574 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 01:47:48.934946 update_engine[1574]: I20260123 01:47:48.933812 1574 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 01:47:48.934946 update_engine[1574]: I20260123 01:47:48.933974 1574 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 01:47:48.934946 update_engine[1574]: I20260123 01:47:48.933992 1574 omaha_request_action.cc:272] Request: Jan 23 01:47:48.934946 update_engine[1574]: Jan 23 01:47:48.934946 update_engine[1574]: Jan 23 01:47:48.934946 update_engine[1574]: Jan 23 01:47:48.934946 update_engine[1574]: Jan 23 01:47:48.934946 update_engine[1574]: Jan 23 01:47:48.934946 update_engine[1574]: Jan 23 01:47:48.934946 update_engine[1574]: Jan 23 01:47:48.934946 update_engine[1574]: Jan 23 01:47:48.934946 update_engine[1574]: I20260123 01:47:48.934005 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 01:47:48.940608 update_engine[1574]: I20260123 01:47:48.940524 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 01:47:48.941956 update_engine[1574]: I20260123 01:47:48.941867 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 01:47:48.951179 update_engine[1574]: E20260123 01:47:48.950317 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 01:47:48.951444 update_engine[1574]: I20260123 01:47:48.951390 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 01:47:48.952285 locksmithd[1608]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0