Dec 16 16:14:53.985387 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 16 16:14:53.985427 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 16:14:53.985442 kernel: BIOS-provided physical RAM map: Dec 16 16:14:53.985453 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 16 16:14:53.985469 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 16 16:14:53.985479 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 16 16:14:53.985492 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 16 16:14:53.985511 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 16 16:14:53.985522 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 16 16:14:53.985533 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 16 16:14:53.985544 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 16:14:53.985555 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 16 16:14:53.985566 kernel: NX (Execute Disable) protection: active Dec 16 16:14:53.985582 kernel: APIC: Static calls initialized Dec 16 16:14:53.985595 kernel: SMBIOS 2.8 present. Dec 16 16:14:53.985607 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 16 16:14:53.985625 kernel: DMI: Memory slots populated: 1/1 Dec 16 16:14:53.985638 kernel: Hypervisor detected: KVM Dec 16 16:14:53.985649 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 16:14:53.985666 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 16:14:53.985678 kernel: kvm-clock: using sched offset of 6751362482 cycles Dec 16 16:14:53.985691 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 16:14:53.985703 kernel: tsc: Detected 2499.998 MHz processor Dec 16 16:14:53.985715 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 16:14:53.985727 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 16:14:53.985750 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 16:14:53.985762 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 16 16:14:53.985774 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 16:14:53.985792 kernel: Using GB pages for direct mapping Dec 16 16:14:53.985804 kernel: ACPI: Early table checksum verification disabled Dec 16 16:14:53.985816 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 16 16:14:53.985827 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:14:53.985840 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:14:53.985851 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:14:53.985863 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 16 16:14:53.985875 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:14:53.985887 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:14:53.985904 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:14:53.985915 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:14:53.985928 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 16 16:14:53.985946 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 16 16:14:53.985958 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 16 16:14:53.985970 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 16 16:14:53.985988 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 16 16:14:53.986000 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 16 16:14:53.986012 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 16 16:14:53.986024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 16:14:53.990343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 16 16:14:53.990369 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 16 16:14:53.990383 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Dec 16 16:14:53.990396 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Dec 16 16:14:53.990418 kernel: Zone ranges: Dec 16 16:14:53.990431 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 16:14:53.990444 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 16 16:14:53.990456 kernel: Normal empty Dec 16 16:14:53.990468 kernel: Device empty Dec 16 16:14:53.990481 kernel: Movable zone start for each node Dec 16 16:14:53.990493 kernel: Early memory node ranges Dec 16 16:14:53.990506 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 16 16:14:53.990518 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 16 16:14:53.990536 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 16 16:14:53.990548 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 16:14:53.990561 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 16:14:53.990583 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 16 16:14:53.990597 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 16:14:53.990614 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 16:14:53.990627 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 16:14:53.990640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 16:14:53.990653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 16:14:53.990672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 16:14:53.990684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 16:14:53.990697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 16:14:53.990709 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 16:14:53.990722 kernel: TSC deadline timer available Dec 16 16:14:53.990735 kernel: CPU topo: Max. logical packages: 16 Dec 16 16:14:53.990759 kernel: CPU topo: Max. logical dies: 16 Dec 16 16:14:53.990771 kernel: CPU topo: Max. dies per package: 1 Dec 16 16:14:53.990783 kernel: CPU topo: Max. 
threads per core: 1 Dec 16 16:14:53.990796 kernel: CPU topo: Num. cores per package: 1 Dec 16 16:14:53.990814 kernel: CPU topo: Num. threads per package: 1 Dec 16 16:14:53.990826 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Dec 16 16:14:53.990839 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 16:14:53.990851 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 16 16:14:53.990864 kernel: Booting paravirtualized kernel on KVM Dec 16 16:14:53.990876 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 16:14:53.990889 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 16 16:14:53.990902 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Dec 16 16:14:53.990914 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Dec 16 16:14:53.990932 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 16 16:14:53.990944 kernel: kvm-guest: PV spinlocks enabled Dec 16 16:14:53.990956 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 16:14:53.990971 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 16:14:53.990984 kernel: random: crng init done Dec 16 16:14:53.990996 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 16:14:53.991009 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 16:14:53.991021 kernel: Fallback order for Node 0: 0 Dec 16 16:14:53.991587 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Dec 16 16:14:53.991617 kernel: Policy zone: DMA32 Dec 16 16:14:53.991639 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 16:14:53.991662 kernel: software IO TLB: area num 16. Dec 16 16:14:53.991684 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 16 16:14:53.991706 kernel: Kernel/User page tables isolation: enabled Dec 16 16:14:53.991728 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 16:14:53.991764 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 16:14:53.991787 kernel: Dynamic Preempt: voluntary Dec 16 16:14:53.991822 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 16:14:53.991846 kernel: rcu: RCU event tracing is enabled. Dec 16 16:14:53.991868 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 16 16:14:53.991891 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 16:14:53.991922 kernel: Rude variant of Tasks RCU enabled. Dec 16 16:14:53.991945 kernel: Tracing variant of Tasks RCU enabled. Dec 16 16:14:53.991967 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 16:14:53.991982 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 16 16:14:53.991995 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 16:14:53.992014 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Dec 16 16:14:53.992027 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 16:14:53.992056 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 16 16:14:53.992069 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 16:14:53.992096 kernel: Console: colour VGA+ 80x25 Dec 16 16:14:53.992114 kernel: printk: legacy console [tty0] enabled Dec 16 16:14:53.992128 kernel: printk: legacy console [ttyS0] enabled Dec 16 16:14:53.992141 kernel: ACPI: Core revision 20240827 Dec 16 16:14:53.992161 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 16:14:53.992174 kernel: x2apic enabled Dec 16 16:14:53.992188 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 16:14:53.992201 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 16 16:14:53.992221 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Dec 16 16:14:53.992234 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 16:14:53.992248 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 16:14:53.992261 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 16:14:53.992274 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 16:14:53.992292 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 16:14:53.992304 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 16:14:53.992318 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 16 16:14:53.992331 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 16:14:53.992343 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 16:14:53.992356 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 16:14:53.992369 kernel: MMIO Stale Data: Unknown: No mitigations Dec 16 16:14:53.992382 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 16 16:14:53.992395 kernel: active return thunk: its_return_thunk Dec 16 16:14:53.992407 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 16:14:53.992420 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 16:14:53.992439 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 16:14:53.992451 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 16:14:53.992464 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 16:14:53.992477 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 16:14:53.992490 kernel: Freeing SMP alternatives memory: 32K Dec 16 16:14:53.992503 kernel: pid_max: default: 32768 minimum: 301 Dec 16 16:14:53.992515 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 16:14:53.992528 kernel: landlock: Up and running. Dec 16 16:14:53.992541 kernel: SELinux: Initializing. Dec 16 16:14:53.992554 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 16:14:53.992567 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 16:14:53.992585 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 16 16:14:53.992598 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Dec 16 16:14:53.992611 kernel: signal: max sigframe size: 1776 Dec 16 16:14:53.992629 kernel: rcu: Hierarchical SRCU implementation. Dec 16 16:14:53.992644 kernel: rcu: Max phase no-delay instances is 400. Dec 16 16:14:53.992657 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Dec 16 16:14:53.992670 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 16:14:53.992684 kernel: smp: Bringing up secondary CPUs ... Dec 16 16:14:53.992697 kernel: smpboot: x86: Booting SMP configuration: Dec 16 16:14:53.992716 kernel: .... node #0, CPUs: #1 Dec 16 16:14:53.992729 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 16:14:53.992751 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Dec 16 16:14:53.992765 kernel: Memory: 1887476K/2096616K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 203124K reserved, 0K cma-reserved) Dec 16 16:14:53.992779 kernel: devtmpfs: initialized Dec 16 16:14:53.992792 kernel: x86/mm: Memory block size: 128MB Dec 16 16:14:53.992805 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 16:14:53.992819 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 16 16:14:53.992832 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 16:14:53.992851 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 16:14:53.992864 kernel: audit: initializing netlink subsys (disabled) Dec 16 16:14:53.992878 kernel: audit: type=2000 audit(1765901689.854:1): state=initialized audit_enabled=0 res=1 Dec 16 16:14:53.992891 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 16:14:53.992904 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 16:14:53.992917 kernel: cpuidle: using governor menu Dec 16 16:14:53.992930 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 16:14:53.992943 kernel: dca service started, version 1.12.1 Dec 16 16:14:53.992956 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 16 16:14:53.992974 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 16 16:14:53.992987 kernel: PCI: Using configuration type 1 for base access Dec 16 16:14:53.993001 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 16:14:53.993014 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 16:14:53.993027 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 16:14:53.993059 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 16:14:53.993072 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 16:14:53.993085 kernel: ACPI: Added _OSI(Module Device) Dec 16 16:14:53.993099 kernel: ACPI: Added _OSI(Processor Device) Dec 16 16:14:53.993118 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 16:14:53.993131 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 16:14:53.993145 kernel: ACPI: Interpreter enabled Dec 16 16:14:53.993158 kernel: ACPI: PM: (supports S0 S5) Dec 16 16:14:53.993171 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 16:14:53.993184 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 16:14:53.993197 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 16:14:53.993210 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 16:14:53.993223 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 16:14:53.993531 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 16:14:53.993716 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 16:14:53.993906 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 16:14:53.993926 kernel: PCI host bridge to bus 0000:00 Dec 16 16:14:53.994212 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 16:14:53.994426 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 16:14:53.994596 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 16:14:53.994771 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 16 16:14:53.994931 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 16 16:14:53.996093 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 16 16:14:53.996261 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 16:14:53.996484 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 16 16:14:53.996699 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Dec 16 16:14:53.996900 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Dec 16 16:14:53.997099 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Dec 16 16:14:53.997309 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Dec 16 16:14:53.997490 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 16:14:53.997728 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:14:53.997921 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Dec 16 16:14:53.998122 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 16:14:53.998307 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 16:14:53.998482 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 16:14:53.998684 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:14:53.998873 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Dec 16 16:14:54.000195 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 
16:14:54.000382 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 16:14:54.000562 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 16:14:54.000803 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:14:54.000983 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Dec 16 16:14:54.001189 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 16:14:54.001367 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 16:14:54.001543 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 16:14:54.001783 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:14:54.001963 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Dec 16 16:14:54.004210 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 16:14:54.004396 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 16:14:54.004575 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 16:14:54.004779 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:14:54.004958 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Dec 16 16:14:54.005167 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 16:14:54.005343 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 16:14:54.005530 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 16:14:54.005793 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:14:54.005972 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Dec 16 16:14:54.009433 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 16:14:54.009622 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 16:14:54.009840 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 16:14:54.010056 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:14:54.010266 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Dec 16 16:14:54.010444 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 16:14:54.010620 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 16:14:54.010866 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 16:14:54.011108 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:14:54.011291 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Dec 16 16:14:54.011478 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 16:14:54.011653 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 16:14:54.011841 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 16:14:54.013094 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 16:14:54.013289 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Dec 16 16:14:54.013468 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Dec 16 16:14:54.013653 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Dec 16 16:14:54.013866 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Dec 16 16:14:54.015195 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 16:14:54.015442 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Dec 16 16:14:54.015628 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Dec 16 16:14:54.015826 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Dec 16 16:14:54.016019 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 16 16:14:54.016221 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 16:14:54.016461 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 16 16:14:54.016643 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Dec 16 16:14:54.016835 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Dec 16 16:14:54.018130 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 16 16:14:54.018316 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 16 16:14:54.018524 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Dec 16 16:14:54.018711 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Dec 16 16:14:54.018923 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 16:14:54.019161 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 16:14:54.019346 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 16:14:54.019570 kernel: pci_bus 0000:02: extended config space not accessible Dec 16 16:14:54.019824 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Dec 16 16:14:54.020019 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Dec 16 16:14:54.020238 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 16:14:54.020434 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Dec 16 16:14:54.020620 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Dec 16 16:14:54.020818 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 16:14:54.021018 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 16 16:14:54.021222 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Dec 16 16:14:54.021404 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 16:14:54.021596 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 16:14:54.021792 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 16:14:54.021974 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 16:14:54.022179 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 16:14:54.022363 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 16:14:54.022385 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 16:14:54.022399 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 16:14:54.022420 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 16:14:54.022434 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 16:14:54.022447 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 16:14:54.022461 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 16:14:54.022474 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 16:14:54.022487 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 16:14:54.022500 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 16:14:54.022513 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 16:14:54.022526 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 16 16:14:54.022545 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 16:14:54.022558 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 
16 16:14:54.022571 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 16:14:54.022584 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 16:14:54.022597 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 16:14:54.022610 kernel: iommu: Default domain type: Translated Dec 16 16:14:54.022623 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 16:14:54.022635 kernel: PCI: Using ACPI for IRQ routing Dec 16 16:14:54.022649 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 16:14:54.022667 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 16 16:14:54.022680 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 16 16:14:54.022868 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 16:14:54.023062 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 16:14:54.023239 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 16:14:54.023260 kernel: vgaarb: loaded Dec 16 16:14:54.023274 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 16:14:54.023287 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 16:14:54.023308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 16:14:54.023321 kernel: pnp: PnP ACPI init Dec 16 16:14:54.023543 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 16 16:14:54.023566 kernel: pnp: PnP ACPI: found 5 devices Dec 16 16:14:54.023580 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 16:14:54.023593 kernel: NET: Registered PF_INET protocol family Dec 16 16:14:54.023606 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 16:14:54.023619 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 16:14:54.023633 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 16:14:54.023653 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 16:14:54.023666 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 16:14:54.023680 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 16:14:54.023693 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 16:14:54.023706 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 16:14:54.023719 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 16:14:54.023732 kernel: NET: Registered PF_XDP protocol family Dec 16 16:14:54.023920 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 16 16:14:54.024133 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 16 16:14:54.024313 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 16 16:14:54.024491 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 16 16:14:54.024675 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 16 16:14:54.024868 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 16 16:14:54.025068 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 16 16:14:54.025249 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 16 16:14:54.025427 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Dec 16 16:14:54.025613 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Dec 16 16:14:54.025809 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Dec 16 16:14:54.025988 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Dec 16 16:14:54.026192 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Dec 16 16:14:54.026371 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Dec 16 16:14:54.026549 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Dec 16 16:14:54.026726 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Dec 16 16:14:54.026924 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 16:14:54.027158 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 16:14:54.027336 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 16:14:54.027513 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 16 16:14:54.027690 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 16:14:54.027880 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 16:14:54.028076 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 16:14:54.028255 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 16 16:14:54.028434 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 16:14:54.028612 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 16:14:54.028813 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 16:14:54.028990 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 16 16:14:54.029194 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 16:14:54.029381 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 16:14:54.029560 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 16:14:54.029748 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 16 16:14:54.029938 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 16:14:54.030138 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 16:14:54.030317 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 16:14:54.030496 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 16 16:14:54.030674 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 16:14:54.030873 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 16:14:54.031075 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 16:14:54.031256 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 16 16:14:54.031433 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 16:14:54.031611 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 16:14:54.031805 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 16:14:54.031984 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 16 16:14:54.032184 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 16:14:54.032363 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 16:14:54.032557 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 16:14:54.032745 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 16 16:14:54.032936 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 16:14:54.033139 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 16:14:54.033313 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 16:14:54.033478 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 16:14:54.033640 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 16:14:54.033817 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 16 16:14:54.033981 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 16 16:14:54.034171 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 16 16:14:54.034376 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 16 16:14:54.034548 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 16 16:14:54.034717 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 16:14:54.034942 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 16 16:14:54.035172 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 16 16:14:54.035353 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 16 16:14:54.035531 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 16:14:54.035731 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 16 16:14:54.035915 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 16 16:14:54.036106 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 16:14:54.036321 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 16 16:14:54.036494 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 16 16:14:54.036672 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 16:14:54.036881 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 16 16:14:54.037079 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 16 16:14:54.037251 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 16:14:54.037431 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 16 16:14:54.037601 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 16 16:14:54.037784 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 16:14:54.037976 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 16 16:14:54.038167 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 16 16:14:54.038337 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 16:14:54.038535 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 16 16:14:54.038707 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 16 16:14:54.038888 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 16:14:54.038911 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 16:14:54.038933 kernel: PCI: CLS 0 bytes, default 64 Dec 16 16:14:54.038948 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 16:14:54.038962 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 16 16:14:54.038976 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 16:14:54.038990 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 16 16:14:54.039004 kernel: Initialise system trusted keyrings Dec 16 16:14:54.039018 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 16:14:54.039052 
kernel: Key type asymmetric registered Dec 16 16:14:54.039068 kernel: Asymmetric key parser 'x509' registered Dec 16 16:14:54.039088 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 16:14:54.039102 kernel: io scheduler mq-deadline registered Dec 16 16:14:54.039116 kernel: io scheduler kyber registered Dec 16 16:14:54.039130 kernel: io scheduler bfq registered Dec 16 16:14:54.039310 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 16:14:54.039491 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 16:14:54.039670 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:14:54.039872 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 16:14:54.040070 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 16:14:54.040250 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:14:54.040430 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 16:14:54.040613 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 16:14:54.040808 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:14:54.040999 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 16:14:54.041199 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 16:14:54.041378 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:14:54.041558 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 16:14:54.041750 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 16:14:54.041934 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:14:54.042147 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 16:14:54.042326 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 16:14:54.042506 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:14:54.042687 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 16:14:54.042877 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 16:14:54.043076 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:14:54.043265 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 16:14:54.043443 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 16:14:54.043621 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:14:54.043643 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 16:14:54.043658 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 16:14:54.043672 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 16:14:54.043685 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 16:14:54.043699 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 16:14:54.043720 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 
16:14:54.043744 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 16:14:54.043760 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 16:14:54.043976 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 16:14:54.044007 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 16:14:54.044200 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 16:14:54.044371 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T16:14:53 UTC (1765901693) Dec 16 16:14:54.044539 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 16:14:54.044568 kernel: intel_pstate: CPU model not supported Dec 16 16:14:54.044582 kernel: NET: Registered PF_INET6 protocol family Dec 16 16:14:54.044596 kernel: Segment Routing with IPv6 Dec 16 16:14:54.044610 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 16:14:54.044623 kernel: NET: Registered PF_PACKET protocol family Dec 16 16:14:54.044637 kernel: Key type dns_resolver registered Dec 16 16:14:54.044651 kernel: IPI shorthand broadcast: enabled Dec 16 16:14:54.044664 kernel: sched_clock: Marking stable (3812003575, 230401951)->(4197678290, -155272764) Dec 16 16:14:54.044678 kernel: registered taskstats version 1 Dec 16 16:14:54.044697 kernel: Loading compiled-in X.509 certificates Dec 16 16:14:54.044711 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 16:14:54.044725 kernel: Demotion targets for Node 0: null Dec 16 16:14:54.044749 kernel: Key type .fscrypt registered Dec 16 16:14:54.044764 kernel: Key type fscrypt-provisioning registered Dec 16 16:14:54.044777 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 16:14:54.044791 kernel: ima: Allocated hash algorithm: sha1 Dec 16 16:14:54.044804 kernel: ima: No architecture policies found Dec 16 16:14:54.044818 kernel: clk: Disabling unused clocks Dec 16 16:14:54.044838 kernel: Warning: unable to open an initial console. Dec 16 16:14:54.044852 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 16:14:54.044865 kernel: Write protecting the kernel read-only data: 40960k Dec 16 16:14:54.044879 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 16:14:54.044893 kernel: Run /init as init process Dec 16 16:14:54.044907 kernel: with arguments: Dec 16 16:14:54.044920 kernel: /init Dec 16 16:14:54.044933 kernel: with environment: Dec 16 16:14:54.044947 kernel: HOME=/ Dec 16 16:14:54.044965 kernel: TERM=linux Dec 16 16:14:54.044981 systemd[1]: Successfully made /usr/ read-only. Dec 16 16:14:54.044999 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 16:14:54.045014 systemd[1]: Detected virtualization kvm. Dec 16 16:14:54.045028 systemd[1]: Detected architecture x86-64. Dec 16 16:14:54.045070 systemd[1]: Running in initrd. Dec 16 16:14:54.045084 systemd[1]: No hostname configured, using default hostname. Dec 16 16:14:54.045106 systemd[1]: Hostname set to . Dec 16 16:14:54.045121 systemd[1]: Initializing machine ID from VM UUID. Dec 16 16:14:54.045135 systemd[1]: Queued start job for default target initrd.target. Dec 16 16:14:54.045149 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 16:14:54.045164 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 16:14:54.045180 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 16:14:54.045195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 16:14:54.045209 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 16:14:54.045230 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 16:14:54.045246 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 16:14:54.045260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 16:14:54.045275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 16:14:54.045290 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 16:14:54.045304 systemd[1]: Reached target paths.target - Path Units. Dec 16 16:14:54.045319 systemd[1]: Reached target slices.target - Slice Units. Dec 16 16:14:54.045338 systemd[1]: Reached target swap.target - Swaps. Dec 16 16:14:54.045352 systemd[1]: Reached target timers.target - Timer Units. Dec 16 16:14:54.045367 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 16:14:54.045382 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 16:14:54.045396 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 16:14:54.045411 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 16:14:54.045430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 16:14:54.045445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 16:14:54.045459 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 16:14:54.045479 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 16:14:54.045494 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 16:14:54.045508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 16:14:54.045523 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 16:14:54.045538 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 16:14:54.045553 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 16:14:54.045567 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 16:14:54.045582 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 16:14:54.045601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 16:14:54.045616 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 16:14:54.045631 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 16:14:54.045646 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 16:14:54.045661 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 16:14:54.045731 systemd-journald[209]: Collecting audit messages is disabled. 
Dec 16 16:14:54.045776 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 16:14:54.045791 kernel: Bridge firewalling registered Dec 16 16:14:54.045813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 16:14:54.045828 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 16:14:54.045845 systemd-journald[209]: Journal started Dec 16 16:14:54.045871 systemd-journald[209]: Runtime Journal (/run/log/journal/68a44ef379e04e28ad7258e8a8ad0c4c) is 4.7M, max 37.8M, 33.1M free. Dec 16 16:14:53.952339 systemd-modules-load[211]: Inserted module 'overlay' Dec 16 16:14:54.084373 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 16:14:54.006139 systemd-modules-load[211]: Inserted module 'br_netfilter' Dec 16 16:14:54.085556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 16:14:54.092677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 16:14:54.095213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 16:14:54.100249 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 16:14:54.119229 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 16:14:54.137696 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 16:14:54.140140 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 16:14:54.148223 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 16:14:54.150974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 16:14:54.153211 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 16:14:54.159161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 16:14:54.166452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 16:14:54.183209 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 16:14:54.229908 systemd-resolved[249]: Positive Trust Anchors: Dec 16 16:14:54.229926 systemd-resolved[249]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 16:14:54.229971 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 16:14:54.234266 systemd-resolved[249]: Defaulting to hostname 'linux'. Dec 16 16:14:54.237619 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 16:14:54.238572 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 16:14:54.321081 kernel: SCSI subsystem initialized Dec 16 16:14:54.333089 kernel: Loading iSCSI transport class v2.0-870. Dec 16 16:14:54.347085 kernel: iscsi: registered transport (tcp) Dec 16 16:14:54.374445 kernel: iscsi: registered transport (qla4xxx) Dec 16 16:14:54.374520 kernel: QLogic iSCSI HBA Driver Dec 16 16:14:54.400588 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 16:14:54.419823 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 16:14:54.421220 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 16:14:54.487232 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 16:14:54.490007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 16:14:54.553097 kernel: raid6: sse2x4 gen() 13823 MB/s Dec 16 16:14:54.571075 kernel: raid6: sse2x2 gen() 9689 MB/s Dec 16 16:14:54.589650 kernel: raid6: sse2x1 gen() 10070 MB/s Dec 16 16:14:54.589736 kernel: raid6: using algorithm sse2x4 gen() 13823 MB/s Dec 16 16:14:54.608707 kernel: raid6: .... xor() 7775 MB/s, rmw enabled Dec 16 16:14:54.608770 kernel: raid6: using ssse3x2 recovery algorithm Dec 16 16:14:54.634084 kernel: xor: automatically using best checksumming function avx Dec 16 16:14:54.873100 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 16:14:54.883176 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 16:14:54.887272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 16:14:54.923468 systemd-udevd[460]: Using default interface naming scheme 'v255'. Dec 16 16:14:54.933100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 16:14:54.938064 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 16:14:54.969404 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Dec 16 16:14:55.006655 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 16:14:55.010821 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 16:14:55.140855 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 16:14:55.160191 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 16 16:14:55.300078 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 16 16:14:55.305240 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 16:14:55.320077 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 16:14:55.320154 kernel: GPT:17805311 != 125829119 Dec 16 16:14:55.320200 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 16:14:55.320245 kernel: GPT:17805311 != 125829119 Dec 16 16:14:55.321421 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 16:14:55.323680 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 16:14:55.324210 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 16:14:55.356089 kernel: AES CTR mode by8 optimization enabled Dec 16 16:14:55.376531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 16:14:55.376746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 16:14:55.378423 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 16:14:55.399384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 16:14:55.403496 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 16:14:55.413493 kernel: ACPI: bus type USB registered Dec 16 16:14:55.413545 kernel: usbcore: registered new interface driver usbfs Dec 16 16:14:55.417085 kernel: usbcore: registered new interface driver hub Dec 16 16:14:55.417123 kernel: usbcore: registered new device driver usb Dec 16 16:14:55.458093 kernel: libata version 3.00 loaded. Dec 16 16:14:55.469060 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 16:14:55.476064 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 16:14:55.478062 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 16:14:55.497071 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 16:14:55.497486 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 16:14:55.497756 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 16:14:55.497972 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 16:14:55.498291 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 16 16:14:55.498510 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 16:14:55.507107 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 16:14:55.507414 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 16 16:14:55.507667 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 16 16:14:55.510070 kernel: hub 1-0:1.0: USB hub found Dec 16 16:14:55.510423 kernel: hub 1-0:1.0: 4 ports detected Dec 16 16:14:55.510862 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 16 16:14:55.514699 kernel: hub 2-0:1.0: USB hub found Dec 16 16:14:55.519294 kernel: hub 2-0:1.0: 4 ports detected Dec 16 16:14:55.530056 kernel: scsi host0: ahci Dec 16 16:14:55.530399 kernel: scsi host1: ahci Dec 16 16:14:55.533351 kernel: scsi host2: ahci Dec 16 16:14:55.535739 kernel: scsi host3: ahci Dec 16 16:14:55.539078 kernel: scsi host4: ahci Dec 16 16:14:55.539536 kernel: scsi host5: ahci Dec 16 16:14:55.541991 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1 Dec 16 16:14:55.542094 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1 Dec 16 16:14:55.542120 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1 Dec 16 16:14:55.542139 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1 Dec 16 16:14:55.542157 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1 Dec 16 16:14:55.542175 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1 Dec 16 16:14:55.601482 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 16:14:55.657374 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 16 16:14:55.658747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 16:14:55.673347 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 16:14:55.686405 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 16:14:55.699281 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 16:14:55.701533 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 16:14:55.726626 disk-uuid[616]: Primary Header is updated. Dec 16 16:14:55.726626 disk-uuid[616]: Secondary Entries is updated. Dec 16 16:14:55.726626 disk-uuid[616]: Secondary Header is updated. Dec 16 16:14:55.732056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 16:14:55.740060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 16:14:55.747623 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 16:14:55.854119 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 16:14:55.854200 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 16:14:55.854270 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 16:14:55.854336 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 16:14:55.854360 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 16:14:55.859063 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 16:14:55.925065 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 16:14:55.934366 kernel: usbcore: registered new interface driver usbhid Dec 16 16:14:55.934414 kernel: usbhid: USB HID core driver Dec 16 16:14:55.943388 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 16 16:14:55.943440 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 16 16:14:55.972730 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 16:14:55.980764 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
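The "Found device" lines above are systemd device units keyed on udev's /dev/disk/by-label, /dev/disk/by-partlabel and /dev/disk/by-partuuid symlinks (USR-A, OEM, ROOT, EFI-SYSTEM). A minimal, illustrative sketch that dumps the same mapping by resolving those symlinks to their backing block devices.

import os

# Illustrative: show how the by-label / by-partlabel / by-partuuid names used
# by the device units above map onto real block devices.
for subdir in ("by-label", "by-partlabel", "by-partuuid"):
    root = os.path.join("/dev/disk", subdir)
    if not os.path.isdir(root):
        continue
    for name in sorted(os.listdir(root)):
        link = os.path.join(root, name)
        print(f"{subdir}/{name} -> {os.path.realpath(link)}")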
Dec 16 16:14:55.981779 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 16:14:55.983575 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 16:14:55.986812 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 16:14:56.017765 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 16:14:56.740073 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 16:14:56.746057 disk-uuid[617]: The operation has completed successfully. Dec 16 16:14:56.810670 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 16:14:56.810856 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 16:14:56.856378 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 16:14:56.891397 sh[644]: Success Dec 16 16:14:56.917577 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 16:14:56.917722 kernel: device-mapper: uevent: version 1.0.3 Dec 16 16:14:56.918634 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 16:14:56.932054 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Dec 16 16:14:56.989823 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 16:14:56.998137 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 16:14:57.000783 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 16:14:57.030100 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (656) Dec 16 16:14:57.034139 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 16:14:57.034191 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 16:14:57.046680 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 16:14:57.046764 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 16:14:57.049453 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 16:14:57.050891 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 16:14:57.051875 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 16:14:57.053309 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 16:14:57.057196 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 16:14:57.091065 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (689) Dec 16 16:14:57.096769 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:14:57.096805 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 16:14:57.104012 kernel: BTRFS info (device vda6): turning on async discard Dec 16 16:14:57.104071 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 16:14:57.112077 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:14:57.113857 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 16:14:57.117181 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
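verity-setup.service above creates the read-only /dev/mapper/usr device whose blocks are checked against a sha256 hash tree before they are handed to the Btrfs mount that follows. A minimal sketch, assuming the userspace veritysetup tool from cryptsetup is available, of how such a mapping is opened by hand; all three values below are placeholders (on Flatcar they come from the kernel command line), so this only illustrates the shape of the command, not this system's actual setup.

import subprocess

# Placeholders only -- not the devices or hash used by this system.
DATA_DEV = "/dev/placeholder-usr-data"
HASH_DEV = "/dev/placeholder-usr-hash"
ROOT_HASH = "expected-sha256-root-hash"

# veritysetup open <data_device> <name> <hash_device> <root_hash>
subprocess.run(["veritysetup", "open", DATA_DEV, "usr", HASH_DEV, ROOT_HASH], check=True)
# On success /dev/mapper/usr appears, which is what the
# "Found device dev-mapper-usr.device" message above reacts to.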
Dec 16 16:14:57.244087 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 16:14:57.248224 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 16:14:57.311230 systemd-networkd[826]: lo: Link UP Dec 16 16:14:57.311245 systemd-networkd[826]: lo: Gained carrier Dec 16 16:14:57.313668 systemd-networkd[826]: Enumeration completed Dec 16 16:14:57.314184 systemd-networkd[826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 16:14:57.314190 systemd-networkd[826]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 16:14:57.315767 systemd-networkd[826]: eth0: Link UP Dec 16 16:14:57.316149 systemd-networkd[826]: eth0: Gained carrier Dec 16 16:14:57.316163 systemd-networkd[826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 16:14:57.320190 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 16:14:57.322405 systemd[1]: Reached target network.target - Network. Dec 16 16:14:57.371133 systemd-networkd[826]: eth0: DHCPv4 address 10.230.59.10/30, gateway 10.230.59.9 acquired from 10.230.59.9 Dec 16 16:14:57.406777 ignition[736]: Ignition 2.22.0 Dec 16 16:14:57.406804 ignition[736]: Stage: fetch-offline Dec 16 16:14:57.409395 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 16:14:57.406861 ignition[736]: no configs at "/usr/lib/ignition/base.d" Dec 16 16:14:57.406878 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:14:57.407017 ignition[736]: parsed url from cmdline: "" Dec 16 16:14:57.413223 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 16:14:57.407024 ignition[736]: no config URL provided Dec 16 16:14:57.407057 ignition[736]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 16:14:57.407075 ignition[736]: no config at "/usr/lib/ignition/user.ign" Dec 16 16:14:57.407085 ignition[736]: failed to fetch config: resource requires networking Dec 16 16:14:57.407457 ignition[736]: Ignition finished successfully Dec 16 16:14:57.533375 ignition[836]: Ignition 2.22.0 Dec 16 16:14:57.533400 ignition[836]: Stage: fetch Dec 16 16:14:57.533746 ignition[836]: no configs at "/usr/lib/ignition/base.d" Dec 16 16:14:57.533767 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:14:57.533949 ignition[836]: parsed url from cmdline: "" Dec 16 16:14:57.533956 ignition[836]: no config URL provided Dec 16 16:14:57.533967 ignition[836]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 16:14:57.533984 ignition[836]: no config at "/usr/lib/ignition/user.ign" Dec 16 16:14:57.534189 ignition[836]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 16 16:14:57.534736 ignition[836]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 16 16:14:57.534773 ignition[836]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
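With the DHCP lease in place, the Ignition fetch stage above polls the OpenStack metadata service for user data (GET http://169.254.169.254/openstack/latest/user_data, attempt #1). A minimal sketch of the same request with a simple retry loop, assuming the link-local metadata endpoint is reachable; the timeout and retry count are illustrative, not Ignition's actual values.

import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/openstack/latest/user_data"

def fetch_user_data(retries: int = 5, delay: float = 2.0) -> bytes:
    """Poll the metadata service the way the fetch stage above does (illustrative)."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == retries:
                raise
            time.sleep(delay)
    raise RuntimeError("unreachable")

if __name__ == "__main__":
    print(len(fetch_user_data()), "bytes of user data")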
Dec 16 16:14:57.555322 ignition[836]: GET result: OK Dec 16 16:14:57.556169 ignition[836]: parsing config with SHA512: 830cd687c2c4c0d850562b638f278bf71436d061748f3220d33628a5191d52d498e03aad962a14541ad51700d1cba57e6f5036485a191beb13bdd86df911b249 Dec 16 16:14:57.567723 unknown[836]: fetched base config from "system" Dec 16 16:14:57.567744 unknown[836]: fetched base config from "system" Dec 16 16:14:57.567755 unknown[836]: fetched user config from "openstack" Dec 16 16:14:57.568763 ignition[836]: fetch: fetch complete Dec 16 16:14:57.568773 ignition[836]: fetch: fetch passed Dec 16 16:14:57.572799 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 16:14:57.568840 ignition[836]: Ignition finished successfully Dec 16 16:14:57.576243 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 16:14:57.656336 ignition[842]: Ignition 2.22.0 Dec 16 16:14:57.656362 ignition[842]: Stage: kargs Dec 16 16:14:57.656566 ignition[842]: no configs at "/usr/lib/ignition/base.d" Dec 16 16:14:57.656585 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:14:57.657615 ignition[842]: kargs: kargs passed Dec 16 16:14:57.657703 ignition[842]: Ignition finished successfully Dec 16 16:14:57.662184 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 16:14:57.666575 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 16:14:57.710969 ignition[848]: Ignition 2.22.0 Dec 16 16:14:57.710996 ignition[848]: Stage: disks Dec 16 16:14:57.711208 ignition[848]: no configs at "/usr/lib/ignition/base.d" Dec 16 16:14:57.711227 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:14:57.712214 ignition[848]: disks: disks passed Dec 16 16:14:57.714656 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 16:14:57.712290 ignition[848]: Ignition finished successfully Dec 16 16:14:57.716465 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 16:14:57.717308 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 16:14:57.718702 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 16:14:57.720213 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 16:14:57.721744 systemd[1]: Reached target basic.target - Basic System. Dec 16 16:14:57.725236 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 16:14:57.750518 systemd-fsck[857]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Dec 16 16:14:57.754410 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 16:14:57.757213 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 16:14:57.894062 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 16:14:57.895685 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 16:14:57.898000 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 16:14:57.901506 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 16:14:57.904267 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 16:14:57.908300 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 16:14:57.913432 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... 
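systemd-fsck and the sysroot mount that follow address the root filesystem as /dev/disk/by-label/ROOT; that label is stored in the ext4 superblock of vda9. A minimal sketch, assuming read access to the partition, that pulls the volume label straight from the superblock (magic at offset 0x38, 16-byte label at offset 0x78; the superblock starts 1024 bytes into the device). The device path is illustrative.

import struct

DEV = "/dev/vda9"  # the partition mounted as /sysroot below; illustrative path

def ext4_label(dev: str) -> str:
    with open(dev, "rb") as f:
        f.seek(1024)               # ext4 superblock starts at byte 1024
        sb = f.read(1024)
    magic = struct.unpack_from("<H", sb, 0x38)[0]
    assert magic == 0xEF53, "not an ext2/3/4 filesystem"
    label = sb[0x78:0x78 + 16]     # s_volume_name, NUL-padded
    return label.rstrip(b"\x00").decode()

if __name__ == "__main__":
    print(ext4_label(DEV))         # expected to print "ROOT" on this system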
Dec 16 16:14:57.915431 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 16:14:57.915481 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 16:14:57.920296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 16:14:57.923151 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 16:14:57.937400 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Dec 16 16:14:57.937473 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:14:57.939949 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 16:14:57.958775 kernel: BTRFS info (device vda6): turning on async discard Dec 16 16:14:57.958858 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 16:14:57.966021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 16:14:58.018089 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:14:58.022831 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 16:14:58.032983 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory Dec 16 16:14:58.039492 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 16:14:58.045956 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 16:14:58.166210 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 16:14:58.168399 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 16:14:58.170190 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 16:14:58.193233 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 16:14:58.195507 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:14:58.216701 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 16:14:58.238122 ignition[982]: INFO : Ignition 2.22.0 Dec 16 16:14:58.238122 ignition[982]: INFO : Stage: mount Dec 16 16:14:58.242188 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 16:14:58.242188 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:14:58.242188 ignition[982]: INFO : mount: mount passed Dec 16 16:14:58.242188 ignition[982]: INFO : Ignition finished successfully Dec 16 16:14:58.241368 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 16:14:58.496281 systemd-networkd[826]: eth0: Gained IPv6LL Dec 16 16:14:59.043075 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:00.005224 systemd-networkd[826]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8ec2:24:19ff:fee6:3b0a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8ec2:24:19ff:fee6:3b0a/64 assigned by NDisc. Dec 16 16:15:00.005242 systemd-networkd[826]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
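The repeated "Can't lookup blockdev" probes for /dev/disk/by-label/config-2 show the OpenStack hostname agent waiting for a config drive before it gives up and falls back to the metadata API, as it does a few seconds later in this log. A minimal sketch of that wait-then-fall-back pattern; the timeout and the config-drive handling are illustrative, not the agent's real parameters.

import os
import time
import urllib.request

CONFIG_DRIVE = "/dev/disk/by-label/config-2"
METADATA_URL = "http://169.254.169.254/latest/meta-data/hostname"

def hostname_from_config_drive_or_api(wait_seconds: int = 10) -> str:
    """Wait briefly for a config drive, then fall back to the metadata API."""
    deadline = time.monotonic() + wait_seconds
    while time.monotonic() < deadline:
        if os.path.exists(CONFIG_DRIVE):
            # A real agent would mount the drive and read its metadata JSON here.
            return "<hostname from config drive>"
        time.sleep(1)
    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    print(hostname_from_config_drive_or_api())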
Dec 16 16:15:01.052073 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:05.070579 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:05.083426 coreos-metadata[867]: Dec 16 16:15:05.083 WARN failed to locate config-drive, using the metadata service API instead Dec 16 16:15:05.105886 coreos-metadata[867]: Dec 16 16:15:05.105 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 16:15:05.119661 coreos-metadata[867]: Dec 16 16:15:05.119 INFO Fetch successful Dec 16 16:15:05.120504 coreos-metadata[867]: Dec 16 16:15:05.120 INFO wrote hostname srv-899vz.gb1.brightbox.com to /sysroot/etc/hostname Dec 16 16:15:05.123565 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 16 16:15:05.123937 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 16 16:15:05.129324 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 16:15:05.155732 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 16:15:05.184061 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998) Dec 16 16:15:05.184145 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:15:05.186171 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 16:15:05.193514 kernel: BTRFS info (device vda6): turning on async discard Dec 16 16:15:05.193580 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 16:15:05.198656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 16:15:05.249480 ignition[1016]: INFO : Ignition 2.22.0 Dec 16 16:15:05.249480 ignition[1016]: INFO : Stage: files Dec 16 16:15:05.251365 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 16:15:05.251365 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:15:05.251365 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping Dec 16 16:15:05.254222 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 16:15:05.254222 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 16:15:05.261925 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 16:15:05.261925 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 16:15:05.261925 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 16:15:05.260931 unknown[1016]: wrote ssh authorized keys file for user: core Dec 16 16:15:05.265943 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 16:15:05.265943 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 16:15:05.792069 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 16:15:06.235359 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 16:15:06.237617 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 16:15:06.237617 ignition[1016]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 16 16:15:06.500541 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 16 16:15:06.840073 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 16 16:15:06.840073 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 16 16:15:06.842816 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 16:15:06.842816 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 16:15:06.842816 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 16:15:06.842816 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 16:15:06.847972 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 16:15:06.847972 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 16:15:06.847972 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 16:15:06.847972 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 16:15:06.847972 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 16:15:06.854492 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 16:15:06.854492 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 16:15:06.857969 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 16:15:06.863540 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Dec 16 16:15:07.217431 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 16 16:15:09.377799 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Dec 16 16:15:09.381182 ignition[1016]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 16 16:15:09.382374 ignition[1016]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 16:15:09.385507 ignition[1016]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 16:15:09.385507 
ignition[1016]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 16 16:15:09.390902 ignition[1016]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 16 16:15:09.390902 ignition[1016]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 16:15:09.390902 ignition[1016]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 16:15:09.390902 ignition[1016]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 16:15:09.390902 ignition[1016]: INFO : files: files passed Dec 16 16:15:09.390902 ignition[1016]: INFO : Ignition finished successfully Dec 16 16:15:09.396160 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 16:15:09.404727 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 16:15:09.407954 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 16:15:09.437561 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 16:15:09.437783 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 16:15:09.447797 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 16:15:09.449421 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 16:15:09.451616 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 16:15:09.452988 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 16:15:09.454704 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 16:15:09.457112 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 16:15:09.521642 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 16:15:09.521847 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 16:15:09.523910 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 16:15:09.524621 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 16:15:09.526469 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 16:15:09.529198 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 16:15:09.556945 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 16:15:09.560278 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 16:15:09.583322 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 16:15:09.585284 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 16:15:09.587127 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 16:15:09.588599 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 16:15:09.588795 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 16:15:09.591369 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 16:15:09.592467 systemd[1]: Stopped target basic.target - Basic System. 
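Among the files written above, Ignition downloads the kubernetes-v1.34.1 sysext image into /opt/extensions and links it as /etc/extensions/kubernetes.raw, which is how systemd-sysext later discovers and merges it (see the extension merge further down in this log). A minimal sketch of that final linking step, assuming the image is already in place; the paths match the log, everything else is illustrative.

import os

IMAGE = "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
LINK = "/etc/extensions/kubernetes.raw"

# Recreate the symlink Ignition wrote above so systemd-sysext can merge the image.
os.makedirs(os.path.dirname(LINK), exist_ok=True)
if os.path.lexists(LINK):
    os.remove(LINK)
os.symlink(IMAGE, LINK)
print(f"{LINK} -> {os.readlink(LINK)}")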
Dec 16 16:15:09.593741 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 16:15:09.595274 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 16:15:09.596884 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 16:15:09.598613 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 16:15:09.600156 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 16:15:09.601761 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 16:15:09.603437 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 16:15:09.604952 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 16:15:09.606462 systemd[1]: Stopped target swap.target - Swaps. Dec 16 16:15:09.607816 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 16:15:09.608146 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 16:15:09.609780 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 16:15:09.610755 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 16:15:09.612353 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 16:15:09.612584 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 16:15:09.619669 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 16:15:09.619877 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 16:15:09.621848 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 16:15:09.622138 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 16:15:09.624036 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 16:15:09.624343 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 16:15:09.628240 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 16:15:09.629458 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 16:15:09.629729 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 16:15:09.635359 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 16:15:09.636783 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 16:15:09.637008 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 16:15:09.637927 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 16:15:09.640882 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 16:15:09.648553 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 16:15:09.649539 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 16:15:09.674216 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 16:15:09.680623 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 16:15:09.681290 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Dec 16 16:15:09.701218 ignition[1070]: INFO : Ignition 2.22.0 Dec 16 16:15:09.701218 ignition[1070]: INFO : Stage: umount Dec 16 16:15:09.703103 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 16:15:09.703103 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:15:09.704853 ignition[1070]: INFO : umount: umount passed Dec 16 16:15:09.704853 ignition[1070]: INFO : Ignition finished successfully Dec 16 16:15:09.706052 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 16:15:09.706291 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 16:15:09.708408 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 16:15:09.708573 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 16:15:09.709935 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 16:15:09.710012 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 16:15:09.711307 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 16:15:09.711401 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 16:15:09.712631 systemd[1]: Stopped target network.target - Network. Dec 16 16:15:09.713900 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 16:15:09.713989 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 16:15:09.715461 systemd[1]: Stopped target paths.target - Path Units. Dec 16 16:15:09.716910 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 16:15:09.720210 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 16:15:09.721067 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 16:15:09.722391 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 16:15:09.723950 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 16:15:09.724281 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 16:15:09.725629 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 16:15:09.725691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 16:15:09.726987 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 16:15:09.727110 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 16:15:09.728358 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 16:15:09.728429 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 16:15:09.729698 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 16:15:09.729783 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 16:15:09.731502 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 16:15:09.734075 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 16:15:09.738343 systemd-networkd[826]: eth0: DHCPv6 lease lost Dec 16 16:15:09.743744 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 16:15:09.743999 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 16:15:09.749769 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 16:15:09.751310 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 16:15:09.752415 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 16 16:15:09.755259 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 16:15:09.756192 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 16:15:09.757057 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 16:15:09.757143 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 16:15:09.759942 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 16:15:09.762134 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 16:15:09.762223 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 16:15:09.764525 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 16:15:09.764606 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 16:15:09.769371 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 16:15:09.769447 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 16:15:09.770919 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 16:15:09.770990 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 16:15:09.774903 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 16:15:09.778913 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 16:15:09.779022 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 16:15:09.786710 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 16:15:09.787615 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 16:15:09.789920 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 16:15:09.790604 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 16:15:09.792650 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 16:15:09.792726 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 16:15:09.794301 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 16:15:09.794386 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 16:15:09.797724 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 16:15:09.797808 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 16:15:09.800389 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 16:15:09.800487 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 16:15:09.802999 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 16:15:09.804858 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 16:15:09.804950 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 16:15:09.813191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 16:15:09.813322 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 16:15:09.815102 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 16:15:09.815187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 16:15:09.820393 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 16 16:15:09.820492 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 16 16:15:09.820572 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 16:15:09.821274 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 16:15:09.821416 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 16:15:09.830312 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 16:15:09.831414 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 16:15:09.832674 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 16:15:09.835267 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 16:15:09.861157 systemd[1]: Switching root. Dec 16 16:15:09.894828 systemd-journald[209]: Journal stopped Dec 16 16:15:11.565843 systemd-journald[209]: Received SIGTERM from PID 1 (systemd). Dec 16 16:15:11.565951 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 16:15:11.565996 kernel: SELinux: policy capability open_perms=1 Dec 16 16:15:11.575141 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 16:15:11.575173 kernel: SELinux: policy capability always_check_network=0 Dec 16 16:15:11.575208 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 16:15:11.575239 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 16:15:11.575259 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 16:15:11.575279 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 16:15:11.575306 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 16:15:11.575326 kernel: audit: type=1403 audit(1765901710.180:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 16:15:11.575369 systemd[1]: Successfully loaded SELinux policy in 82.969ms. Dec 16 16:15:11.575404 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.933ms. Dec 16 16:15:11.575428 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 16:15:11.575452 systemd[1]: Detected virtualization kvm. Dec 16 16:15:11.575473 systemd[1]: Detected architecture x86-64. Dec 16 16:15:11.575501 systemd[1]: Detected first boot. Dec 16 16:15:11.575525 systemd[1]: Hostname set to . Dec 16 16:15:11.575548 systemd[1]: Initializing machine ID from VM UUID. Dec 16 16:15:11.575570 zram_generator::config[1113]: No configuration found. Dec 16 16:15:11.575618 kernel: Guest personality initialized and is inactive Dec 16 16:15:11.575640 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 16:15:11.575661 kernel: Initialized host personality Dec 16 16:15:11.575681 kernel: NET: Registered PF_VSOCK protocol family Dec 16 16:15:11.575709 systemd[1]: Populated /etc with preset unit settings. Dec 16 16:15:11.575732 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 16:15:11.575755 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
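"Initializing machine ID from VM UUID" above means systemd seeds the machine ID from the hypervisor-provided product UUID on this first boot instead of generating a random one. A minimal sketch of where that value is exposed and how it can be normalized into machine-id form; systemd's exact derivation may differ, this only illustrates the idea.

# Illustrative: derive a machine-id-style string from the DMI product UUID
# exposed by the hypervisor (reading it usually requires root).
with open("/sys/class/dmi/id/product_uuid") as f:
    product_uuid = f.read().strip()

machine_id = product_uuid.replace("-", "").lower()  # 32 hex chars, like /etc/machine-id
print(machine_id)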
Dec 16 16:15:11.575776 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 16:15:11.575821 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 16:15:11.575847 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 16:15:11.575869 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 16:15:11.575890 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 16:15:11.575913 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 16:15:11.575965 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 16:15:11.576004 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 16:15:11.576029 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 16:15:11.576085 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 16:15:11.576108 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 16:15:11.576143 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 16:15:11.576168 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 16:15:11.576210 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 16:15:11.576254 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 16:15:11.576279 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 16:15:11.576301 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 16:15:11.576322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 16:15:11.576344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 16:15:11.576365 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 16:15:11.576387 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 16:15:11.576423 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 16:15:11.576447 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 16:15:11.576469 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 16:15:11.576491 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 16:15:11.576513 systemd[1]: Reached target slices.target - Slice Units. Dec 16 16:15:11.576534 systemd[1]: Reached target swap.target - Swaps. Dec 16 16:15:11.576556 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 16:15:11.576587 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 16:15:11.576625 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 16:15:11.576659 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 16:15:11.576697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 16:15:11.576721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 16:15:11.576752 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Dec 16 16:15:11.576775 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 16:15:11.576797 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 16:15:11.576819 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 16:15:11.576840 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:15:11.576862 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 16:15:11.576898 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 16:15:11.576922 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 16:15:11.576945 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 16:15:11.576967 systemd[1]: Reached target machines.target - Containers. Dec 16 16:15:11.576989 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 16:15:11.577011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 16:15:11.579898 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 16:15:11.579938 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 16:15:11.579963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 16:15:11.580006 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 16:15:11.580049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 16:15:11.580086 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 16:15:11.580112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 16:15:11.580135 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 16:15:11.580157 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 16:15:11.580178 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 16:15:11.580214 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 16:15:11.580252 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 16:15:11.580292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 16:15:11.580317 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 16:15:11.580339 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 16:15:11.580363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 16:15:11.580399 kernel: loop: module loaded Dec 16 16:15:11.580426 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 16:15:11.580464 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 16:15:11.580503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 16:15:11.580527 systemd[1]: verity-setup.service: Deactivated successfully. 
Dec 16 16:15:11.580564 systemd[1]: Stopped verity-setup.service. Dec 16 16:15:11.580589 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:15:11.580612 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 16:15:11.580645 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 16:15:11.580669 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 16:15:11.580690 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 16:15:11.580713 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 16:15:11.580734 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 16:15:11.580770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 16:15:11.580794 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 16:15:11.580816 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 16:15:11.580837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 16:15:11.580859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 16:15:11.580880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 16:15:11.580902 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 16:15:11.580924 kernel: fuse: init (API version 7.41) Dec 16 16:15:11.580947 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 16:15:11.580984 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 16:15:11.581009 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 16:15:11.581060 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 16:15:11.581088 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 16:15:11.581110 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 16:15:11.581192 systemd-journald[1200]: Collecting audit messages is disabled. Dec 16 16:15:11.581234 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 16:15:11.581275 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 16:15:11.581300 kernel: ACPI: bus type drm_connector registered Dec 16 16:15:11.581336 systemd-journald[1200]: Journal started Dec 16 16:15:11.581373 systemd-journald[1200]: Runtime Journal (/run/log/journal/68a44ef379e04e28ad7258e8a8ad0c4c) is 4.7M, max 37.8M, 33.1M free. Dec 16 16:15:11.596716 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 16:15:11.596814 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 16:15:11.596851 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 16:15:11.051350 systemd[1]: Queued start job for default target multi-user.target. Dec 16 16:15:11.067666 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 16:15:11.068583 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 16:15:11.607174 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 16 16:15:11.615059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 16:15:11.622063 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 16:15:11.622132 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 16:15:11.627628 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 16:15:11.634056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 16:15:11.640057 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 16:15:11.645489 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 16:15:11.648099 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 16:15:11.649557 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 16:15:11.650129 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 16:15:11.652178 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 16:15:11.653416 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 16:15:11.655319 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 16:15:11.656385 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 16:15:11.692541 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 16:15:11.700056 kernel: loop0: detected capacity change from 0 to 219144 Dec 16 16:15:11.703387 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 16:15:11.710639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 16:15:11.718427 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 16:15:11.720693 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 16:15:11.724463 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 16:15:11.742120 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 16:15:11.762060 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 16:15:11.800961 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 16:15:11.815075 kernel: loop1: detected capacity change from 0 to 110984 Dec 16 16:15:11.820741 systemd-journald[1200]: Time spent on flushing to /var/log/journal/68a44ef379e04e28ad7258e8a8ad0c4c is 66.948ms for 1174 entries. Dec 16 16:15:11.820741 systemd-journald[1200]: System Journal (/var/log/journal/68a44ef379e04e28ad7258e8a8ad0c4c) is 8M, max 584.8M, 576.8M free. Dec 16 16:15:11.919057 systemd-journald[1200]: Received client request to flush runtime journal. Dec 16 16:15:11.911709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 16:15:11.923813 kernel: loop2: detected capacity change from 0 to 128560 Dec 16 16:15:11.923994 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 16:15:11.956134 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Dec 16 16:15:11.971238 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 16:15:11.993326 kernel: loop3: detected capacity change from 0 to 8 Dec 16 16:15:12.047065 kernel: loop4: detected capacity change from 0 to 219144 Dec 16 16:15:12.073649 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 16:15:12.080764 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Dec 16 16:15:12.080792 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Dec 16 16:15:12.095067 kernel: loop5: detected capacity change from 0 to 110984 Dec 16 16:15:12.106777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 16:15:12.110125 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 16:15:12.168492 kernel: loop6: detected capacity change from 0 to 128560 Dec 16 16:15:12.219858 kernel: loop7: detected capacity change from 0 to 8 Dec 16 16:15:12.226302 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 16 16:15:12.229732 (sd-merge)[1274]: Merged extensions into '/usr'. Dec 16 16:15:12.239512 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 16:15:12.239539 systemd[1]: Reloading... Dec 16 16:15:12.494082 zram_generator::config[1302]: No configuration found. Dec 16 16:15:12.772337 ldconfig[1227]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 16:15:12.997442 systemd[1]: Reloading finished in 757 ms. Dec 16 16:15:13.023764 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 16:15:13.030683 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 16:15:13.032170 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 16:15:13.056740 systemd[1]: Starting ensure-sysext.service... Dec 16 16:15:13.062247 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 16:15:13.065687 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 16:15:13.097275 systemd[1]: Reload requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)... Dec 16 16:15:13.097348 systemd[1]: Reloading... Dec 16 16:15:13.126120 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 16:15:13.126454 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 16:15:13.126988 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 16:15:13.129834 systemd-udevd[1361]: Using default interface naming scheme 'v255'. Dec 16 16:15:13.131515 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 16:15:13.132999 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 16:15:13.134570 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Dec 16 16:15:13.135241 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Dec 16 16:15:13.149260 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. 
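The (sd-merge) lines below show systemd-sysext finding four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') and overlaying them onto /usr, which is why the service reload that follows picks up new units. A minimal sketch, assuming the standard /etc, /run and /var/lib extension search paths, that lists the images such a merge would consider; it illustrates only the discovery step, not the overlay mount itself.

import glob
import os

# Illustrative: list the sysext images systemd-sysext would consider merging.
SEARCH_PATHS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

images = []
for path in SEARCH_PATHS:
    images += glob.glob(os.path.join(path, "*.raw"))
    # Unpacked extension directories are accepted alongside raw images.
    images += [p for p in glob.glob(os.path.join(path, "*")) if os.path.isdir(p)]

for image in sorted(images):
    print(image)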
Dec 16 16:15:13.150630 systemd-tmpfiles[1360]: Skipping /boot Dec 16 16:15:13.178102 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 16:15:13.178656 systemd-tmpfiles[1360]: Skipping /boot Dec 16 16:15:13.217240 zram_generator::config[1385]: No configuration found. Dec 16 16:15:13.658133 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 16:15:13.686345 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 16:15:13.697520 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 16:15:13.697740 systemd[1]: Reloading finished in 599 ms. Dec 16 16:15:13.708752 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 16:15:13.710070 kernel: ACPI: button: Power Button [PWRF] Dec 16 16:15:13.721754 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 16:15:13.756969 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 16:15:13.773397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:15:13.775945 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 16:15:13.780344 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 16:15:13.782337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 16:15:13.784432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 16:15:13.787817 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 16:15:13.796603 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 16:15:13.797587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 16:15:13.800980 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 16:15:13.802151 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 16:15:13.808365 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 16:15:13.814483 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 16:15:13.820372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 16:15:13.842143 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 16:15:13.843103 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:15:13.855475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 16:15:13.864273 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 16:15:13.871706 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 16:15:13.874481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 16:15:13.875381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 16 16:15:13.899290 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 16:15:13.900062 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 16:15:13.900519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 16:15:13.909835 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 16:15:13.906152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:15:13.906624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 16:15:13.911313 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 16:15:13.914163 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 16:15:13.927649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 16:15:13.929655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 16:15:13.929839 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 16:15:13.934601 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 16:15:13.935863 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:15:13.937600 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 16:15:13.939100 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 16:15:13.948248 systemd[1]: Finished ensure-sysext.service. Dec 16 16:15:13.951875 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 16:15:13.956297 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 16:15:13.995649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 16:15:13.996057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 16:15:13.997169 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 16:15:14.001664 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 16:15:14.007703 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 16:15:14.011222 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 16:15:14.017487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 16:15:14.018317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 16:15:14.020706 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 16:15:14.059210 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 16:15:14.061923 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 16 16:15:14.069312 augenrules[1533]: No rules Dec 16 16:15:14.071932 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 16:15:14.073134 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 16:15:14.087327 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 16:15:14.184351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 16:15:14.430896 systemd-networkd[1483]: lo: Link UP Dec 16 16:15:14.432105 systemd-networkd[1483]: lo: Gained carrier Dec 16 16:15:14.439299 systemd-networkd[1483]: Enumeration completed Dec 16 16:15:14.439452 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 16:15:14.443629 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 16:15:14.443643 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 16:15:14.447516 systemd-networkd[1483]: eth0: Link UP Dec 16 16:15:14.447778 systemd-networkd[1483]: eth0: Gained carrier Dec 16 16:15:14.447807 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 16:15:14.467375 systemd-resolved[1484]: Positive Trust Anchors: Dec 16 16:15:14.470074 systemd-resolved[1484]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 16:15:14.470136 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 16:15:14.477667 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 16:15:14.479196 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 16:15:14.481127 systemd-networkd[1483]: eth0: DHCPv4 address 10.230.59.10/30, gateway 10.230.59.9 acquired from 10.230.59.9 Dec 16 16:15:14.482142 systemd-resolved[1484]: Using system hostname 'srv-899vz.gb1.brightbox.com'. Dec 16 16:15:14.482819 systemd-timesyncd[1504]: Network configuration changed, trying to establish connection. Dec 16 16:15:14.484550 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 16:15:14.489109 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 16:15:14.490239 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 16:15:14.492619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 16:15:14.495402 systemd[1]: Reached target network.target - Network. Dec 16 16:15:14.496399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 16:15:14.498353 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 16:15:14.499253 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Dec 16 16:15:14.501228 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 16:15:14.502026 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 16:15:14.504421 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 16:15:14.505296 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 16:15:14.506146 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 16:15:14.508125 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 16:15:14.508178 systemd[1]: Reached target paths.target - Path Units. Dec 16 16:15:14.508835 systemd[1]: Reached target timers.target - Timer Units. Dec 16 16:15:14.511444 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 16:15:14.516752 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 16:15:14.524312 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 16:15:14.526340 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 16:15:14.528274 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 16:15:14.539236 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 16:15:14.541279 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 16:15:14.543995 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 16:15:14.546636 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 16:15:14.547735 systemd[1]: Reached target basic.target - Basic System. Dec 16 16:15:14.548670 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 16:15:14.548744 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 16:15:14.558168 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 16:15:14.563471 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 16:15:14.570287 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 16:15:14.576770 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 16:15:14.582259 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 16:15:14.587941 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 16:15:14.589397 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 16:15:14.601353 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 16:15:14.606556 jq[1564]: false Dec 16 16:15:14.605765 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 16:15:14.611929 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 16:15:14.634174 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:14.626481 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 16:15:14.634732 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 16 16:15:14.649339 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 16:15:14.652634 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 16:15:14.658667 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 16:15:14.662806 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 16:15:14.673209 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 16:15:14.676799 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 16:15:14.704133 update_engine[1578]: I20251216 16:15:14.697860 1578 main.cc:92] Flatcar Update Engine starting Dec 16 16:15:14.686119 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 16:15:14.687527 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 16:15:14.688125 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 16:15:14.718522 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache Dec 16 16:15:14.713405 oslogin_cache_refresh[1566]: Refreshing passwd entry cache Dec 16 16:15:14.723259 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 16:15:14.727318 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 16:15:14.728630 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 16:15:14.728986 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 16:15:14.733931 extend-filesystems[1565]: Found /dev/vda6 Dec 16 16:15:14.744903 jq[1579]: true Dec 16 16:15:14.746108 dbus-daemon[1562]: [system] SELinux support is enabled Dec 16 16:15:14.748249 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 16:15:14.748789 dbus-daemon[1562]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1483 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 16:15:14.751723 update_engine[1578]: I20251216 16:15:14.751012 1578 update_check_scheduler.cc:74] Next update check in 5m45s Dec 16 16:15:14.755365 extend-filesystems[1565]: Found /dev/vda9 Dec 16 16:15:14.756546 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 16:15:14.756588 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 16:15:14.758725 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 16:15:14.758764 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 16:15:14.759685 systemd[1]: Started update-engine.service - Update Engine. 
Dec 16 16:15:14.761136 dbus-daemon[1562]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 16:15:14.761811 extend-filesystems[1565]: Checking size of /dev/vda9 Dec 16 16:15:14.767352 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 16:15:14.774356 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 16:15:14.777373 (ntainerd)[1594]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 16:15:14.796054 tar[1584]: linux-amd64/LICENSE Dec 16 16:15:14.796054 tar[1584]: linux-amd64/helm Dec 16 16:15:14.799813 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting Dec 16 16:15:14.799813 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 16:15:14.799813 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache Dec 16 16:15:14.799958 jq[1601]: true Dec 16 16:15:14.796965 oslogin_cache_refresh[1566]: Failure getting users, quitting Dec 16 16:15:14.796992 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 16:15:14.797127 oslogin_cache_refresh[1566]: Refreshing group entry cache Dec 16 16:15:14.802722 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting Dec 16 16:15:14.802722 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 16:15:14.801787 oslogin_cache_refresh[1566]: Failure getting groups, quitting Dec 16 16:15:14.801804 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 16:15:14.803255 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 16:15:14.807614 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 16:15:14.818330 extend-filesystems[1565]: Resized partition /dev/vda9 Dec 16 16:15:14.825018 extend-filesystems[1614]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 16:15:14.833910 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 16 16:15:14.996186 systemd-logind[1575]: Watching system buttons on /dev/input/event3 (Power Button) Dec 16 16:15:14.996249 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 16:15:15.002464 systemd-logind[1575]: New seat seat0. Dec 16 16:15:15.011441 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 16:15:15.120131 bash[1629]: Updated "/home/core/.ssh/authorized_keys" Dec 16 16:15:15.118098 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 16:15:15.131592 systemd[1]: Starting sshkeys.service... Dec 16 16:15:15.218742 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 16:15:15.222696 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 16:15:15.275060 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 16 16:15:15.314197 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:15.293348 dbus-daemon[1562]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 16:15:15.280883 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Dec 16 16:15:15.297807 dbus-daemon[1562]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1605 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 16:15:15.315865 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 16:15:15.318461 extend-filesystems[1614]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 16:15:15.318461 extend-filesystems[1614]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 16 16:15:15.318461 extend-filesystems[1614]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 16 16:15:15.329363 extend-filesystems[1565]: Resized filesystem in /dev/vda9 Dec 16 16:15:15.320463 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 16:15:15.321210 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 16:15:15.386612 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 16:15:15.402283 systemd-timesyncd[1504]: Contacted time server 82.219.4.30:123 (0.flatcar.pool.ntp.org). Dec 16 16:15:15.402390 systemd-timesyncd[1504]: Initial clock synchronization to Tue 2025-12-16 16:15:15.792030 UTC. Dec 16 16:15:15.582873 containerd[1594]: time="2025-12-16T16:15:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 16:15:15.588101 containerd[1594]: time="2025-12-16T16:15:15.587901156Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 16:15:15.631489 polkitd[1645]: Started polkitd version 126 Dec 16 16:15:15.641068 containerd[1594]: time="2025-12-16T16:15:15.640417301Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="24.595µs" Dec 16 16:15:15.644065 containerd[1594]: time="2025-12-16T16:15:15.643069984Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 16:15:15.644065 containerd[1594]: time="2025-12-16T16:15:15.643152100Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 16:15:15.644065 containerd[1594]: time="2025-12-16T16:15:15.643607220Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 16:15:15.644065 containerd[1594]: time="2025-12-16T16:15:15.643637438Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 16:15:15.644065 containerd[1594]: time="2025-12-16T16:15:15.643686908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 16:15:15.644065 containerd[1594]: time="2025-12-16T16:15:15.643853145Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 16:15:15.644065 containerd[1594]: time="2025-12-16T16:15:15.643877097Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 16:15:15.643270 polkitd[1645]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 16:15:15.643750 polkitd[1645]: Loading rules from directory /run/polkit-1/rules.d Dec 16 16:15:15.643816 polkitd[1645]: Error opening rules 
directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 16:15:15.645448 polkitd[1645]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 16:15:15.645504 polkitd[1645]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 16:15:15.645583 polkitd[1645]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 16:15:15.646001 containerd[1594]: time="2025-12-16T16:15:15.645962037Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 16:15:15.648198 containerd[1594]: time="2025-12-16T16:15:15.647069923Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 16:15:15.648198 containerd[1594]: time="2025-12-16T16:15:15.647111814Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 16:15:15.648198 containerd[1594]: time="2025-12-16T16:15:15.647129827Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 16:15:15.648198 containerd[1594]: time="2025-12-16T16:15:15.647402624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 16:15:15.647749 polkitd[1645]: Finished loading, compiling and executing 2 rules Dec 16 16:15:15.648572 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 16 16:15:15.649319 dbus-daemon[1562]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 16:15:15.650967 polkitd[1645]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 16:15:15.653630 containerd[1594]: time="2025-12-16T16:15:15.653164227Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 16:15:15.653630 containerd[1594]: time="2025-12-16T16:15:15.653232753Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 16:15:15.653630 containerd[1594]: time="2025-12-16T16:15:15.653257148Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 16:15:15.653630 containerd[1594]: time="2025-12-16T16:15:15.653358483Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 16:15:15.654001 containerd[1594]: time="2025-12-16T16:15:15.653949977Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 16:15:15.654229 containerd[1594]: time="2025-12-16T16:15:15.654197464Z" level=info msg="metadata content store policy set" policy=shared Dec 16 16:15:15.661548 containerd[1594]: time="2025-12-16T16:15:15.661500057Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 16:15:15.661876 containerd[1594]: time="2025-12-16T16:15:15.661841217Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 16:15:15.663392 containerd[1594]: time="2025-12-16T16:15:15.661941121Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 16:15:15.663448 containerd[1594]: time="2025-12-16T16:15:15.663404950Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 16:15:15.663448 containerd[1594]: time="2025-12-16T16:15:15.663436441Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 16:15:15.663576 containerd[1594]: time="2025-12-16T16:15:15.663457999Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 16:15:15.663576 containerd[1594]: time="2025-12-16T16:15:15.663526330Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 16:15:15.663576 containerd[1594]: time="2025-12-16T16:15:15.663553915Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 16:15:15.663733 containerd[1594]: time="2025-12-16T16:15:15.663588982Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 16:15:15.663733 containerd[1594]: time="2025-12-16T16:15:15.663607676Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 16:15:15.663733 containerd[1594]: time="2025-12-16T16:15:15.663624926Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 16:15:15.663733 containerd[1594]: time="2025-12-16T16:15:15.663646195Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 
16:15:15.666049 containerd[1594]: time="2025-12-16T16:15:15.663862685Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 16:15:15.666049 containerd[1594]: time="2025-12-16T16:15:15.663913938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 16:15:15.666049 containerd[1594]: time="2025-12-16T16:15:15.663957940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 16:15:15.666049 containerd[1594]: time="2025-12-16T16:15:15.664015988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 16:15:15.666049 containerd[1594]: time="2025-12-16T16:15:15.664073839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 16:15:15.666049 containerd[1594]: time="2025-12-16T16:15:15.664099642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 16:15:15.666049 containerd[1594]: time="2025-12-16T16:15:15.664125321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 16:15:15.666049 containerd[1594]: time="2025-12-16T16:15:15.664150307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 16:15:15.684167 containerd[1594]: time="2025-12-16T16:15:15.684102349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 16:15:15.684167 containerd[1594]: time="2025-12-16T16:15:15.684165489Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 16:15:15.684447 containerd[1594]: time="2025-12-16T16:15:15.684193689Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 16:15:15.684447 containerd[1594]: time="2025-12-16T16:15:15.684397534Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 16:15:15.684447 containerd[1594]: time="2025-12-16T16:15:15.684425330Z" level=info msg="Start snapshots syncer" Dec 16 16:15:15.684590 containerd[1594]: time="2025-12-16T16:15:15.684472678Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 16:15:15.689095 containerd[1594]: time="2025-12-16T16:15:15.684998431Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 16:15:15.689617 containerd[1594]: time="2025-12-16T16:15:15.689150805Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 16:15:15.695201 containerd[1594]: time="2025-12-16T16:15:15.695113303Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 16:15:15.695416 containerd[1594]: time="2025-12-16T16:15:15.695382058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 16:15:15.695752 containerd[1594]: time="2025-12-16T16:15:15.695436215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 16:15:15.695752 containerd[1594]: time="2025-12-16T16:15:15.695469340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 16:15:15.695752 containerd[1594]: time="2025-12-16T16:15:15.695498754Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 16:15:15.695752 containerd[1594]: time="2025-12-16T16:15:15.695543688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 16:15:15.695752 containerd[1594]: time="2025-12-16T16:15:15.695597540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 16:15:15.695752 containerd[1594]: time="2025-12-16T16:15:15.695626420Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 16:15:15.695752 containerd[1594]: time="2025-12-16T16:15:15.695689477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 16:15:15.695752 containerd[1594]: 
time="2025-12-16T16:15:15.695730169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 16:15:15.696011 containerd[1594]: time="2025-12-16T16:15:15.695758737Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 16:15:15.696011 containerd[1594]: time="2025-12-16T16:15:15.695820210Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 16:15:15.696011 containerd[1594]: time="2025-12-16T16:15:15.695855295Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 16:15:15.696011 containerd[1594]: time="2025-12-16T16:15:15.695899237Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 16:15:15.696011 containerd[1594]: time="2025-12-16T16:15:15.695927897Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 16:15:15.696011 containerd[1594]: time="2025-12-16T16:15:15.695948861Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 16:15:15.696011 containerd[1594]: time="2025-12-16T16:15:15.695968531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 16:15:15.696011 containerd[1594]: time="2025-12-16T16:15:15.696001806Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 16:15:15.696324 containerd[1594]: time="2025-12-16T16:15:15.696075357Z" level=info msg="runtime interface created" Dec 16 16:15:15.696324 containerd[1594]: time="2025-12-16T16:15:15.696092695Z" level=info msg="created NRI interface" Dec 16 16:15:15.696324 containerd[1594]: time="2025-12-16T16:15:15.696109308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 16:15:15.696324 containerd[1594]: time="2025-12-16T16:15:15.696138554Z" level=info msg="Connect containerd service" Dec 16 16:15:15.696324 containerd[1594]: time="2025-12-16T16:15:15.696177982Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 16:15:15.697940 systemd-hostnamed[1605]: Hostname set to (static) Dec 16 16:15:15.700061 containerd[1594]: time="2025-12-16T16:15:15.699210763Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 16:15:15.925202 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:16.168811 containerd[1594]: time="2025-12-16T16:15:16.168334599Z" level=info msg="Start subscribing containerd event" Dec 16 16:15:16.171305 containerd[1594]: time="2025-12-16T16:15:16.170710870Z" level=info msg="Start recovering state" Dec 16 16:15:16.171305 containerd[1594]: time="2025-12-16T16:15:16.171183425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 16:15:16.171479 containerd[1594]: time="2025-12-16T16:15:16.171330175Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 16:15:16.174108 containerd[1594]: time="2025-12-16T16:15:16.172705720Z" level=info msg="Start event monitor" Dec 16 16:15:16.174108 containerd[1594]: time="2025-12-16T16:15:16.172859040Z" level=info msg="Start cni network conf syncer for default" Dec 16 16:15:16.174108 containerd[1594]: time="2025-12-16T16:15:16.172912108Z" level=info msg="Start streaming server" Dec 16 16:15:16.174108 containerd[1594]: time="2025-12-16T16:15:16.172996212Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 16:15:16.174108 containerd[1594]: time="2025-12-16T16:15:16.173072078Z" level=info msg="runtime interface starting up..." Dec 16 16:15:16.174108 containerd[1594]: time="2025-12-16T16:15:16.173152852Z" level=info msg="starting plugins..." Dec 16 16:15:16.174108 containerd[1594]: time="2025-12-16T16:15:16.173228475Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 16:15:16.176790 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 16:15:16.187995 containerd[1594]: time="2025-12-16T16:15:16.187782662Z" level=info msg="containerd successfully booted in 0.608219s" Dec 16 16:15:16.224621 systemd-networkd[1483]: eth0: Gained IPv6LL Dec 16 16:15:16.240492 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 16:15:16.246063 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 16:15:16.255632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:15:16.272850 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 16:15:16.304199 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:16.389502 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 16:15:16.622595 sshd_keygen[1609]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 16:15:16.661661 tar[1584]: linux-amd64/README.md Dec 16 16:15:16.683251 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 16:15:16.688663 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 16:15:16.706487 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 16:15:16.724574 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 16:15:16.725430 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 16:15:16.730684 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 16:15:16.762346 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 16:15:16.766422 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 16:15:16.773588 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 16:15:16.774732 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 16:15:17.562248 systemd-networkd[1483]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8ec2:24:19ff:fee6:3b0a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8ec2:24:19ff:fee6:3b0a/64 assigned by NDisc. Dec 16 16:15:17.562269 systemd-networkd[1483]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 16 16:15:17.987208 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:18.121395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 16:15:18.137970 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 16:15:18.364489 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:18.852801 kubelet[1711]: E1216 16:15:18.852719 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 16:15:18.856432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 16:15:18.856705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 16:15:18.857807 systemd[1]: kubelet.service: Consumed 1.690s CPU time, 257.3M memory peak. Dec 16 16:15:20.112400 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 16:15:20.115879 systemd[1]: Started sshd@0-10.230.59.10:22-139.178.68.195:50086.service - OpenSSH per-connection server daemon (139.178.68.195:50086). Dec 16 16:15:21.081791 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 50086 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:15:21.084846 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:15:21.109079 systemd-logind[1575]: New session 1 of user core. Dec 16 16:15:21.112773 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 16:15:21.115842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 16:15:21.158218 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 16:15:21.164169 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 16:15:21.191408 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 16:15:21.196580 systemd-logind[1575]: New session c1 of user core. Dec 16 16:15:21.400631 systemd[1724]: Queued start job for default target default.target. Dec 16 16:15:21.412198 systemd[1724]: Created slice app.slice - User Application Slice. Dec 16 16:15:21.412255 systemd[1724]: Reached target paths.target - Paths. Dec 16 16:15:21.412337 systemd[1724]: Reached target timers.target - Timers. Dec 16 16:15:21.414714 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 16:15:21.430083 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 16:15:21.430170 systemd[1724]: Reached target sockets.target - Sockets. Dec 16 16:15:21.430247 systemd[1724]: Reached target basic.target - Basic System. Dec 16 16:15:21.430328 systemd[1724]: Reached target default.target - Main User Target. Dec 16 16:15:21.430416 systemd[1724]: Startup finished in 221ms. Dec 16 16:15:21.430691 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 16:15:21.443390 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 16:15:21.848336 login[1703]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 16:15:21.872408 login[1702]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 16:15:21.880155 systemd-logind[1575]: New session 2 of user core. Dec 16 16:15:21.883719 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 16 16:15:21.893225 systemd-logind[1575]: New session 3 of user core. Dec 16 16:15:21.897588 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 16:15:22.007107 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:22.019439 coreos-metadata[1561]: Dec 16 16:15:22.019 WARN failed to locate config-drive, using the metadata service API instead Dec 16 16:15:22.052843 coreos-metadata[1561]: Dec 16 16:15:22.052 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 16 16:15:22.062495 coreos-metadata[1561]: Dec 16 16:15:22.062 INFO Fetch failed with 404: resource not found Dec 16 16:15:22.062717 coreos-metadata[1561]: Dec 16 16:15:22.062 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 16:15:22.063743 coreos-metadata[1561]: Dec 16 16:15:22.063 INFO Fetch successful Dec 16 16:15:22.064115 coreos-metadata[1561]: Dec 16 16:15:22.064 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 16 16:15:22.076838 coreos-metadata[1561]: Dec 16 16:15:22.076 INFO Fetch successful Dec 16 16:15:22.077002 coreos-metadata[1561]: Dec 16 16:15:22.076 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 16 16:15:22.094533 coreos-metadata[1561]: Dec 16 16:15:22.094 INFO Fetch successful Dec 16 16:15:22.094701 coreos-metadata[1561]: Dec 16 16:15:22.094 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 16 16:15:22.103727 systemd[1]: Started sshd@1-10.230.59.10:22-139.178.68.195:50088.service - OpenSSH per-connection server daemon (139.178.68.195:50088). Dec 16 16:15:22.110729 coreos-metadata[1561]: Dec 16 16:15:22.110 INFO Fetch successful Dec 16 16:15:22.111120 coreos-metadata[1561]: Dec 16 16:15:22.111 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 16 16:15:22.136076 coreos-metadata[1561]: Dec 16 16:15:22.135 INFO Fetch successful Dec 16 16:15:22.180570 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 16:15:22.181565 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 16:15:22.388155 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:15:22.400219 coreos-metadata[1636]: Dec 16 16:15:22.400 WARN failed to locate config-drive, using the metadata service API instead Dec 16 16:15:22.423341 coreos-metadata[1636]: Dec 16 16:15:22.423 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 16 16:15:22.451095 coreos-metadata[1636]: Dec 16 16:15:22.450 INFO Fetch successful Dec 16 16:15:22.451442 coreos-metadata[1636]: Dec 16 16:15:22.451 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 16:15:22.480441 coreos-metadata[1636]: Dec 16 16:15:22.480 INFO Fetch successful Dec 16 16:15:22.483367 unknown[1636]: wrote ssh authorized keys file for user: core Dec 16 16:15:22.517232 update-ssh-keys[1772]: Updated "/home/core/.ssh/authorized_keys" Dec 16 16:15:22.518620 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 16:15:22.524357 systemd[1]: Finished sshkeys.service. Dec 16 16:15:22.526294 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 16:15:22.529204 systemd[1]: Startup finished in 3.893s (kernel) + 16.502s (initrd) + 12.428s (userspace) = 32.825s. 
Dec 16 16:15:23.039690 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 50088 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:15:23.041973 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:15:23.050931 systemd-logind[1575]: New session 4 of user core. Dec 16 16:15:23.060356 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 16:15:23.678096 sshd[1776]: Connection closed by 139.178.68.195 port 50088 Dec 16 16:15:23.678099 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Dec 16 16:15:23.683489 systemd[1]: sshd@1-10.230.59.10:22-139.178.68.195:50088.service: Deactivated successfully. Dec 16 16:15:23.686365 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 16:15:23.688901 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit. Dec 16 16:15:23.691227 systemd-logind[1575]: Removed session 4. Dec 16 16:15:23.840812 systemd[1]: Started sshd@2-10.230.59.10:22-139.178.68.195:50096.service - OpenSSH per-connection server daemon (139.178.68.195:50096). Dec 16 16:15:24.767252 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 50096 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:15:24.769395 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:15:24.777441 systemd-logind[1575]: New session 5 of user core. Dec 16 16:15:24.786398 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 16:15:25.396086 sshd[1785]: Connection closed by 139.178.68.195 port 50096 Dec 16 16:15:25.397230 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Dec 16 16:15:25.403536 systemd[1]: sshd@2-10.230.59.10:22-139.178.68.195:50096.service: Deactivated successfully. Dec 16 16:15:25.406632 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 16:15:25.408913 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit. Dec 16 16:15:25.411026 systemd-logind[1575]: Removed session 5. Dec 16 16:15:25.563505 systemd[1]: Started sshd@3-10.230.59.10:22-139.178.68.195:50110.service - OpenSSH per-connection server daemon (139.178.68.195:50110). Dec 16 16:15:26.498721 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 50110 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:15:26.500888 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:15:26.508446 systemd-logind[1575]: New session 6 of user core. Dec 16 16:15:26.518383 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 16:15:27.140475 sshd[1794]: Connection closed by 139.178.68.195 port 50110 Dec 16 16:15:27.141753 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Dec 16 16:15:27.148774 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit. Dec 16 16:15:27.149587 systemd[1]: sshd@3-10.230.59.10:22-139.178.68.195:50110.service: Deactivated successfully. Dec 16 16:15:27.152301 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 16:15:27.154954 systemd-logind[1575]: Removed session 6. Dec 16 16:15:27.305226 systemd[1]: Started sshd@4-10.230.59.10:22-139.178.68.195:50118.service - OpenSSH per-connection server daemon (139.178.68.195:50118). 
Dec 16 16:15:28.236174 sshd[1800]: Accepted publickey for core from 139.178.68.195 port 50118 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:15:28.238186 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:15:28.247083 systemd-logind[1575]: New session 7 of user core. Dec 16 16:15:28.253433 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 16:15:28.739746 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 16:15:28.740260 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 16:15:28.758735 sudo[1804]: pam_unix(sudo:session): session closed for user root Dec 16 16:15:28.906833 sshd[1803]: Connection closed by 139.178.68.195 port 50118 Dec 16 16:15:28.908268 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Dec 16 16:15:28.919338 systemd[1]: sshd@4-10.230.59.10:22-139.178.68.195:50118.service: Deactivated successfully. Dec 16 16:15:28.922499 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 16:15:28.924359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 16:15:28.925681 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. Dec 16 16:15:28.929468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:15:28.931643 systemd-logind[1575]: Removed session 7. Dec 16 16:15:29.069380 systemd[1]: Started sshd@5-10.230.59.10:22-139.178.68.195:50126.service - OpenSSH per-connection server daemon (139.178.68.195:50126). Dec 16 16:15:29.183962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:15:29.196996 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 16:15:29.267816 kubelet[1821]: E1216 16:15:29.267715 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 16:15:29.273386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 16:15:29.274070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 16:15:29.275226 systemd[1]: kubelet.service: Consumed 282ms CPU time, 110.2M memory peak. Dec 16 16:15:30.003545 sshd[1813]: Accepted publickey for core from 139.178.68.195 port 50126 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:15:30.005688 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:15:30.013386 systemd-logind[1575]: New session 8 of user core. Dec 16 16:15:30.026736 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 16:15:30.560548 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 16:15:30.561099 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 16:15:30.569406 sudo[1830]: pam_unix(sudo:session): session closed for user root Dec 16 16:15:30.578913 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 16:15:30.580057 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 16:15:30.597067 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 16:15:30.659539 augenrules[1852]: No rules Dec 16 16:15:30.660590 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 16:15:30.660997 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 16:15:30.662582 sudo[1829]: pam_unix(sudo:session): session closed for user root Dec 16 16:15:30.837411 sshd[1828]: Connection closed by 139.178.68.195 port 50126 Dec 16 16:15:30.837113 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Dec 16 16:15:30.844232 systemd[1]: sshd@5-10.230.59.10:22-139.178.68.195:50126.service: Deactivated successfully. Dec 16 16:15:30.846950 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 16:15:30.848494 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit. Dec 16 16:15:30.851051 systemd-logind[1575]: Removed session 8. Dec 16 16:15:31.036709 systemd[1]: Started sshd@6-10.230.59.10:22-139.178.68.195:41914.service - OpenSSH per-connection server daemon (139.178.68.195:41914). Dec 16 16:15:32.066140 sshd[1861]: Accepted publickey for core from 139.178.68.195 port 41914 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:15:32.068319 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:15:32.077284 systemd-logind[1575]: New session 9 of user core. Dec 16 16:15:32.084304 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 16:15:32.600865 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 16:15:32.601412 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 16:15:33.779150 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 16:15:33.815165 (dockerd)[1883]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 16:15:34.412107 dockerd[1883]: time="2025-12-16T16:15:34.411392509Z" level=info msg="Starting up" Dec 16 16:15:34.415292 dockerd[1883]: time="2025-12-16T16:15:34.415232179Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 16:15:34.535426 dockerd[1883]: time="2025-12-16T16:15:34.535278285Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 16:15:34.610523 dockerd[1883]: time="2025-12-16T16:15:34.610402292Z" level=info msg="Loading containers: start." Dec 16 16:15:34.634118 kernel: Initializing XFRM netlink socket Dec 16 16:15:35.007984 systemd-networkd[1483]: docker0: Link UP Dec 16 16:15:35.013659 dockerd[1883]: time="2025-12-16T16:15:35.013511377Z" level=info msg="Loading containers: done." 
Dec 16 16:15:35.047093 dockerd[1883]: time="2025-12-16T16:15:35.045305975Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 16:15:35.047093 dockerd[1883]: time="2025-12-16T16:15:35.045445418Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 16:15:35.047093 dockerd[1883]: time="2025-12-16T16:15:35.045589905Z" level=info msg="Initializing buildkit" Dec 16 16:15:35.046389 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2117856359-merged.mount: Deactivated successfully. Dec 16 16:15:35.120396 dockerd[1883]: time="2025-12-16T16:15:35.120327302Z" level=info msg="Completed buildkit initialization" Dec 16 16:15:35.132136 dockerd[1883]: time="2025-12-16T16:15:35.132026690Z" level=info msg="Daemon has completed initialization" Dec 16 16:15:35.133073 dockerd[1883]: time="2025-12-16T16:15:35.132434907Z" level=info msg="API listen on /run/docker.sock" Dec 16 16:15:35.134176 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 16:15:36.294270 containerd[1594]: time="2025-12-16T16:15:36.294143206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 16:15:37.270871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137805327.mount: Deactivated successfully. Dec 16 16:15:39.302949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 16:15:39.308108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:15:39.727349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:15:39.747647 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 16:15:39.901818 kubelet[2159]: E1216 16:15:39.901688 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 16:15:39.905007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 16:15:39.905339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 16:15:39.906032 systemd[1]: kubelet.service: Consumed 509ms CPU time, 110.3M memory peak. 
Dec 16 16:15:39.996486 containerd[1594]: time="2025-12-16T16:15:39.995793811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:39.998373 containerd[1594]: time="2025-12-16T16:15:39.998331040Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068081" Dec 16 16:15:39.998647 containerd[1594]: time="2025-12-16T16:15:39.998612787Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:40.003698 containerd[1594]: time="2025-12-16T16:15:40.003646023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:40.005321 containerd[1594]: time="2025-12-16T16:15:40.005277871Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 3.710984064s" Dec 16 16:15:40.005538 containerd[1594]: time="2025-12-16T16:15:40.005506648Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Dec 16 16:15:40.006891 containerd[1594]: time="2025-12-16T16:15:40.006840819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 16:15:42.130428 containerd[1594]: time="2025-12-16T16:15:42.130333703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:42.132598 containerd[1594]: time="2025-12-16T16:15:42.132549712Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162448" Dec 16 16:15:42.133852 containerd[1594]: time="2025-12-16T16:15:42.133787206Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:42.136843 containerd[1594]: time="2025-12-16T16:15:42.136775581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:42.139137 containerd[1594]: time="2025-12-16T16:15:42.138305561Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.131417782s" Dec 16 16:15:42.139137 containerd[1594]: time="2025-12-16T16:15:42.138369054Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Dec 16 16:15:42.139833 
containerd[1594]: time="2025-12-16T16:15:42.139750233Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 16:15:43.840376 containerd[1594]: time="2025-12-16T16:15:43.840286202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:43.842424 containerd[1594]: time="2025-12-16T16:15:43.842010337Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725935" Dec 16 16:15:43.843618 containerd[1594]: time="2025-12-16T16:15:43.843574764Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:43.847365 containerd[1594]: time="2025-12-16T16:15:43.847329389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:43.848864 containerd[1594]: time="2025-12-16T16:15:43.848819568Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.70898189s" Dec 16 16:15:43.848941 containerd[1594]: time="2025-12-16T16:15:43.848868933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Dec 16 16:15:43.850505 containerd[1594]: time="2025-12-16T16:15:43.850473821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 16:15:45.969170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2944592768.mount: Deactivated successfully. 
Dec 16 16:15:46.654290 containerd[1594]: time="2025-12-16T16:15:46.654184038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:46.655761 containerd[1594]: time="2025-12-16T16:15:46.655704721Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965301" Dec 16 16:15:46.657486 containerd[1594]: time="2025-12-16T16:15:46.657417092Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:46.661067 containerd[1594]: time="2025-12-16T16:15:46.660315179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:46.661271 containerd[1594]: time="2025-12-16T16:15:46.661235352Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.81061087s" Dec 16 16:15:46.661424 containerd[1594]: time="2025-12-16T16:15:46.661397103Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Dec 16 16:15:46.662525 containerd[1594]: time="2025-12-16T16:15:46.662439674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 16:15:47.338489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551688591.mount: Deactivated successfully. Dec 16 16:15:47.597123 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 16 16:15:49.040980 containerd[1594]: time="2025-12-16T16:15:49.040903099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:49.042529 containerd[1594]: time="2025-12-16T16:15:49.042379288Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Dec 16 16:15:49.043495 containerd[1594]: time="2025-12-16T16:15:49.043391226Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:49.048821 containerd[1594]: time="2025-12-16T16:15:49.047019381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:49.048821 containerd[1594]: time="2025-12-16T16:15:49.048632780Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.385989255s" Dec 16 16:15:49.048821 containerd[1594]: time="2025-12-16T16:15:49.048673485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Dec 16 16:15:49.049804 containerd[1594]: time="2025-12-16T16:15:49.049754877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 16:15:49.727966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700730942.mount: Deactivated successfully. 
Dec 16 16:15:49.735516 containerd[1594]: time="2025-12-16T16:15:49.735379484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:49.737336 containerd[1594]: time="2025-12-16T16:15:49.737056314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Dec 16 16:15:49.738557 containerd[1594]: time="2025-12-16T16:15:49.738484179Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:49.741406 containerd[1594]: time="2025-12-16T16:15:49.741371397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:49.743026 containerd[1594]: time="2025-12-16T16:15:49.742362837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 692.553164ms" Dec 16 16:15:49.743026 containerd[1594]: time="2025-12-16T16:15:49.742410283Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Dec 16 16:15:49.743523 containerd[1594]: time="2025-12-16T16:15:49.743484741Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 16:15:50.052231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 16:15:50.056343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:15:50.266971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:15:50.279797 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 16:15:50.358370 kubelet[2249]: E1216 16:15:50.358126 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 16:15:50.361885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 16:15:50.362209 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 16:15:50.363136 systemd[1]: kubelet.service: Consumed 245ms CPU time, 108M memory peak. Dec 16 16:15:50.868489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263715003.mount: Deactivated successfully. 
Dec 16 16:15:55.865317 containerd[1594]: time="2025-12-16T16:15:55.865193203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:55.867436 containerd[1594]: time="2025-12-16T16:15:55.867069686Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166822" Dec 16 16:15:55.868331 containerd[1594]: time="2025-12-16T16:15:55.868286797Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:55.872710 containerd[1594]: time="2025-12-16T16:15:55.872669869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:15:55.874949 containerd[1594]: time="2025-12-16T16:15:55.874903128Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 6.131376463s" Dec 16 16:15:55.875075 containerd[1594]: time="2025-12-16T16:15:55.874972800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 16:15:59.717169 update_engine[1578]: I20251216 16:15:59.716204 1578 update_attempter.cc:509] Updating boot flags... Dec 16 16:16:00.172591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:16:00.173073 systemd[1]: kubelet.service: Consumed 245ms CPU time, 108M memory peak. Dec 16 16:16:00.182826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:16:00.219238 systemd[1]: Reload requested from client PID 2353 ('systemctl') (unit session-9.scope)... Dec 16 16:16:00.219284 systemd[1]: Reloading... Dec 16 16:16:00.454460 zram_generator::config[2404]: No configuration found. Dec 16 16:16:00.783115 systemd[1]: Reloading finished in 563 ms. Dec 16 16:16:00.862847 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 16:16:00.862984 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 16:16:00.863669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:16:00.863774 systemd[1]: kubelet.service: Consumed 183ms CPU time, 97.5M memory peak. Dec 16 16:16:00.867355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:16:01.255965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:16:01.273700 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 16:16:01.373255 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 16:16:01.373255 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 16:16:01.375989 kubelet[2465]: I1216 16:16:01.375347 2465 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 16:16:02.046167 kubelet[2465]: I1216 16:16:02.046098 2465 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 16:16:02.046541 kubelet[2465]: I1216 16:16:02.046408 2465 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 16:16:02.048536 kubelet[2465]: I1216 16:16:02.048507 2465 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 16:16:02.049897 kubelet[2465]: I1216 16:16:02.048806 2465 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 16:16:02.049897 kubelet[2465]: I1216 16:16:02.049170 2465 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 16:16:02.065116 kubelet[2465]: I1216 16:16:02.065089 2465 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 16:16:02.075076 kubelet[2465]: E1216 16:16:02.074097 2465 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.59.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 16:16:02.094373 kubelet[2465]: I1216 16:16:02.094174 2465 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 16:16:02.107760 kubelet[2465]: I1216 16:16:02.107684 2465 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 16:16:02.110357 kubelet[2465]: I1216 16:16:02.110294 2465 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 16:16:02.112060 kubelet[2465]: I1216 16:16:02.110350 2465 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-899vz.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 16:16:02.112421 kubelet[2465]: I1216 16:16:02.112130 2465 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 16:16:02.112421 kubelet[2465]: I1216 16:16:02.112155 2465 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 16:16:02.112421 kubelet[2465]: I1216 16:16:02.112360 2465 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 16:16:02.115335 kubelet[2465]: I1216 16:16:02.115295 2465 state_mem.go:36] "Initialized new in-memory state store" Dec 16 16:16:02.116548 kubelet[2465]: I1216 16:16:02.116517 2465 kubelet.go:475] "Attempting to sync node with API server" Dec 16 16:16:02.116650 kubelet[2465]: I1216 16:16:02.116551 2465 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 16:16:02.118471 kubelet[2465]: I1216 16:16:02.118357 2465 kubelet.go:387] "Adding apiserver pod source" Dec 16 16:16:02.118471 kubelet[2465]: I1216 16:16:02.118420 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 16:16:02.122062 kubelet[2465]: E1216 16:16:02.121440 2465 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.59.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-899vz.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 16:16:02.122453 kubelet[2465]: E1216 16:16:02.122417 2465 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.230.59.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 16:16:02.123109 kubelet[2465]: I1216 16:16:02.123080 2465 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 16:16:02.127070 kubelet[2465]: I1216 16:16:02.127015 2465 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 16:16:02.127154 kubelet[2465]: I1216 16:16:02.127098 2465 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 16:16:02.131195 kubelet[2465]: W1216 16:16:02.131168 2465 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 16:16:02.148958 kubelet[2465]: I1216 16:16:02.148927 2465 server.go:1262] "Started kubelet" Dec 16 16:16:02.154899 kubelet[2465]: I1216 16:16:02.154336 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 16:16:02.163364 kubelet[2465]: E1216 16:16:02.157815 2465 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.59.10:6443/api/v1/namespaces/default/events\": dial tcp 10.230.59.10:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-899vz.gb1.brightbox.com.1881be4be91e6b30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-899vz.gb1.brightbox.com,UID:srv-899vz.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-899vz.gb1.brightbox.com,},FirstTimestamp:2025-12-16 16:16:02.148854576 +0000 UTC m=+0.865547275,LastTimestamp:2025-12-16 16:16:02.148854576 +0000 UTC m=+0.865547275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-899vz.gb1.brightbox.com,}" Dec 16 16:16:02.164736 kubelet[2465]: I1216 16:16:02.164629 2465 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 16:16:02.165086 kubelet[2465]: I1216 16:16:02.165013 2465 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 16:16:02.185687 kubelet[2465]: E1216 16:16:02.185614 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.59.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-899vz.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.59.10:6443: connect: connection refused" interval="200ms" Dec 16 16:16:02.185687 kubelet[2465]: E1216 16:16:02.167105 2465 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-899vz.gb1.brightbox.com\" not found" Dec 16 16:16:02.185687 kubelet[2465]: I1216 16:16:02.167350 2465 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 16:16:02.185928 kubelet[2465]: I1216 16:16:02.185735 2465 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 16:16:02.186764 kubelet[2465]: I1216 16:16:02.186028 2465 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 
16 16:16:02.188259 kubelet[2465]: I1216 16:16:02.173403 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 16:16:02.188259 kubelet[2465]: E1216 16:16:02.187756 2465 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.59.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 16:16:02.188259 kubelet[2465]: I1216 16:16:02.188025 2465 server.go:310] "Adding debug handlers to kubelet server" Dec 16 16:16:02.189168 kubelet[2465]: I1216 16:16:02.189141 2465 factory.go:223] Registration of the systemd container factory successfully Dec 16 16:16:02.189420 kubelet[2465]: I1216 16:16:02.189392 2465 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 16:16:02.191373 kubelet[2465]: I1216 16:16:02.165274 2465 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 16:16:02.191665 kubelet[2465]: I1216 16:16:02.191640 2465 reconciler.go:29] "Reconciler: start to sync state" Dec 16 16:16:02.193596 kubelet[2465]: I1216 16:16:02.192832 2465 factory.go:223] Registration of the containerd container factory successfully Dec 16 16:16:02.215971 kubelet[2465]: E1216 16:16:02.215943 2465 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 16:16:02.219337 kubelet[2465]: I1216 16:16:02.219264 2465 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 16:16:02.221022 kubelet[2465]: I1216 16:16:02.220996 2465 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 16:16:02.221237 kubelet[2465]: I1216 16:16:02.221216 2465 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 16:16:02.221456 kubelet[2465]: I1216 16:16:02.221420 2465 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 16:16:02.221680 kubelet[2465]: E1216 16:16:02.221647 2465 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 16:16:02.237436 kubelet[2465]: E1216 16:16:02.237345 2465 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.59.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 16:16:02.243096 kubelet[2465]: I1216 16:16:02.242899 2465 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 16:16:02.243096 kubelet[2465]: I1216 16:16:02.242949 2465 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 16:16:02.243096 kubelet[2465]: I1216 16:16:02.242988 2465 state_mem.go:36] "Initialized new in-memory state store" Dec 16 16:16:02.244797 kubelet[2465]: I1216 16:16:02.244769 2465 policy_none.go:49] "None policy: Start" Dec 16 16:16:02.244909 kubelet[2465]: I1216 16:16:02.244824 2465 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 16:16:02.244909 kubelet[2465]: I1216 16:16:02.244854 2465 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 16:16:02.246184 kubelet[2465]: I1216 16:16:02.246149 2465 policy_none.go:47] "Start" Dec 16 16:16:02.254325 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 16:16:02.286303 kubelet[2465]: E1216 16:16:02.286253 2465 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-899vz.gb1.brightbox.com\" not found" Dec 16 16:16:02.286506 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 16:16:02.292695 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 16:16:02.306723 kubelet[2465]: E1216 16:16:02.306638 2465 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 16:16:02.307003 kubelet[2465]: I1216 16:16:02.306970 2465 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 16:16:02.307121 kubelet[2465]: I1216 16:16:02.307005 2465 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 16:16:02.314084 kubelet[2465]: I1216 16:16:02.312896 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 16:16:02.314715 kubelet[2465]: E1216 16:16:02.314349 2465 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 16:16:02.314939 kubelet[2465]: E1216 16:16:02.314884 2465 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-899vz.gb1.brightbox.com\" not found" Dec 16 16:16:02.348914 systemd[1]: Created slice kubepods-burstable-podd1dddcc10effa2348b305248fa7a58d2.slice - libcontainer container kubepods-burstable-podd1dddcc10effa2348b305248fa7a58d2.slice. 
Dec 16 16:16:02.370703 kubelet[2465]: E1216 16:16:02.370646 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.378698 systemd[1]: Created slice kubepods-burstable-pod0af56e037925817434fb9a28da637c33.slice - libcontainer container kubepods-burstable-pod0af56e037925817434fb9a28da637c33.slice. Dec 16 16:16:02.384514 kubelet[2465]: E1216 16:16:02.384478 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.388928 kubelet[2465]: E1216 16:16:02.388682 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.59.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-899vz.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.59.10:6443: connect: connection refused" interval="400ms" Dec 16 16:16:02.390373 systemd[1]: Created slice kubepods-burstable-pod33c2ec0b96b3d2e6de18b51a7afcd519.slice - libcontainer container kubepods-burstable-pod33c2ec0b96b3d2e6de18b51a7afcd519.slice. Dec 16 16:16:02.394080 kubelet[2465]: E1216 16:16:02.394018 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.410549 kubelet[2465]: I1216 16:16:02.410519 2465 kubelet_node_status.go:75] "Attempting to register node" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.411005 kubelet[2465]: E1216 16:16:02.410957 2465 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.59.10:6443/api/v1/nodes\": dial tcp 10.230.59.10:6443: connect: connection refused" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.493227 kubelet[2465]: I1216 16:16:02.493026 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0af56e037925817434fb9a28da637c33-ca-certs\") pod \"kube-apiserver-srv-899vz.gb1.brightbox.com\" (UID: \"0af56e037925817434fb9a28da637c33\") " pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.493227 kubelet[2465]: I1216 16:16:02.493139 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0af56e037925817434fb9a28da637c33-k8s-certs\") pod \"kube-apiserver-srv-899vz.gb1.brightbox.com\" (UID: \"0af56e037925817434fb9a28da637c33\") " pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.493227 kubelet[2465]: I1216 16:16:02.493179 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-k8s-certs\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: \"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.493227 kubelet[2465]: I1216 16:16:02.493213 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: 
\"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.493679 kubelet[2465]: I1216 16:16:02.493252 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0af56e037925817434fb9a28da637c33-usr-share-ca-certificates\") pod \"kube-apiserver-srv-899vz.gb1.brightbox.com\" (UID: \"0af56e037925817434fb9a28da637c33\") " pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.493679 kubelet[2465]: I1216 16:16:02.493278 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-ca-certs\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: \"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.493679 kubelet[2465]: I1216 16:16:02.493304 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-flexvolume-dir\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: \"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.493679 kubelet[2465]: I1216 16:16:02.493332 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-kubeconfig\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: \"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.493679 kubelet[2465]: I1216 16:16:02.493359 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33c2ec0b96b3d2e6de18b51a7afcd519-kubeconfig\") pod \"kube-scheduler-srv-899vz.gb1.brightbox.com\" (UID: \"33c2ec0b96b3d2e6de18b51a7afcd519\") " pod="kube-system/kube-scheduler-srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.614389 kubelet[2465]: I1216 16:16:02.614208 2465 kubelet_node_status.go:75] "Attempting to register node" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.615489 kubelet[2465]: E1216 16:16:02.615446 2465 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.59.10:6443/api/v1/nodes\": dial tcp 10.230.59.10:6443: connect: connection refused" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:02.689170 containerd[1594]: time="2025-12-16T16:16:02.688785145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-899vz.gb1.brightbox.com,Uid:d1dddcc10effa2348b305248fa7a58d2,Namespace:kube-system,Attempt:0,}" Dec 16 16:16:02.703074 containerd[1594]: time="2025-12-16T16:16:02.702389845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-899vz.gb1.brightbox.com,Uid:33c2ec0b96b3d2e6de18b51a7afcd519,Namespace:kube-system,Attempt:0,}" Dec 16 16:16:02.703074 containerd[1594]: time="2025-12-16T16:16:02.702691438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-899vz.gb1.brightbox.com,Uid:0af56e037925817434fb9a28da637c33,Namespace:kube-system,Attempt:0,}" Dec 16 16:16:02.789704 kubelet[2465]: E1216 
16:16:02.789648 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.59.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-899vz.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.59.10:6443: connect: connection refused" interval="800ms" Dec 16 16:16:03.018595 kubelet[2465]: I1216 16:16:03.018445 2465 kubelet_node_status.go:75] "Attempting to register node" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:03.019254 kubelet[2465]: E1216 16:16:03.019217 2465 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.59.10:6443/api/v1/nodes\": dial tcp 10.230.59.10:6443: connect: connection refused" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:03.350933 kubelet[2465]: E1216 16:16:03.350803 2465 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.59.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 16:16:03.438677 kubelet[2465]: E1216 16:16:03.438527 2465 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.59.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 16:16:03.447009 kubelet[2465]: E1216 16:16:03.446961 2465 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.59.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 16:16:03.481729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738287050.mount: Deactivated successfully. 
Dec 16 16:16:03.488249 containerd[1594]: time="2025-12-16T16:16:03.488166788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 16:16:03.490530 containerd[1594]: time="2025-12-16T16:16:03.490461476Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 16:16:03.491992 containerd[1594]: time="2025-12-16T16:16:03.491919756Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 16:16:03.493872 containerd[1594]: time="2025-12-16T16:16:03.493790403Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Dec 16 16:16:03.495642 containerd[1594]: time="2025-12-16T16:16:03.495578305Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 16:16:03.499465 containerd[1594]: time="2025-12-16T16:16:03.499128870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 16:16:03.499465 containerd[1594]: time="2025-12-16T16:16:03.499279860Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 16:16:03.502790 containerd[1594]: time="2025-12-16T16:16:03.502743366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 16:16:03.504073 containerd[1594]: time="2025-12-16T16:16:03.504010207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 796.360989ms" Dec 16 16:16:03.505820 containerd[1594]: time="2025-12-16T16:16:03.505707135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 790.513595ms" Dec 16 16:16:03.506404 containerd[1594]: time="2025-12-16T16:16:03.506357772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 791.200287ms" Dec 16 16:16:03.593341 kubelet[2465]: E1216 16:16:03.592222 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.59.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-899vz.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.59.10:6443: connect: connection refused" 
interval="1.6s" Dec 16 16:16:03.634800 kubelet[2465]: E1216 16:16:03.629338 2465 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.59.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-899vz.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 16:16:03.656206 containerd[1594]: time="2025-12-16T16:16:03.656017850Z" level=info msg="connecting to shim cdad38a7e37eddcaea92890ba716d137008f5138199971fcf563bf2bc357284e" address="unix:///run/containerd/s/aa977928f51d6922c931c2441eb2faae69453076b248ada38652bcb0ea2578cb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:16:03.666086 containerd[1594]: time="2025-12-16T16:16:03.665998841Z" level=info msg="connecting to shim c48dbf312603b921d391460cc392633ca539a062252aa0673316d5469d87e972" address="unix:///run/containerd/s/eb14c43db8cbd1d910e5af5b848adc940dc022ae8ccdecf94f1e06b759301fc8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:16:03.680389 containerd[1594]: time="2025-12-16T16:16:03.680320973Z" level=info msg="connecting to shim 6f071aff5866a968c367a254523551ca097699916bec1bba0993957e30747a86" address="unix:///run/containerd/s/9d17cf6c0c8dd68de70622671c64552a5fbdce31707491c7b44706080854430c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:16:03.806286 systemd[1]: Started cri-containerd-cdad38a7e37eddcaea92890ba716d137008f5138199971fcf563bf2bc357284e.scope - libcontainer container cdad38a7e37eddcaea92890ba716d137008f5138199971fcf563bf2bc357284e. Dec 16 16:16:03.823288 systemd[1]: Started cri-containerd-6f071aff5866a968c367a254523551ca097699916bec1bba0993957e30747a86.scope - libcontainer container 6f071aff5866a968c367a254523551ca097699916bec1bba0993957e30747a86. Dec 16 16:16:03.825916 kubelet[2465]: I1216 16:16:03.825733 2465 kubelet_node_status.go:75] "Attempting to register node" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:03.826354 kubelet[2465]: E1216 16:16:03.826314 2465 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.59.10:6443/api/v1/nodes\": dial tcp 10.230.59.10:6443: connect: connection refused" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:03.828180 systemd[1]: Started cri-containerd-c48dbf312603b921d391460cc392633ca539a062252aa0673316d5469d87e972.scope - libcontainer container c48dbf312603b921d391460cc392633ca539a062252aa0673316d5469d87e972. 
Dec 16 16:16:03.993444 containerd[1594]: time="2025-12-16T16:16:03.993183267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-899vz.gb1.brightbox.com,Uid:0af56e037925817434fb9a28da637c33,Namespace:kube-system,Attempt:0,} returns sandbox id \"c48dbf312603b921d391460cc392633ca539a062252aa0673316d5469d87e972\"" Dec 16 16:16:03.998867 containerd[1594]: time="2025-12-16T16:16:03.998468267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-899vz.gb1.brightbox.com,Uid:d1dddcc10effa2348b305248fa7a58d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdad38a7e37eddcaea92890ba716d137008f5138199971fcf563bf2bc357284e\"" Dec 16 16:16:04.011597 containerd[1594]: time="2025-12-16T16:16:04.011516254Z" level=info msg="CreateContainer within sandbox \"cdad38a7e37eddcaea92890ba716d137008f5138199971fcf563bf2bc357284e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 16:16:04.012838 containerd[1594]: time="2025-12-16T16:16:04.012640156Z" level=info msg="CreateContainer within sandbox \"c48dbf312603b921d391460cc392633ca539a062252aa0673316d5469d87e972\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 16:16:04.024413 containerd[1594]: time="2025-12-16T16:16:04.023090127Z" level=info msg="Container 9087afbaa5fe0aea876cc55c3794e40518930aeb12206d4020d07439a80f4b35: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:04.033911 containerd[1594]: time="2025-12-16T16:16:04.033834188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-899vz.gb1.brightbox.com,Uid:33c2ec0b96b3d2e6de18b51a7afcd519,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f071aff5866a968c367a254523551ca097699916bec1bba0993957e30747a86\"" Dec 16 16:16:04.042304 containerd[1594]: time="2025-12-16T16:16:04.042247876Z" level=info msg="CreateContainer within sandbox \"6f071aff5866a968c367a254523551ca097699916bec1bba0993957e30747a86\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 16:16:04.045663 containerd[1594]: time="2025-12-16T16:16:04.045623989Z" level=info msg="CreateContainer within sandbox \"c48dbf312603b921d391460cc392633ca539a062252aa0673316d5469d87e972\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9087afbaa5fe0aea876cc55c3794e40518930aeb12206d4020d07439a80f4b35\"" Dec 16 16:16:04.046628 containerd[1594]: time="2025-12-16T16:16:04.046582789Z" level=info msg="StartContainer for \"9087afbaa5fe0aea876cc55c3794e40518930aeb12206d4020d07439a80f4b35\"" Dec 16 16:16:04.053078 containerd[1594]: time="2025-12-16T16:16:04.053015405Z" level=info msg="connecting to shim 9087afbaa5fe0aea876cc55c3794e40518930aeb12206d4020d07439a80f4b35" address="unix:///run/containerd/s/eb14c43db8cbd1d910e5af5b848adc940dc022ae8ccdecf94f1e06b759301fc8" protocol=ttrpc version=3 Dec 16 16:16:04.059136 containerd[1594]: time="2025-12-16T16:16:04.059090141Z" level=info msg="Container cc1caa032bd4d25ca4fa718883c07fba52cbb22209f79c889ea7616893f6adc9: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:04.065295 containerd[1594]: time="2025-12-16T16:16:04.064359080Z" level=info msg="Container e32389471183409662b9ddc185749addf87e559ff993b547fab705f9cc57c130: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:04.069943 containerd[1594]: time="2025-12-16T16:16:04.069898836Z" level=info msg="CreateContainer within sandbox \"cdad38a7e37eddcaea92890ba716d137008f5138199971fcf563bf2bc357284e\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cc1caa032bd4d25ca4fa718883c07fba52cbb22209f79c889ea7616893f6adc9\"" Dec 16 16:16:04.072522 containerd[1594]: time="2025-12-16T16:16:04.072476427Z" level=info msg="StartContainer for \"cc1caa032bd4d25ca4fa718883c07fba52cbb22209f79c889ea7616893f6adc9\"" Dec 16 16:16:04.078575 containerd[1594]: time="2025-12-16T16:16:04.078366031Z" level=info msg="connecting to shim cc1caa032bd4d25ca4fa718883c07fba52cbb22209f79c889ea7616893f6adc9" address="unix:///run/containerd/s/aa977928f51d6922c931c2441eb2faae69453076b248ada38652bcb0ea2578cb" protocol=ttrpc version=3 Dec 16 16:16:04.081121 containerd[1594]: time="2025-12-16T16:16:04.081080503Z" level=info msg="CreateContainer within sandbox \"6f071aff5866a968c367a254523551ca097699916bec1bba0993957e30747a86\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e32389471183409662b9ddc185749addf87e559ff993b547fab705f9cc57c130\"" Dec 16 16:16:04.084252 containerd[1594]: time="2025-12-16T16:16:04.084122921Z" level=info msg="StartContainer for \"e32389471183409662b9ddc185749addf87e559ff993b547fab705f9cc57c130\"" Dec 16 16:16:04.087370 systemd[1]: Started cri-containerd-9087afbaa5fe0aea876cc55c3794e40518930aeb12206d4020d07439a80f4b35.scope - libcontainer container 9087afbaa5fe0aea876cc55c3794e40518930aeb12206d4020d07439a80f4b35. Dec 16 16:16:04.091062 containerd[1594]: time="2025-12-16T16:16:04.090291697Z" level=info msg="connecting to shim e32389471183409662b9ddc185749addf87e559ff993b547fab705f9cc57c130" address="unix:///run/containerd/s/9d17cf6c0c8dd68de70622671c64552a5fbdce31707491c7b44706080854430c" protocol=ttrpc version=3 Dec 16 16:16:04.126346 systemd[1]: Started cri-containerd-cc1caa032bd4d25ca4fa718883c07fba52cbb22209f79c889ea7616893f6adc9.scope - libcontainer container cc1caa032bd4d25ca4fa718883c07fba52cbb22209f79c889ea7616893f6adc9. Dec 16 16:16:04.140361 systemd[1]: Started cri-containerd-e32389471183409662b9ddc185749addf87e559ff993b547fab705f9cc57c130.scope - libcontainer container e32389471183409662b9ddc185749addf87e559ff993b547fab705f9cc57c130. 
Dec 16 16:16:04.240541 kubelet[2465]: E1216 16:16:04.240442 2465 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.59.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.59.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 16:16:04.264895 containerd[1594]: time="2025-12-16T16:16:04.264575298Z" level=info msg="StartContainer for \"9087afbaa5fe0aea876cc55c3794e40518930aeb12206d4020d07439a80f4b35\" returns successfully" Dec 16 16:16:04.339514 containerd[1594]: time="2025-12-16T16:16:04.339395974Z" level=info msg="StartContainer for \"cc1caa032bd4d25ca4fa718883c07fba52cbb22209f79c889ea7616893f6adc9\" returns successfully" Dec 16 16:16:04.353251 containerd[1594]: time="2025-12-16T16:16:04.353214631Z" level=info msg="StartContainer for \"e32389471183409662b9ddc185749addf87e559ff993b547fab705f9cc57c130\" returns successfully" Dec 16 16:16:04.719641 kubelet[2465]: E1216 16:16:04.719467 2465 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.59.10:6443/api/v1/namespaces/default/events\": dial tcp 10.230.59.10:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-899vz.gb1.brightbox.com.1881be4be91e6b30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-899vz.gb1.brightbox.com,UID:srv-899vz.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-899vz.gb1.brightbox.com,},FirstTimestamp:2025-12-16 16:16:02.148854576 +0000 UTC m=+0.865547275,LastTimestamp:2025-12-16 16:16:02.148854576 +0000 UTC m=+0.865547275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-899vz.gb1.brightbox.com,}" Dec 16 16:16:05.281153 kubelet[2465]: E1216 16:16:05.281105 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:05.284051 kubelet[2465]: E1216 16:16:05.284009 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:05.290563 kubelet[2465]: E1216 16:16:05.290538 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:05.429848 kubelet[2465]: I1216 16:16:05.429775 2465 kubelet_node_status.go:75] "Attempting to register node" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:06.297084 kubelet[2465]: E1216 16:16:06.296558 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:06.299697 kubelet[2465]: E1216 16:16:06.298959 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:06.299697 kubelet[2465]: E1216 16:16:06.299474 2465 kubelet.go:3215] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:07.299725 kubelet[2465]: E1216 16:16:07.299609 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:07.301721 kubelet[2465]: E1216 16:16:07.301678 2465 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:07.855475 kubelet[2465]: E1216 16:16:07.855408 2465 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-899vz.gb1.brightbox.com\" not found" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:07.928086 kubelet[2465]: I1216 16:16:07.927674 2465 kubelet_node_status.go:78] "Successfully registered node" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:07.984608 kubelet[2465]: I1216 16:16:07.984537 2465 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:07.992375 kubelet[2465]: E1216 16:16:07.992011 2465 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-899vz.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:07.992375 kubelet[2465]: I1216 16:16:07.992331 2465 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:07.998066 kubelet[2465]: E1216 16:16:07.997986 2465 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:07.998066 kubelet[2465]: I1216 16:16:07.998055 2465 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-899vz.gb1.brightbox.com" Dec 16 16:16:08.003287 kubelet[2465]: E1216 16:16:08.001249 2465 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-899vz.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-899vz.gb1.brightbox.com" Dec 16 16:16:08.129970 kubelet[2465]: I1216 16:16:08.129324 2465 apiserver.go:52] "Watching apiserver" Dec 16 16:16:08.192843 kubelet[2465]: I1216 16:16:08.192693 2465 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 16:16:10.016985 kubelet[2465]: I1216 16:16:10.016869 2465 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:10.025977 kubelet[2465]: I1216 16:16:10.025912 2465 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 16:16:10.285663 systemd[1]: Reload requested from client PID 2750 ('systemctl') (unit session-9.scope)... Dec 16 16:16:10.285692 systemd[1]: Reloading... Dec 16 16:16:10.516317 zram_generator::config[2799]: No configuration found. Dec 16 16:16:10.964932 systemd[1]: Reloading finished in 678 ms. 
Dec 16 16:16:11.031440 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:16:11.044806 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 16:16:11.045455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:16:11.045679 systemd[1]: kubelet.service: Consumed 1.578s CPU time, 123.7M memory peak. Dec 16 16:16:11.052619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:16:11.391896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:16:11.404566 (kubelet)[2860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 16:16:11.498954 kubelet[2860]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 16:16:11.501104 kubelet[2860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 16:16:11.501104 kubelet[2860]: I1216 16:16:11.499913 2860 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 16:16:11.512332 kubelet[2860]: I1216 16:16:11.512159 2860 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 16:16:11.512626 kubelet[2860]: I1216 16:16:11.512606 2860 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 16:16:11.512772 kubelet[2860]: I1216 16:16:11.512754 2860 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 16:16:11.512886 kubelet[2860]: I1216 16:16:11.512867 2860 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 16:16:11.515067 kubelet[2860]: I1216 16:16:11.513543 2860 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 16:16:11.515537 kubelet[2860]: I1216 16:16:11.515512 2860 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 16:16:11.522204 kubelet[2860]: I1216 16:16:11.522150 2860 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 16:16:11.535522 kubelet[2860]: I1216 16:16:11.534879 2860 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 16:16:11.553900 kubelet[2860]: I1216 16:16:11.553797 2860 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 16:16:11.554988 kubelet[2860]: I1216 16:16:11.554528 2860 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 16:16:11.554988 kubelet[2860]: I1216 16:16:11.554573 2860 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-899vz.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 16:16:11.554988 kubelet[2860]: I1216 16:16:11.554809 2860 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 16:16:11.554988 kubelet[2860]: I1216 16:16:11.554828 2860 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 16:16:11.555418 kubelet[2860]: I1216 16:16:11.554869 2860 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 16:16:11.556467 kubelet[2860]: I1216 16:16:11.556236 2860 state_mem.go:36] "Initialized new in-memory state store" Dec 16 16:16:11.557074 kubelet[2860]: I1216 16:16:11.556556 2860 kubelet.go:475] "Attempting to sync node with API server" Dec 16 16:16:11.557074 kubelet[2860]: I1216 16:16:11.556588 2860 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 16:16:11.557074 kubelet[2860]: I1216 16:16:11.556624 2860 kubelet.go:387] "Adding apiserver pod source" Dec 16 16:16:11.557074 kubelet[2860]: I1216 16:16:11.556671 2860 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 16:16:11.562755 kubelet[2860]: I1216 16:16:11.562703 2860 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 16:16:11.563515 kubelet[2860]: I1216 16:16:11.563484 2860 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 16:16:11.563606 kubelet[2860]: I1216 16:16:11.563530 2860 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 
16:16:11.566265 sudo[2875]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 16:16:11.567868 sudo[2875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 16:16:11.571591 kubelet[2860]: I1216 16:16:11.571565 2860 server.go:1262] "Started kubelet" Dec 16 16:16:11.578133 kubelet[2860]: I1216 16:16:11.577678 2860 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 16:16:11.579749 kubelet[2860]: I1216 16:16:11.579720 2860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 16:16:11.588723 kubelet[2860]: I1216 16:16:11.586402 2860 server.go:310] "Adding debug handlers to kubelet server" Dec 16 16:16:11.595074 kubelet[2860]: I1216 16:16:11.594462 2860 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 16:16:11.598646 kubelet[2860]: I1216 16:16:11.597898 2860 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 16:16:11.599087 kubelet[2860]: I1216 16:16:11.598878 2860 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 16:16:11.602695 kubelet[2860]: I1216 16:16:11.602662 2860 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 16:16:11.604217 kubelet[2860]: E1216 16:16:11.604185 2860 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-899vz.gb1.brightbox.com\" not found" Dec 16 16:16:11.607087 kubelet[2860]: I1216 16:16:11.606991 2860 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 16:16:11.608351 kubelet[2860]: I1216 16:16:11.607962 2860 reconciler.go:29] "Reconciler: start to sync state" Dec 16 16:16:11.617845 kubelet[2860]: I1216 16:16:11.617803 2860 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 16:16:11.651735 kubelet[2860]: E1216 16:16:11.651212 2860 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 16:16:11.651735 kubelet[2860]: I1216 16:16:11.651245 2860 factory.go:223] Registration of the containerd container factory successfully Dec 16 16:16:11.651735 kubelet[2860]: I1216 16:16:11.651271 2860 factory.go:223] Registration of the systemd container factory successfully Dec 16 16:16:11.651735 kubelet[2860]: I1216 16:16:11.651450 2860 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 16:16:11.726252 kubelet[2860]: I1216 16:16:11.726177 2860 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 16:16:11.743588 kubelet[2860]: I1216 16:16:11.742924 2860 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 16:16:11.746799 kubelet[2860]: I1216 16:16:11.745876 2860 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 16:16:11.746799 kubelet[2860]: I1216 16:16:11.745938 2860 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 16:16:11.746799 kubelet[2860]: E1216 16:16:11.746007 2860 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 16:16:11.835069 kubelet[2860]: I1216 16:16:11.834924 2860 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 16:16:11.835265 kubelet[2860]: I1216 16:16:11.835083 2860 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 16:16:11.836143 kubelet[2860]: I1216 16:16:11.835132 2860 state_mem.go:36] "Initialized new in-memory state store" Dec 16 16:16:11.836536 kubelet[2860]: I1216 16:16:11.836495 2860 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 16:16:11.836633 kubelet[2860]: I1216 16:16:11.836526 2860 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 16:16:11.836633 kubelet[2860]: I1216 16:16:11.836592 2860 policy_none.go:49] "None policy: Start" Dec 16 16:16:11.836738 kubelet[2860]: I1216 16:16:11.836638 2860 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 16:16:11.836738 kubelet[2860]: I1216 16:16:11.836681 2860 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 16:16:11.837780 kubelet[2860]: I1216 16:16:11.836884 2860 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 16:16:11.837780 kubelet[2860]: I1216 16:16:11.836928 2860 policy_none.go:47] "Start" Dec 16 16:16:11.846633 kubelet[2860]: E1216 16:16:11.846554 2860 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 16:16:11.852838 kubelet[2860]: E1216 16:16:11.852797 2860 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 16:16:11.858384 kubelet[2860]: I1216 16:16:11.858118 2860 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 16:16:11.858384 kubelet[2860]: I1216 16:16:11.858162 2860 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 16:16:11.863640 kubelet[2860]: I1216 16:16:11.863303 2860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 16:16:11.871698 kubelet[2860]: E1216 16:16:11.871581 2860 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 16:16:12.003717 kubelet[2860]: I1216 16:16:12.003588 2860 kubelet_node_status.go:75] "Attempting to register node" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.016353 kubelet[2860]: I1216 16:16:12.016310 2860 kubelet_node_status.go:124] "Node was previously registered" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.016475 kubelet[2860]: I1216 16:16:12.016452 2860 kubelet_node_status.go:78] "Successfully registered node" node="srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.049601 kubelet[2860]: I1216 16:16:12.048289 2860 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.049601 kubelet[2860]: I1216 16:16:12.048366 2860 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.049801 kubelet[2860]: I1216 16:16:12.049632 2860 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.068947 kubelet[2860]: I1216 16:16:12.068910 2860 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 16:16:12.072226 kubelet[2860]: I1216 16:16:12.072177 2860 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 16:16:12.073105 kubelet[2860]: I1216 16:16:12.073074 2860 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 16:16:12.073369 kubelet[2860]: E1216 16:16:12.073314 2860 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-899vz.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.113313 kubelet[2860]: I1216 16:16:12.113131 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-kubeconfig\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: \"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.113313 kubelet[2860]: I1216 16:16:12.113315 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0af56e037925817434fb9a28da637c33-ca-certs\") pod \"kube-apiserver-srv-899vz.gb1.brightbox.com\" (UID: \"0af56e037925817434fb9a28da637c33\") " pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.113593 kubelet[2860]: I1216 16:16:12.113352 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0af56e037925817434fb9a28da637c33-k8s-certs\") pod \"kube-apiserver-srv-899vz.gb1.brightbox.com\" (UID: \"0af56e037925817434fb9a28da637c33\") " pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.113593 kubelet[2860]: I1216 16:16:12.113381 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-flexvolume-dir\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: \"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.113593 kubelet[2860]: I1216 16:16:12.113409 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-k8s-certs\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: \"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.113593 kubelet[2860]: I1216 16:16:12.113441 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: \"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.113593 kubelet[2860]: I1216 16:16:12.113477 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33c2ec0b96b3d2e6de18b51a7afcd519-kubeconfig\") pod \"kube-scheduler-srv-899vz.gb1.brightbox.com\" (UID: \"33c2ec0b96b3d2e6de18b51a7afcd519\") " pod="kube-system/kube-scheduler-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.113839 kubelet[2860]: I1216 16:16:12.113505 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0af56e037925817434fb9a28da637c33-usr-share-ca-certificates\") pod \"kube-apiserver-srv-899vz.gb1.brightbox.com\" (UID: \"0af56e037925817434fb9a28da637c33\") " pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.113839 kubelet[2860]: I1216 16:16:12.113531 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1dddcc10effa2348b305248fa7a58d2-ca-certs\") pod \"kube-controller-manager-srv-899vz.gb1.brightbox.com\" (UID: \"d1dddcc10effa2348b305248fa7a58d2\") " pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.343690 sudo[2875]: pam_unix(sudo:session): session closed for user root Dec 16 16:16:12.561026 kubelet[2860]: I1216 16:16:12.560949 2860 apiserver.go:52] "Watching apiserver" Dec 16 16:16:12.619390 kubelet[2860]: I1216 16:16:12.619169 2860 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 16:16:12.788081 kubelet[2860]: I1216 16:16:12.787419 2860 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.793450 kubelet[2860]: I1216 16:16:12.793406 2860 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.814276 kubelet[2860]: I1216 16:16:12.814219 2860 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 16:16:12.815084 kubelet[2860]: I1216 16:16:12.814583 2860 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 16:16:12.815084 kubelet[2860]: E1216 16:16:12.814637 2860 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-899vz.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.815380 kubelet[2860]: E1216 16:16:12.814319 2860 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-899vz.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-899vz.gb1.brightbox.com" Dec 16 16:16:12.852389 kubelet[2860]: I1216 16:16:12.852294 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-899vz.gb1.brightbox.com" podStartSLOduration=2.852247678 podStartE2EDuration="2.852247678s" podCreationTimestamp="2025-12-16 16:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:16:12.836968954 +0000 UTC m=+1.419357766" watchObservedRunningTime="2025-12-16 16:16:12.852247678 +0000 UTC m=+1.434636471" Dec 16 16:16:12.868052 kubelet[2860]: I1216 16:16:12.867090 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-899vz.gb1.brightbox.com" podStartSLOduration=0.867070759 podStartE2EDuration="867.070759ms" podCreationTimestamp="2025-12-16 16:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:16:12.853217166 +0000 UTC m=+1.435606004" watchObservedRunningTime="2025-12-16 16:16:12.867070759 +0000 UTC m=+1.449459573" Dec 16 16:16:12.883311 kubelet[2860]: I1216 16:16:12.883158 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-899vz.gb1.brightbox.com" podStartSLOduration=0.883139036 podStartE2EDuration="883.139036ms" podCreationTimestamp="2025-12-16 16:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:16:12.867534925 +0000 UTC m=+1.449923771" watchObservedRunningTime="2025-12-16 16:16:12.883139036 +0000 UTC m=+1.465527849" Dec 16 16:16:14.829177 sudo[1865]: pam_unix(sudo:session): session closed for user root Dec 16 16:16:14.993153 sshd[1864]: Connection closed by 139.178.68.195 port 41914 Dec 16 16:16:14.996101 sshd-session[1861]: pam_unix(sshd:session): session closed for user core Dec 16 16:16:15.004409 systemd[1]: sshd@6-10.230.59.10:22-139.178.68.195:41914.service: Deactivated successfully. Dec 16 16:16:15.008922 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 16:16:15.011202 systemd[1]: session-9.scope: Consumed 7.698s CPU time, 219.5M memory peak. Dec 16 16:16:15.014747 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. Dec 16 16:16:15.017703 systemd-logind[1575]: Removed session 9. Dec 16 16:16:15.641092 kubelet[2860]: I1216 16:16:15.641017 2860 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 16:16:15.642429 containerd[1594]: time="2025-12-16T16:16:15.642303125Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 16:16:15.642871 kubelet[2860]: I1216 16:16:15.642589 2860 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 16:16:16.448634 kubelet[2860]: I1216 16:16:16.448229 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf74d435-5b09-43fe-b846-dd55ef43c091-kube-proxy\") pod \"kube-proxy-ffjmm\" (UID: \"bf74d435-5b09-43fe-b846-dd55ef43c091\") " pod="kube-system/kube-proxy-ffjmm" Dec 16 16:16:16.448634 kubelet[2860]: I1216 16:16:16.448281 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgcpx\" (UniqueName: \"kubernetes.io/projected/bf74d435-5b09-43fe-b846-dd55ef43c091-kube-api-access-hgcpx\") pod \"kube-proxy-ffjmm\" (UID: \"bf74d435-5b09-43fe-b846-dd55ef43c091\") " pod="kube-system/kube-proxy-ffjmm" Dec 16 16:16:16.448634 kubelet[2860]: I1216 16:16:16.448322 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf74d435-5b09-43fe-b846-dd55ef43c091-xtables-lock\") pod \"kube-proxy-ffjmm\" (UID: \"bf74d435-5b09-43fe-b846-dd55ef43c091\") " pod="kube-system/kube-proxy-ffjmm" Dec 16 16:16:16.448634 kubelet[2860]: I1216 16:16:16.448348 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf74d435-5b09-43fe-b846-dd55ef43c091-lib-modules\") pod \"kube-proxy-ffjmm\" (UID: \"bf74d435-5b09-43fe-b846-dd55ef43c091\") " pod="kube-system/kube-proxy-ffjmm" Dec 16 16:16:16.463179 systemd[1]: Created slice kubepods-besteffort-podbf74d435_5b09_43fe_b846_dd55ef43c091.slice - libcontainer container kubepods-besteffort-podbf74d435_5b09_43fe_b846_dd55ef43c091.slice. Dec 16 16:16:16.489471 systemd[1]: Created slice kubepods-burstable-pod96f3a87d_c857_4a7a_aa8b_4a40191468c4.slice - libcontainer container kubepods-burstable-pod96f3a87d_c857_4a7a_aa8b_4a40191468c4.slice. Dec 16 16:16:16.619322 systemd[1]: Created slice kubepods-besteffort-podadef91e5_15a0_4ce8_a00d_bcff575fd802.slice - libcontainer container kubepods-besteffort-podadef91e5_15a0_4ce8_a00d_bcff575fd802.slice. 
Dec 16 16:16:16.649913 kubelet[2860]: I1216 16:16:16.649854 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-cgroup\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651295 kubelet[2860]: I1216 16:16:16.650814 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-etc-cni-netd\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651295 kubelet[2860]: I1216 16:16:16.650857 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-config-path\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651295 kubelet[2860]: I1216 16:16:16.650890 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-host-proc-sys-kernel\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651295 kubelet[2860]: I1216 16:16:16.650922 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-run\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651295 kubelet[2860]: I1216 16:16:16.650950 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-bpf-maps\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651295 kubelet[2860]: I1216 16:16:16.650973 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-lib-modules\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651819 kubelet[2860]: I1216 16:16:16.650999 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-xtables-lock\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651819 kubelet[2860]: I1216 16:16:16.651028 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cni-path\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651819 kubelet[2860]: I1216 16:16:16.651085 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/96f3a87d-c857-4a7a-aa8b-4a40191468c4-clustermesh-secrets\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651819 kubelet[2860]: I1216 16:16:16.651117 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-hostproc\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651819 kubelet[2860]: I1216 16:16:16.651144 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-host-proc-sys-net\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.651819 kubelet[2860]: I1216 16:16:16.651173 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96f3a87d-c857-4a7a-aa8b-4a40191468c4-hubble-tls\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.652183 kubelet[2860]: I1216 16:16:16.651213 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vxq\" (UniqueName: \"kubernetes.io/projected/96f3a87d-c857-4a7a-aa8b-4a40191468c4-kube-api-access-96vxq\") pod \"cilium-rhk2j\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " pod="kube-system/cilium-rhk2j" Dec 16 16:16:16.752534 kubelet[2860]: I1216 16:16:16.752199 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adef91e5-15a0-4ce8-a00d-bcff575fd802-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-57vjq\" (UID: \"adef91e5-15a0-4ce8-a00d-bcff575fd802\") " pod="kube-system/cilium-operator-6f9c7c5859-57vjq" Dec 16 16:16:16.752750 kubelet[2860]: I1216 16:16:16.752647 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f9rr\" (UniqueName: \"kubernetes.io/projected/adef91e5-15a0-4ce8-a00d-bcff575fd802-kube-api-access-2f9rr\") pod \"cilium-operator-6f9c7c5859-57vjq\" (UID: \"adef91e5-15a0-4ce8-a00d-bcff575fd802\") " pod="kube-system/cilium-operator-6f9c7c5859-57vjq" Dec 16 16:16:16.795954 containerd[1594]: time="2025-12-16T16:16:16.795782172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ffjmm,Uid:bf74d435-5b09-43fe-b846-dd55ef43c091,Namespace:kube-system,Attempt:0,}" Dec 16 16:16:16.803493 containerd[1594]: time="2025-12-16T16:16:16.802913142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhk2j,Uid:96f3a87d-c857-4a7a-aa8b-4a40191468c4,Namespace:kube-system,Attempt:0,}" Dec 16 16:16:16.841810 containerd[1594]: time="2025-12-16T16:16:16.841717404Z" level=info msg="connecting to shim ba8d7687c5519d29f5d64d14dc4d4565bc8e7f696f5e9a3a9ec937c60e04937b" address="unix:///run/containerd/s/12af60d25043c957741079fa564f1269757b47ea6b54cbaa3fcff9eb3263400e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:16:16.843218 containerd[1594]: time="2025-12-16T16:16:16.843182123Z" level=info msg="connecting to shim 37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984" 
address="unix:///run/containerd/s/9975f754ebaddf110af9a93eb59bd8ddb4d5d9e0033545abf9e13a8e6e385daf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:16:16.918459 systemd[1]: Started cri-containerd-ba8d7687c5519d29f5d64d14dc4d4565bc8e7f696f5e9a3a9ec937c60e04937b.scope - libcontainer container ba8d7687c5519d29f5d64d14dc4d4565bc8e7f696f5e9a3a9ec937c60e04937b. Dec 16 16:16:16.932409 containerd[1594]: time="2025-12-16T16:16:16.932328149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-57vjq,Uid:adef91e5-15a0-4ce8-a00d-bcff575fd802,Namespace:kube-system,Attempt:0,}" Dec 16 16:16:16.933298 systemd[1]: Started cri-containerd-37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984.scope - libcontainer container 37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984. Dec 16 16:16:16.992919 containerd[1594]: time="2025-12-16T16:16:16.992234782Z" level=info msg="connecting to shim 3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0" address="unix:///run/containerd/s/9103f1b2b18a61b086832b3accbee12f64c06e7fe2b56f57d8174bad27772f0a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:16:17.031102 containerd[1594]: time="2025-12-16T16:16:17.030912033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ffjmm,Uid:bf74d435-5b09-43fe-b846-dd55ef43c091,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba8d7687c5519d29f5d64d14dc4d4565bc8e7f696f5e9a3a9ec937c60e04937b\"" Dec 16 16:16:17.041103 containerd[1594]: time="2025-12-16T16:16:17.040656204Z" level=info msg="CreateContainer within sandbox \"ba8d7687c5519d29f5d64d14dc4d4565bc8e7f696f5e9a3a9ec937c60e04937b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 16:16:17.047490 containerd[1594]: time="2025-12-16T16:16:17.047450115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhk2j,Uid:96f3a87d-c857-4a7a-aa8b-4a40191468c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\"" Dec 16 16:16:17.054771 containerd[1594]: time="2025-12-16T16:16:17.054718219Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 16:16:17.073788 containerd[1594]: time="2025-12-16T16:16:17.073722295Z" level=info msg="Container 46f201756749539430670510394f2abf63960549c0392e784ea249e9aaea0615: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:17.076336 systemd[1]: Started cri-containerd-3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0.scope - libcontainer container 3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0. 
Dec 16 16:16:17.086870 containerd[1594]: time="2025-12-16T16:16:17.086696407Z" level=info msg="CreateContainer within sandbox \"ba8d7687c5519d29f5d64d14dc4d4565bc8e7f696f5e9a3a9ec937c60e04937b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46f201756749539430670510394f2abf63960549c0392e784ea249e9aaea0615\"" Dec 16 16:16:17.089979 containerd[1594]: time="2025-12-16T16:16:17.089946137Z" level=info msg="StartContainer for \"46f201756749539430670510394f2abf63960549c0392e784ea249e9aaea0615\"" Dec 16 16:16:17.096326 containerd[1594]: time="2025-12-16T16:16:17.096267051Z" level=info msg="connecting to shim 46f201756749539430670510394f2abf63960549c0392e784ea249e9aaea0615" address="unix:///run/containerd/s/12af60d25043c957741079fa564f1269757b47ea6b54cbaa3fcff9eb3263400e" protocol=ttrpc version=3 Dec 16 16:16:17.150634 systemd[1]: Started cri-containerd-46f201756749539430670510394f2abf63960549c0392e784ea249e9aaea0615.scope - libcontainer container 46f201756749539430670510394f2abf63960549c0392e784ea249e9aaea0615. Dec 16 16:16:17.210980 containerd[1594]: time="2025-12-16T16:16:17.210909695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-57vjq,Uid:adef91e5-15a0-4ce8-a00d-bcff575fd802,Namespace:kube-system,Attempt:0,} returns sandbox id \"3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0\"" Dec 16 16:16:17.283399 containerd[1594]: time="2025-12-16T16:16:17.283120907Z" level=info msg="StartContainer for \"46f201756749539430670510394f2abf63960549c0392e784ea249e9aaea0615\" returns successfully" Dec 16 16:16:17.841980 kubelet[2860]: I1216 16:16:17.841628 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ffjmm" podStartSLOduration=1.841600543 podStartE2EDuration="1.841600543s" podCreationTimestamp="2025-12-16 16:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:16:17.841449263 +0000 UTC m=+6.423838099" watchObservedRunningTime="2025-12-16 16:16:17.841600543 +0000 UTC m=+6.423989356" Dec 16 16:16:24.343161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414730004.mount: Deactivated successfully. 
Dec 16 16:16:27.881515 containerd[1594]: time="2025-12-16T16:16:27.881402828Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:16:27.883780 containerd[1594]: time="2025-12-16T16:16:27.883720077Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 16:16:27.884840 containerd[1594]: time="2025-12-16T16:16:27.884782804Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:16:27.887671 containerd[1594]: time="2025-12-16T16:16:27.887632368Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.832854129s" Dec 16 16:16:27.887980 containerd[1594]: time="2025-12-16T16:16:27.887820995Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 16:16:27.892728 containerd[1594]: time="2025-12-16T16:16:27.892171325Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 16:16:27.900854 containerd[1594]: time="2025-12-16T16:16:27.900800025Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 16:16:27.932077 containerd[1594]: time="2025-12-16T16:16:27.930376488Z" level=info msg="Container c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:27.940963 containerd[1594]: time="2025-12-16T16:16:27.940888409Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\"" Dec 16 16:16:27.941842 containerd[1594]: time="2025-12-16T16:16:27.941795317Z" level=info msg="StartContainer for \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\"" Dec 16 16:16:27.945620 containerd[1594]: time="2025-12-16T16:16:27.945497184Z" level=info msg="connecting to shim c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a" address="unix:///run/containerd/s/9975f754ebaddf110af9a93eb59bd8ddb4d5d9e0033545abf9e13a8e6e385daf" protocol=ttrpc version=3 Dec 16 16:16:27.994285 systemd[1]: Started cri-containerd-c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a.scope - libcontainer container c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a. 
Dec 16 16:16:28.071261 containerd[1594]: time="2025-12-16T16:16:28.071189858Z" level=info msg="StartContainer for \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\" returns successfully" Dec 16 16:16:28.092605 systemd[1]: cri-containerd-c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a.scope: Deactivated successfully. Dec 16 16:16:28.139505 containerd[1594]: time="2025-12-16T16:16:28.139154617Z" level=info msg="received container exit event container_id:\"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\" id:\"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\" pid:3286 exited_at:{seconds:1765901788 nanos:99626339}" Dec 16 16:16:28.182216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a-rootfs.mount: Deactivated successfully. Dec 16 16:16:28.867786 containerd[1594]: time="2025-12-16T16:16:28.867699256Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 16:16:28.877285 containerd[1594]: time="2025-12-16T16:16:28.877232994Z" level=info msg="Container e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:28.883606 containerd[1594]: time="2025-12-16T16:16:28.883564529Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\"" Dec 16 16:16:28.886100 containerd[1594]: time="2025-12-16T16:16:28.886065217Z" level=info msg="StartContainer for \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\"" Dec 16 16:16:28.887415 containerd[1594]: time="2025-12-16T16:16:28.887303581Z" level=info msg="connecting to shim e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed" address="unix:///run/containerd/s/9975f754ebaddf110af9a93eb59bd8ddb4d5d9e0033545abf9e13a8e6e385daf" protocol=ttrpc version=3 Dec 16 16:16:28.926237 systemd[1]: Started cri-containerd-e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed.scope - libcontainer container e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed. Dec 16 16:16:28.983390 containerd[1594]: time="2025-12-16T16:16:28.983341187Z" level=info msg="StartContainer for \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\" returns successfully" Dec 16 16:16:29.003249 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 16:16:29.004112 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 16:16:29.004388 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 16:16:29.008341 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 16:16:29.012394 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Dec 16 16:16:29.013643 containerd[1594]: time="2025-12-16T16:16:29.013470007Z" level=info msg="received container exit event container_id:\"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\" id:\"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\" pid:3331 exited_at:{seconds:1765901789 nanos:10753688}" Dec 16 16:16:29.016147 systemd[1]: cri-containerd-e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed.scope: Deactivated successfully. Dec 16 16:16:29.059900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed-rootfs.mount: Deactivated successfully. Dec 16 16:16:29.065569 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 16:16:29.879069 containerd[1594]: time="2025-12-16T16:16:29.878723986Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 16:16:29.897933 containerd[1594]: time="2025-12-16T16:16:29.897883709Z" level=info msg="Container cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:29.906685 containerd[1594]: time="2025-12-16T16:16:29.906626855Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\"" Dec 16 16:16:29.908741 containerd[1594]: time="2025-12-16T16:16:29.908678019Z" level=info msg="StartContainer for \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\"" Dec 16 16:16:29.912259 containerd[1594]: time="2025-12-16T16:16:29.912030095Z" level=info msg="connecting to shim cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724" address="unix:///run/containerd/s/9975f754ebaddf110af9a93eb59bd8ddb4d5d9e0033545abf9e13a8e6e385daf" protocol=ttrpc version=3 Dec 16 16:16:29.951252 systemd[1]: Started cri-containerd-cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724.scope - libcontainer container cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724. Dec 16 16:16:30.046499 containerd[1594]: time="2025-12-16T16:16:30.046352249Z" level=info msg="StartContainer for \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\" returns successfully" Dec 16 16:16:30.052748 systemd[1]: cri-containerd-cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724.scope: Deactivated successfully. Dec 16 16:16:30.058551 containerd[1594]: time="2025-12-16T16:16:30.058480035Z" level=info msg="received container exit event container_id:\"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\" id:\"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\" pid:3379 exited_at:{seconds:1765901790 nanos:58135941}" Dec 16 16:16:30.093593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724-rootfs.mount: Deactivated successfully. 
Dec 16 16:16:30.886436 containerd[1594]: time="2025-12-16T16:16:30.886279398Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 16:16:30.913377 containerd[1594]: time="2025-12-16T16:16:30.910643067Z" level=info msg="Container aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:30.925935 containerd[1594]: time="2025-12-16T16:16:30.925770103Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\"" Dec 16 16:16:30.930319 containerd[1594]: time="2025-12-16T16:16:30.930234019Z" level=info msg="StartContainer for \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\"" Dec 16 16:16:30.931609 containerd[1594]: time="2025-12-16T16:16:30.931556268Z" level=info msg="connecting to shim aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64" address="unix:///run/containerd/s/9975f754ebaddf110af9a93eb59bd8ddb4d5d9e0033545abf9e13a8e6e385daf" protocol=ttrpc version=3 Dec 16 16:16:30.986415 systemd[1]: Started cri-containerd-aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64.scope - libcontainer container aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64. Dec 16 16:16:31.084291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3337134908.mount: Deactivated successfully. Dec 16 16:16:31.088224 systemd[1]: cri-containerd-aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64.scope: Deactivated successfully. Dec 16 16:16:31.101228 containerd[1594]: time="2025-12-16T16:16:31.101158451Z" level=info msg="received container exit event container_id:\"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\" id:\"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\" pid:3418 exited_at:{seconds:1765901791 nanos:90683802}" Dec 16 16:16:31.103280 containerd[1594]: time="2025-12-16T16:16:31.103239567Z" level=info msg="StartContainer for \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\" returns successfully" Dec 16 16:16:31.142892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64-rootfs.mount: Deactivated successfully. Dec 16 16:16:31.903682 containerd[1594]: time="2025-12-16T16:16:31.903245593Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 16:16:31.930515 containerd[1594]: time="2025-12-16T16:16:31.930455455Z" level=info msg="Container 7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:31.940631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958709774.mount: Deactivated successfully. 
Dec 16 16:16:31.955952 containerd[1594]: time="2025-12-16T16:16:31.955897075Z" level=info msg="CreateContainer within sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\"" Dec 16 16:16:31.957324 containerd[1594]: time="2025-12-16T16:16:31.957238573Z" level=info msg="StartContainer for \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\"" Dec 16 16:16:31.962495 containerd[1594]: time="2025-12-16T16:16:31.962398490Z" level=info msg="connecting to shim 7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b" address="unix:///run/containerd/s/9975f754ebaddf110af9a93eb59bd8ddb4d5d9e0033545abf9e13a8e6e385daf" protocol=ttrpc version=3 Dec 16 16:16:32.029593 systemd[1]: Started cri-containerd-7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b.scope - libcontainer container 7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b. Dec 16 16:16:32.164741 containerd[1594]: time="2025-12-16T16:16:32.164169378Z" level=info msg="StartContainer for \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\" returns successfully" Dec 16 16:16:32.523108 kubelet[2860]: I1216 16:16:32.520693 2860 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 16:16:32.639092 systemd[1]: Created slice kubepods-burstable-pod450a4c7a_b8ec_48bd_8439_75f6f427c850.slice - libcontainer container kubepods-burstable-pod450a4c7a_b8ec_48bd_8439_75f6f427c850.slice. Dec 16 16:16:32.655676 systemd[1]: Created slice kubepods-burstable-pod0145d04f_ceb3_4ae4_9ce9_19f98ad7d000.slice - libcontainer container kubepods-burstable-pod0145d04f_ceb3_4ae4_9ce9_19f98ad7d000.slice. 
Dec 16 16:16:32.696485 kubelet[2860]: I1216 16:16:32.696410 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0145d04f-ceb3-4ae4-9ce9-19f98ad7d000-config-volume\") pod \"coredns-66bc5c9577-x4s6g\" (UID: \"0145d04f-ceb3-4ae4-9ce9-19f98ad7d000\") " pod="kube-system/coredns-66bc5c9577-x4s6g" Dec 16 16:16:32.696485 kubelet[2860]: I1216 16:16:32.696480 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a4c7a-b8ec-48bd-8439-75f6f427c850-config-volume\") pod \"coredns-66bc5c9577-qc42j\" (UID: \"450a4c7a-b8ec-48bd-8439-75f6f427c850\") " pod="kube-system/coredns-66bc5c9577-qc42j" Dec 16 16:16:32.696814 kubelet[2860]: I1216 16:16:32.696524 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrzbt\" (UniqueName: \"kubernetes.io/projected/0145d04f-ceb3-4ae4-9ce9-19f98ad7d000-kube-api-access-rrzbt\") pod \"coredns-66bc5c9577-x4s6g\" (UID: \"0145d04f-ceb3-4ae4-9ce9-19f98ad7d000\") " pod="kube-system/coredns-66bc5c9577-x4s6g" Dec 16 16:16:32.696814 kubelet[2860]: I1216 16:16:32.696585 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfxfk\" (UniqueName: \"kubernetes.io/projected/450a4c7a-b8ec-48bd-8439-75f6f427c850-kube-api-access-kfxfk\") pod \"coredns-66bc5c9577-qc42j\" (UID: \"450a4c7a-b8ec-48bd-8439-75f6f427c850\") " pod="kube-system/coredns-66bc5c9577-qc42j" Dec 16 16:16:32.954259 containerd[1594]: time="2025-12-16T16:16:32.953901960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qc42j,Uid:450a4c7a-b8ec-48bd-8439-75f6f427c850,Namespace:kube-system,Attempt:0,}" Dec 16 16:16:32.971015 containerd[1594]: time="2025-12-16T16:16:32.970928386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x4s6g,Uid:0145d04f-ceb3-4ae4-9ce9-19f98ad7d000,Namespace:kube-system,Attempt:0,}" Dec 16 16:16:32.977413 kubelet[2860]: I1216 16:16:32.977255 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rhk2j" podStartSLOduration=6.138872118 podStartE2EDuration="16.977229336s" podCreationTimestamp="2025-12-16 16:16:16 +0000 UTC" firstStartedPulling="2025-12-16 16:16:17.050689183 +0000 UTC m=+5.633077976" lastFinishedPulling="2025-12-16 16:16:27.889046384 +0000 UTC m=+16.471435194" observedRunningTime="2025-12-16 16:16:32.975863428 +0000 UTC m=+21.558252266" watchObservedRunningTime="2025-12-16 16:16:32.977229336 +0000 UTC m=+21.559618148" Dec 16 16:16:33.411058 containerd[1594]: time="2025-12-16T16:16:33.410976858Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:16:33.413687 containerd[1594]: time="2025-12-16T16:16:33.413654638Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 16:16:33.417060 containerd[1594]: time="2025-12-16T16:16:33.414888538Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:16:33.417790 containerd[1594]: 
time="2025-12-16T16:16:33.416918706Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.524693957s" Dec 16 16:16:33.418267 containerd[1594]: time="2025-12-16T16:16:33.418231887Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 16:16:33.425114 containerd[1594]: time="2025-12-16T16:16:33.424986835Z" level=info msg="CreateContainer within sandbox \"3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 16:16:33.444024 containerd[1594]: time="2025-12-16T16:16:33.443977808Z" level=info msg="Container 8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:33.463865 containerd[1594]: time="2025-12-16T16:16:33.463808577Z" level=info msg="CreateContainer within sandbox \"3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\"" Dec 16 16:16:33.465543 containerd[1594]: time="2025-12-16T16:16:33.465504685Z" level=info msg="StartContainer for \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\"" Dec 16 16:16:33.468434 containerd[1594]: time="2025-12-16T16:16:33.468394501Z" level=info msg="connecting to shim 8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935" address="unix:///run/containerd/s/9103f1b2b18a61b086832b3accbee12f64c06e7fe2b56f57d8174bad27772f0a" protocol=ttrpc version=3 Dec 16 16:16:33.529273 systemd[1]: Started cri-containerd-8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935.scope - libcontainer container 8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935. 
Dec 16 16:16:33.616122 containerd[1594]: time="2025-12-16T16:16:33.615689344Z" level=info msg="StartContainer for \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\" returns successfully" Dec 16 16:16:37.022269 systemd-networkd[1483]: cilium_host: Link UP Dec 16 16:16:37.023607 systemd-networkd[1483]: cilium_net: Link UP Dec 16 16:16:37.023959 systemd-networkd[1483]: cilium_net: Gained carrier Dec 16 16:16:37.028227 systemd-networkd[1483]: cilium_host: Gained carrier Dec 16 16:16:37.113613 systemd-networkd[1483]: cilium_host: Gained IPv6LL Dec 16 16:16:37.210816 systemd-networkd[1483]: cilium_vxlan: Link UP Dec 16 16:16:37.212443 systemd-networkd[1483]: cilium_vxlan: Gained carrier Dec 16 16:16:37.440379 systemd-networkd[1483]: cilium_net: Gained IPv6LL Dec 16 16:16:37.870148 kernel: NET: Registered PF_ALG protocol family Dec 16 16:16:38.849626 systemd-networkd[1483]: cilium_vxlan: Gained IPv6LL Dec 16 16:16:39.104592 systemd-networkd[1483]: lxc_health: Link UP Dec 16 16:16:39.131577 systemd-networkd[1483]: lxc_health: Gained carrier Dec 16 16:16:39.584996 systemd-networkd[1483]: lxc73ca982a5e6d: Link UP Dec 16 16:16:39.608072 kernel: eth0: renamed from tmp342af Dec 16 16:16:39.620327 systemd-networkd[1483]: lxc73ca982a5e6d: Gained carrier Dec 16 16:16:39.709952 systemd-networkd[1483]: lxc33a673bf88ec: Link UP Dec 16 16:16:39.732570 kernel: eth0: renamed from tmp0d4df Dec 16 16:16:39.741450 systemd-networkd[1483]: lxc33a673bf88ec: Gained carrier Dec 16 16:16:40.768477 systemd-networkd[1483]: lxc73ca982a5e6d: Gained IPv6LL Dec 16 16:16:40.871232 kubelet[2860]: I1216 16:16:40.869300 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-57vjq" podStartSLOduration=8.665401078 podStartE2EDuration="24.868187749s" podCreationTimestamp="2025-12-16 16:16:16 +0000 UTC" firstStartedPulling="2025-12-16 16:16:17.217302375 +0000 UTC m=+5.799691174" lastFinishedPulling="2025-12-16 16:16:33.420089051 +0000 UTC m=+22.002477845" observedRunningTime="2025-12-16 16:16:33.96265184 +0000 UTC m=+22.545040685" watchObservedRunningTime="2025-12-16 16:16:40.868187749 +0000 UTC m=+29.450576564" Dec 16 16:16:41.089377 systemd-networkd[1483]: lxc33a673bf88ec: Gained IPv6LL Dec 16 16:16:41.152487 systemd-networkd[1483]: lxc_health: Gained IPv6LL Dec 16 16:16:45.869065 containerd[1594]: time="2025-12-16T16:16:45.866526116Z" level=info msg="connecting to shim 0d4df3769d6ea89cca3934041c974999d901ac6a50ff3605b8aa77fad2dddde1" address="unix:///run/containerd/s/00def824d9d5e55b4fcc83eee5f4165df3047d8421d25834ed4e13804916f62d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:16:45.911917 containerd[1594]: time="2025-12-16T16:16:45.911837334Z" level=info msg="connecting to shim 342af0b6b0d5241b90e538b351868093d15cdb48d9b887357453293a3155f1ae" address="unix:///run/containerd/s/e0e00c204f7fc620bd943ca5f9b9e8f37ba549d9efc6be4356b8d255e2a36e63" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:16:45.999499 systemd[1]: Started cri-containerd-0d4df3769d6ea89cca3934041c974999d901ac6a50ff3605b8aa77fad2dddde1.scope - libcontainer container 0d4df3769d6ea89cca3934041c974999d901ac6a50ff3605b8aa77fad2dddde1. Dec 16 16:16:46.010339 systemd[1]: Started cri-containerd-342af0b6b0d5241b90e538b351868093d15cdb48d9b887357453293a3155f1ae.scope - libcontainer container 342af0b6b0d5241b90e538b351868093d15cdb48d9b887357453293a3155f1ae. 
Dec 16 16:16:46.164171 containerd[1594]: time="2025-12-16T16:16:46.163891277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qc42j,Uid:450a4c7a-b8ec-48bd-8439-75f6f427c850,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d4df3769d6ea89cca3934041c974999d901ac6a50ff3605b8aa77fad2dddde1\"" Dec 16 16:16:46.183848 containerd[1594]: time="2025-12-16T16:16:46.183776922Z" level=info msg="CreateContainer within sandbox \"0d4df3769d6ea89cca3934041c974999d901ac6a50ff3605b8aa77fad2dddde1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 16:16:46.190718 containerd[1594]: time="2025-12-16T16:16:46.190674874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-x4s6g,Uid:0145d04f-ceb3-4ae4-9ce9-19f98ad7d000,Namespace:kube-system,Attempt:0,} returns sandbox id \"342af0b6b0d5241b90e538b351868093d15cdb48d9b887357453293a3155f1ae\"" Dec 16 16:16:46.198181 containerd[1594]: time="2025-12-16T16:16:46.198136068Z" level=info msg="CreateContainer within sandbox \"342af0b6b0d5241b90e538b351868093d15cdb48d9b887357453293a3155f1ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 16:16:46.214114 containerd[1594]: time="2025-12-16T16:16:46.213135238Z" level=info msg="Container d3d7f06fc4620d712ea6bf2f60a65eeff0e4618c2abc7a680c772227ffa0398b: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:46.218383 containerd[1594]: time="2025-12-16T16:16:46.218315022Z" level=info msg="Container 8bf73b073eb652ebe58a6da0fcb07d16c2a81fa4ba90c429409d30e2ec488427: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:16:46.223048 containerd[1594]: time="2025-12-16T16:16:46.222867354Z" level=info msg="CreateContainer within sandbox \"0d4df3769d6ea89cca3934041c974999d901ac6a50ff3605b8aa77fad2dddde1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3d7f06fc4620d712ea6bf2f60a65eeff0e4618c2abc7a680c772227ffa0398b\"" Dec 16 16:16:46.224078 containerd[1594]: time="2025-12-16T16:16:46.223946612Z" level=info msg="StartContainer for \"d3d7f06fc4620d712ea6bf2f60a65eeff0e4618c2abc7a680c772227ffa0398b\"" Dec 16 16:16:46.229843 containerd[1594]: time="2025-12-16T16:16:46.229790174Z" level=info msg="CreateContainer within sandbox \"342af0b6b0d5241b90e538b351868093d15cdb48d9b887357453293a3155f1ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8bf73b073eb652ebe58a6da0fcb07d16c2a81fa4ba90c429409d30e2ec488427\"" Dec 16 16:16:46.230750 containerd[1594]: time="2025-12-16T16:16:46.230649974Z" level=info msg="connecting to shim d3d7f06fc4620d712ea6bf2f60a65eeff0e4618c2abc7a680c772227ffa0398b" address="unix:///run/containerd/s/00def824d9d5e55b4fcc83eee5f4165df3047d8421d25834ed4e13804916f62d" protocol=ttrpc version=3 Dec 16 16:16:46.232081 containerd[1594]: time="2025-12-16T16:16:46.230895921Z" level=info msg="StartContainer for \"8bf73b073eb652ebe58a6da0fcb07d16c2a81fa4ba90c429409d30e2ec488427\"" Dec 16 16:16:46.238557 containerd[1594]: time="2025-12-16T16:16:46.238522035Z" level=info msg="connecting to shim 8bf73b073eb652ebe58a6da0fcb07d16c2a81fa4ba90c429409d30e2ec488427" address="unix:///run/containerd/s/e0e00c204f7fc620bd943ca5f9b9e8f37ba549d9efc6be4356b8d255e2a36e63" protocol=ttrpc version=3 Dec 16 16:16:46.272255 systemd[1]: Started cri-containerd-d3d7f06fc4620d712ea6bf2f60a65eeff0e4618c2abc7a680c772227ffa0398b.scope - libcontainer container d3d7f06fc4620d712ea6bf2f60a65eeff0e4618c2abc7a680c772227ffa0398b. 
Dec 16 16:16:46.285238 systemd[1]: Started cri-containerd-8bf73b073eb652ebe58a6da0fcb07d16c2a81fa4ba90c429409d30e2ec488427.scope - libcontainer container 8bf73b073eb652ebe58a6da0fcb07d16c2a81fa4ba90c429409d30e2ec488427. Dec 16 16:16:46.354411 containerd[1594]: time="2025-12-16T16:16:46.354343207Z" level=info msg="StartContainer for \"8bf73b073eb652ebe58a6da0fcb07d16c2a81fa4ba90c429409d30e2ec488427\" returns successfully" Dec 16 16:16:46.356442 containerd[1594]: time="2025-12-16T16:16:46.356383819Z" level=info msg="StartContainer for \"d3d7f06fc4620d712ea6bf2f60a65eeff0e4618c2abc7a680c772227ffa0398b\" returns successfully" Dec 16 16:16:46.831146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677110088.mount: Deactivated successfully. Dec 16 16:16:47.016514 kubelet[2860]: I1216 16:16:47.016356 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x4s6g" podStartSLOduration=31.016299981 podStartE2EDuration="31.016299981s" podCreationTimestamp="2025-12-16 16:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:16:47.01508857 +0000 UTC m=+35.597477403" watchObservedRunningTime="2025-12-16 16:16:47.016299981 +0000 UTC m=+35.598688788" Dec 16 16:16:47.040802 kubelet[2860]: I1216 16:16:47.040718 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qc42j" podStartSLOduration=31.040687257 podStartE2EDuration="31.040687257s" podCreationTimestamp="2025-12-16 16:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:16:47.038467558 +0000 UTC m=+35.620856365" watchObservedRunningTime="2025-12-16 16:16:47.040687257 +0000 UTC m=+35.623076076" Dec 16 16:17:29.920813 systemd[1]: Started sshd@7-10.230.59.10:22-139.178.68.195:60114.service - OpenSSH per-connection server daemon (139.178.68.195:60114). Dec 16 16:17:30.880298 sshd[4187]: Accepted publickey for core from 139.178.68.195 port 60114 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:17:30.882781 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:17:30.900180 systemd-logind[1575]: New session 10 of user core. Dec 16 16:17:30.908474 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 16:17:32.066061 sshd[4190]: Connection closed by 139.178.68.195 port 60114 Dec 16 16:17:32.065350 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Dec 16 16:17:32.071297 systemd[1]: sshd@7-10.230.59.10:22-139.178.68.195:60114.service: Deactivated successfully. Dec 16 16:17:32.074525 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 16:17:32.076406 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit. Dec 16 16:17:32.078594 systemd-logind[1575]: Removed session 10. Dec 16 16:17:37.225643 systemd[1]: Started sshd@8-10.230.59.10:22-139.178.68.195:52890.service - OpenSSH per-connection server daemon (139.178.68.195:52890). Dec 16 16:17:38.157338 sshd[4203]: Accepted publickey for core from 139.178.68.195 port 52890 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:17:38.159387 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:17:38.168970 systemd-logind[1575]: New session 11 of user core. 
Dec 16 16:17:38.178284 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 16:17:38.914249 sshd[4206]: Connection closed by 139.178.68.195 port 52890 Dec 16 16:17:38.915273 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Dec 16 16:17:38.921197 systemd[1]: sshd@8-10.230.59.10:22-139.178.68.195:52890.service: Deactivated successfully. Dec 16 16:17:38.924330 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 16:17:38.925931 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Dec 16 16:17:38.928268 systemd-logind[1575]: Removed session 11. Dec 16 16:17:44.074132 systemd[1]: Started sshd@9-10.230.59.10:22-139.178.68.195:36588.service - OpenSSH per-connection server daemon (139.178.68.195:36588). Dec 16 16:17:45.021198 sshd[4220]: Accepted publickey for core from 139.178.68.195 port 36588 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:17:45.023364 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:17:45.031078 systemd-logind[1575]: New session 12 of user core. Dec 16 16:17:45.038285 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 16:17:45.745589 sshd[4223]: Connection closed by 139.178.68.195 port 36588 Dec 16 16:17:45.747246 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Dec 16 16:17:45.755166 systemd[1]: sshd@9-10.230.59.10:22-139.178.68.195:36588.service: Deactivated successfully. Dec 16 16:17:45.759625 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 16:17:45.763096 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. Dec 16 16:17:45.765060 systemd-logind[1575]: Removed session 12. Dec 16 16:17:50.909921 systemd[1]: Started sshd@10-10.230.59.10:22-139.178.68.195:55338.service - OpenSSH per-connection server daemon (139.178.68.195:55338). Dec 16 16:17:51.833996 sshd[4238]: Accepted publickey for core from 139.178.68.195 port 55338 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:17:51.836904 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:17:51.846114 systemd-logind[1575]: New session 13 of user core. Dec 16 16:17:51.850692 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 16:17:52.564288 sshd[4241]: Connection closed by 139.178.68.195 port 55338 Dec 16 16:17:52.564893 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Dec 16 16:17:52.571979 systemd[1]: sshd@10-10.230.59.10:22-139.178.68.195:55338.service: Deactivated successfully. Dec 16 16:17:52.575210 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 16:17:52.576514 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. Dec 16 16:17:52.579174 systemd-logind[1575]: Removed session 13. Dec 16 16:17:52.722149 systemd[1]: Started sshd@11-10.230.59.10:22-139.178.68.195:55342.service - OpenSSH per-connection server daemon (139.178.68.195:55342). Dec 16 16:17:53.639681 sshd[4253]: Accepted publickey for core from 139.178.68.195 port 55342 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:17:53.641862 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:17:53.650383 systemd-logind[1575]: New session 14 of user core. Dec 16 16:17:53.659262 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 16:17:54.446073 sshd[4256]: Connection closed by 139.178.68.195 port 55342 Dec 16 16:17:54.445866 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Dec 16 16:17:54.451065 systemd[1]: sshd@11-10.230.59.10:22-139.178.68.195:55342.service: Deactivated successfully. Dec 16 16:17:54.453952 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 16:17:54.459095 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. Dec 16 16:17:54.461190 systemd-logind[1575]: Removed session 14. Dec 16 16:17:54.608940 systemd[1]: Started sshd@12-10.230.59.10:22-139.178.68.195:55356.service - OpenSSH per-connection server daemon (139.178.68.195:55356). Dec 16 16:17:55.538796 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 55356 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:17:55.541335 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:17:55.550132 systemd-logind[1575]: New session 15 of user core. Dec 16 16:17:55.555238 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 16:17:56.250856 sshd[4269]: Connection closed by 139.178.68.195 port 55356 Dec 16 16:17:56.251950 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Dec 16 16:17:56.257122 systemd[1]: sshd@12-10.230.59.10:22-139.178.68.195:55356.service: Deactivated successfully. Dec 16 16:17:56.260561 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 16:17:56.263572 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. Dec 16 16:17:56.266565 systemd-logind[1575]: Removed session 15. Dec 16 16:18:01.412777 systemd[1]: Started sshd@13-10.230.59.10:22-139.178.68.195:51810.service - OpenSSH per-connection server daemon (139.178.68.195:51810). Dec 16 16:18:02.333682 sshd[4281]: Accepted publickey for core from 139.178.68.195 port 51810 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:02.336042 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:02.344714 systemd-logind[1575]: New session 16 of user core. Dec 16 16:18:02.351220 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 16:18:03.058817 sshd[4284]: Connection closed by 139.178.68.195 port 51810 Dec 16 16:18:03.060132 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:03.067049 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. Dec 16 16:18:03.067448 systemd[1]: sshd@13-10.230.59.10:22-139.178.68.195:51810.service: Deactivated successfully. Dec 16 16:18:03.071184 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 16:18:03.073972 systemd-logind[1575]: Removed session 16. Dec 16 16:18:08.218433 systemd[1]: Started sshd@14-10.230.59.10:22-139.178.68.195:51822.service - OpenSSH per-connection server daemon (139.178.68.195:51822). Dec 16 16:18:09.142222 sshd[4297]: Accepted publickey for core from 139.178.68.195 port 51822 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:09.144242 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:09.153086 systemd-logind[1575]: New session 17 of user core. Dec 16 16:18:09.159309 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 16:18:09.860101 sshd[4300]: Connection closed by 139.178.68.195 port 51822 Dec 16 16:18:09.861142 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:09.868204 systemd[1]: sshd@14-10.230.59.10:22-139.178.68.195:51822.service: Deactivated successfully. Dec 16 16:18:09.871772 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 16:18:09.873365 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit. Dec 16 16:18:09.876282 systemd-logind[1575]: Removed session 17. Dec 16 16:18:10.030465 systemd[1]: Started sshd@15-10.230.59.10:22-139.178.68.195:51826.service - OpenSSH per-connection server daemon (139.178.68.195:51826). Dec 16 16:18:10.963900 sshd[4312]: Accepted publickey for core from 139.178.68.195 port 51826 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:10.965915 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:10.974361 systemd-logind[1575]: New session 18 of user core. Dec 16 16:18:10.979290 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 16:18:12.110028 sshd[4315]: Connection closed by 139.178.68.195 port 51826 Dec 16 16:18:12.111521 sshd-session[4312]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:12.116856 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit. Dec 16 16:18:12.118422 systemd[1]: sshd@15-10.230.59.10:22-139.178.68.195:51826.service: Deactivated successfully. Dec 16 16:18:12.121329 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 16:18:12.125690 systemd-logind[1575]: Removed session 18. Dec 16 16:18:12.275216 systemd[1]: Started sshd@16-10.230.59.10:22-139.178.68.195:41216.service - OpenSSH per-connection server daemon (139.178.68.195:41216). Dec 16 16:18:13.208918 sshd[4327]: Accepted publickey for core from 139.178.68.195 port 41216 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:13.210882 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:13.219850 systemd-logind[1575]: New session 19 of user core. Dec 16 16:18:13.224533 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 16:18:14.690678 sshd[4330]: Connection closed by 139.178.68.195 port 41216 Dec 16 16:18:14.691586 sshd-session[4327]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:14.697603 systemd[1]: sshd@16-10.230.59.10:22-139.178.68.195:41216.service: Deactivated successfully. Dec 16 16:18:14.700578 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 16:18:14.702021 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit. Dec 16 16:18:14.704475 systemd-logind[1575]: Removed session 19. Dec 16 16:18:14.846496 systemd[1]: Started sshd@17-10.230.59.10:22-139.178.68.195:41226.service - OpenSSH per-connection server daemon (139.178.68.195:41226). Dec 16 16:18:15.760710 sshd[4345]: Accepted publickey for core from 139.178.68.195 port 41226 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:15.764184 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:15.773412 systemd-logind[1575]: New session 20 of user core. Dec 16 16:18:15.779256 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 16:18:16.708386 sshd[4350]: Connection closed by 139.178.68.195 port 41226 Dec 16 16:18:16.709021 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:16.716264 systemd[1]: sshd@17-10.230.59.10:22-139.178.68.195:41226.service: Deactivated successfully. Dec 16 16:18:16.718836 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 16:18:16.720862 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit. Dec 16 16:18:16.723654 systemd-logind[1575]: Removed session 20. Dec 16 16:18:16.868482 systemd[1]: Started sshd@18-10.230.59.10:22-139.178.68.195:41238.service - OpenSSH per-connection server daemon (139.178.68.195:41238). Dec 16 16:18:17.790681 sshd[4360]: Accepted publickey for core from 139.178.68.195 port 41238 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:17.793089 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:17.804119 systemd-logind[1575]: New session 21 of user core. Dec 16 16:18:17.814311 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 16:18:18.505291 sshd[4363]: Connection closed by 139.178.68.195 port 41238 Dec 16 16:18:18.505902 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:18.513582 systemd[1]: sshd@18-10.230.59.10:22-139.178.68.195:41238.service: Deactivated successfully. Dec 16 16:18:18.518524 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 16:18:18.520957 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit. Dec 16 16:18:18.523605 systemd-logind[1575]: Removed session 21. Dec 16 16:18:23.668983 systemd[1]: Started sshd@19-10.230.59.10:22-139.178.68.195:38718.service - OpenSSH per-connection server daemon (139.178.68.195:38718). Dec 16 16:18:24.601155 sshd[4379]: Accepted publickey for core from 139.178.68.195 port 38718 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:24.603625 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:24.611546 systemd-logind[1575]: New session 22 of user core. Dec 16 16:18:24.618291 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 16:18:25.316715 sshd[4382]: Connection closed by 139.178.68.195 port 38718 Dec 16 16:18:25.315881 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:25.321767 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit. Dec 16 16:18:25.322902 systemd[1]: sshd@19-10.230.59.10:22-139.178.68.195:38718.service: Deactivated successfully. Dec 16 16:18:25.326368 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 16:18:25.329799 systemd-logind[1575]: Removed session 22. Dec 16 16:18:30.513412 systemd[1]: Started sshd@20-10.230.59.10:22-139.178.68.195:42078.service - OpenSSH per-connection server daemon (139.178.68.195:42078). Dec 16 16:18:31.202159 systemd[1]: Started sshd@21-10.230.59.10:22-62.221.114.157:35676.service - OpenSSH per-connection server daemon (62.221.114.157:35676). Dec 16 16:18:31.526545 sshd[4395]: Accepted publickey for core from 139.178.68.195 port 42078 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:31.528129 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:31.537165 systemd-logind[1575]: New session 23 of user core. Dec 16 16:18:31.547429 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 16:18:32.033939 sshd[4399]: Invalid user guest from 62.221.114.157 port 35676 Dec 16 16:18:32.223884 sshd-session[4409]: pam_faillock(sshd:auth): User unknown Dec 16 16:18:32.229575 sshd[4399]: Postponed keyboard-interactive for invalid user guest from 62.221.114.157 port 35676 ssh2 [preauth] Dec 16 16:18:32.301320 sshd[4402]: Connection closed by 139.178.68.195 port 42078 Dec 16 16:18:32.301864 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:32.310229 systemd[1]: sshd@20-10.230.59.10:22-139.178.68.195:42078.service: Deactivated successfully. Dec 16 16:18:32.314022 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 16:18:32.319263 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit. Dec 16 16:18:32.322251 systemd-logind[1575]: Removed session 23. Dec 16 16:18:32.412468 sshd-session[4409]: pam_unix(sshd:auth): check pass; user unknown Dec 16 16:18:32.412528 sshd-session[4409]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=62.221.114.157 Dec 16 16:18:32.413587 sshd-session[4409]: pam_faillock(sshd:auth): User unknown Dec 16 16:18:32.444663 systemd[1]: Started sshd@22-10.230.59.10:22-139.178.68.195:42084.service - OpenSSH per-connection server daemon (139.178.68.195:42084). Dec 16 16:18:33.361241 sshd[4414]: Accepted publickey for core from 139.178.68.195 port 42084 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:33.363135 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:33.370246 systemd-logind[1575]: New session 24 of user core. Dec 16 16:18:33.382363 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 16:18:34.130652 sshd[4399]: PAM: Permission denied for illegal user guest from 62.221.114.157 Dec 16 16:18:34.131392 sshd[4399]: Failed keyboard-interactive/pam for invalid user guest from 62.221.114.157 port 35676 ssh2 Dec 16 16:18:34.353515 sshd[4399]: Connection closed by invalid user guest 62.221.114.157 port 35676 [preauth] Dec 16 16:18:34.356854 systemd[1]: sshd@21-10.230.59.10:22-62.221.114.157:35676.service: Deactivated successfully. Dec 16 16:18:35.323922 containerd[1594]: time="2025-12-16T16:18:35.323771691Z" level=info msg="StopContainer for \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\" with timeout 30 (s)" Dec 16 16:18:35.342207 containerd[1594]: time="2025-12-16T16:18:35.342012700Z" level=info msg="Stop container \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\" with signal terminated" Dec 16 16:18:35.415656 containerd[1594]: time="2025-12-16T16:18:35.415569643Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 16:18:35.426324 systemd[1]: cri-containerd-8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935.scope: Deactivated successfully. 
Dec 16 16:18:35.432414 containerd[1594]: time="2025-12-16T16:18:35.432367211Z" level=info msg="StopContainer for \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\" with timeout 2 (s)" Dec 16 16:18:35.432839 containerd[1594]: time="2025-12-16T16:18:35.432796154Z" level=info msg="Stop container \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\" with signal terminated" Dec 16 16:18:35.436272 containerd[1594]: time="2025-12-16T16:18:35.435894251Z" level=info msg="received container exit event container_id:\"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\" id:\"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\" pid:3604 exited_at:{seconds:1765901915 nanos:430567475}" Dec 16 16:18:35.459918 systemd-networkd[1483]: lxc_health: Link DOWN Dec 16 16:18:35.459935 systemd-networkd[1483]: lxc_health: Lost carrier Dec 16 16:18:35.491099 systemd[1]: cri-containerd-7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b.scope: Deactivated successfully. Dec 16 16:18:35.492530 containerd[1594]: time="2025-12-16T16:18:35.491602106Z" level=info msg="received container exit event container_id:\"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\" id:\"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\" pid:3467 exited_at:{seconds:1765901915 nanos:491018875}" Dec 16 16:18:35.491650 systemd[1]: cri-containerd-7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b.scope: Consumed 10.882s CPU time, 201.9M memory peak, 80.3M read from disk, 13.3M written to disk. Dec 16 16:18:35.504647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935-rootfs.mount: Deactivated successfully. Dec 16 16:18:35.518927 containerd[1594]: time="2025-12-16T16:18:35.518660523Z" level=info msg="StopContainer for \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\" returns successfully" Dec 16 16:18:35.524286 containerd[1594]: time="2025-12-16T16:18:35.524236983Z" level=info msg="StopPodSandbox for \"3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0\"" Dec 16 16:18:35.526436 containerd[1594]: time="2025-12-16T16:18:35.526404682Z" level=info msg="Container to stop \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 16:18:35.547483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b-rootfs.mount: Deactivated successfully. Dec 16 16:18:35.552393 systemd[1]: cri-containerd-3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0.scope: Deactivated successfully. 
Dec 16 16:18:35.562997 containerd[1594]: time="2025-12-16T16:18:35.562819472Z" level=info msg="StopContainer for \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\" returns successfully" Dec 16 16:18:35.563562 containerd[1594]: time="2025-12-16T16:18:35.563529753Z" level=info msg="StopPodSandbox for \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\"" Dec 16 16:18:35.563809 containerd[1594]: time="2025-12-16T16:18:35.563775788Z" level=info msg="Container to stop \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 16:18:35.563945 containerd[1594]: time="2025-12-16T16:18:35.563919506Z" level=info msg="Container to stop \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 16:18:35.564190 containerd[1594]: time="2025-12-16T16:18:35.564118200Z" level=info msg="Container to stop \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 16:18:35.564190 containerd[1594]: time="2025-12-16T16:18:35.564150770Z" level=info msg="Container to stop \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 16:18:35.564666 containerd[1594]: time="2025-12-16T16:18:35.564172226Z" level=info msg="Container to stop \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 16 16:18:35.565232 containerd[1594]: time="2025-12-16T16:18:35.565135769Z" level=info msg="received sandbox exit event container_id:\"3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0\" id:\"3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0\" exit_status:137 exited_at:{seconds:1765901915 nanos:564561616}" monitor_name=podsandbox Dec 16 16:18:35.577153 systemd[1]: cri-containerd-37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984.scope: Deactivated successfully. Dec 16 16:18:35.583960 containerd[1594]: time="2025-12-16T16:18:35.583876473Z" level=info msg="received sandbox exit event container_id:\"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" id:\"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" exit_status:137 exited_at:{seconds:1765901915 nanos:583380394}" monitor_name=podsandbox Dec 16 16:18:35.619341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0-rootfs.mount: Deactivated successfully. Dec 16 16:18:35.629168 containerd[1594]: time="2025-12-16T16:18:35.628886264Z" level=info msg="shim disconnected" id=3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0 namespace=k8s.io Dec 16 16:18:35.629345 containerd[1594]: time="2025-12-16T16:18:35.629169463Z" level=warning msg="cleaning up after shim disconnected" id=3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0 namespace=k8s.io Dec 16 16:18:35.654765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984-rootfs.mount: Deactivated successfully. 
Dec 16 16:18:35.659599 containerd[1594]: time="2025-12-16T16:18:35.629212727Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 16:18:35.660454 containerd[1594]: time="2025-12-16T16:18:35.660167027Z" level=info msg="shim disconnected" id=37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984 namespace=k8s.io Dec 16 16:18:35.660454 containerd[1594]: time="2025-12-16T16:18:35.660203064Z" level=warning msg="cleaning up after shim disconnected" id=37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984 namespace=k8s.io Dec 16 16:18:35.660454 containerd[1594]: time="2025-12-16T16:18:35.660216843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 16:18:35.687864 containerd[1594]: time="2025-12-16T16:18:35.687788415Z" level=info msg="received sandbox container exit event sandbox_id:\"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" exit_status:137 exited_at:{seconds:1765901915 nanos:583380394}" monitor_name=criService Dec 16 16:18:35.690356 containerd[1594]: time="2025-12-16T16:18:35.690307524Z" level=info msg="TearDown network for sandbox \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" successfully" Dec 16 16:18:35.690356 containerd[1594]: time="2025-12-16T16:18:35.690344314Z" level=info msg="StopPodSandbox for \"37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984\" returns successfully" Dec 16 16:18:35.694003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37e9d15e7e2a112bf001f3297af22fcbe8e8f2c8756891cfa67e0f977cec3984-shm.mount: Deactivated successfully. Dec 16 16:18:35.696521 containerd[1594]: time="2025-12-16T16:18:35.696463148Z" level=info msg="received sandbox container exit event sandbox_id:\"3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0\" exit_status:137 exited_at:{seconds:1765901915 nanos:564561616}" monitor_name=criService Dec 16 16:18:35.697434 containerd[1594]: time="2025-12-16T16:18:35.697018239Z" level=info msg="TearDown network for sandbox \"3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0\" successfully" Dec 16 16:18:35.697581 containerd[1594]: time="2025-12-16T16:18:35.697554409Z" level=info msg="StopPodSandbox for \"3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0\" returns successfully" Dec 16 16:18:35.810057 kubelet[2860]: I1216 16:18:35.809966 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-xtables-lock\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.811590 kubelet[2860]: I1216 16:18:35.809970 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.811590 kubelet[2860]: I1216 16:18:35.810982 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f9rr\" (UniqueName: \"kubernetes.io/projected/adef91e5-15a0-4ce8-a00d-bcff575fd802-kube-api-access-2f9rr\") pod \"adef91e5-15a0-4ce8-a00d-bcff575fd802\" (UID: \"adef91e5-15a0-4ce8-a00d-bcff575fd802\") " Dec 16 16:18:35.811590 kubelet[2860]: I1216 16:18:35.811144 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-cgroup\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.811590 kubelet[2860]: I1216 16:18:35.811183 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-host-proc-sys-kernel\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.811590 kubelet[2860]: I1216 16:18:35.811270 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cni-path\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.811590 kubelet[2860]: I1216 16:18:35.811322 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-hostproc\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.811924 kubelet[2860]: I1216 16:18:35.811348 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-host-proc-sys-net\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.811924 kubelet[2860]: I1216 16:18:35.811403 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adef91e5-15a0-4ce8-a00d-bcff575fd802-cilium-config-path\") pod \"adef91e5-15a0-4ce8-a00d-bcff575fd802\" (UID: \"adef91e5-15a0-4ce8-a00d-bcff575fd802\") " Dec 16 16:18:35.811924 kubelet[2860]: I1216 16:18:35.811444 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-config-path\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.811924 kubelet[2860]: I1216 16:18:35.811592 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-bpf-maps\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.811924 kubelet[2860]: I1216 16:18:35.811625 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-run\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: 
\"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.811924 kubelet[2860]: I1216 16:18:35.811774 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-etc-cni-netd\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.812367 kubelet[2860]: I1216 16:18:35.811920 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96vxq\" (UniqueName: \"kubernetes.io/projected/96f3a87d-c857-4a7a-aa8b-4a40191468c4-kube-api-access-96vxq\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.812367 kubelet[2860]: I1216 16:18:35.811957 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-lib-modules\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.812367 kubelet[2860]: I1216 16:18:35.812084 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96f3a87d-c857-4a7a-aa8b-4a40191468c4-clustermesh-secrets\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.812527 kubelet[2860]: I1216 16:18:35.812438 2860 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96f3a87d-c857-4a7a-aa8b-4a40191468c4-hubble-tls\") pod \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\" (UID: \"96f3a87d-c857-4a7a-aa8b-4a40191468c4\") " Dec 16 16:18:35.812580 kubelet[2860]: I1216 16:18:35.812535 2860 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-xtables-lock\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.813293 kubelet[2860]: I1216 16:18:35.813264 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.814583 kubelet[2860]: I1216 16:18:35.814086 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.814684 kubelet[2860]: I1216 16:18:35.814160 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.820143 kubelet[2860]: I1216 16:18:35.814795 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.820143 kubelet[2860]: I1216 16:18:35.814818 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.820143 kubelet[2860]: I1216 16:18:35.818790 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.820143 kubelet[2860]: I1216 16:18:35.818835 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.820143 kubelet[2860]: I1216 16:18:35.818859 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.823220 kubelet[2860]: I1216 16:18:35.823006 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 16 16:18:35.833437 kubelet[2860]: I1216 16:18:35.833289 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adef91e5-15a0-4ce8-a00d-bcff575fd802-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "adef91e5-15a0-4ce8-a00d-bcff575fd802" (UID: "adef91e5-15a0-4ce8-a00d-bcff575fd802"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 16:18:35.835516 kubelet[2860]: I1216 16:18:35.833454 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adef91e5-15a0-4ce8-a00d-bcff575fd802-kube-api-access-2f9rr" (OuterVolumeSpecName: "kube-api-access-2f9rr") pod "adef91e5-15a0-4ce8-a00d-bcff575fd802" (UID: "adef91e5-15a0-4ce8-a00d-bcff575fd802"). InnerVolumeSpecName "kube-api-access-2f9rr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 16:18:35.835690 kubelet[2860]: I1216 16:18:35.834341 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f3a87d-c857-4a7a-aa8b-4a40191468c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 16:18:35.841327 kubelet[2860]: I1216 16:18:35.841277 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f3a87d-c857-4a7a-aa8b-4a40191468c4-kube-api-access-96vxq" (OuterVolumeSpecName: "kube-api-access-96vxq") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "kube-api-access-96vxq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 16:18:35.844351 kubelet[2860]: I1216 16:18:35.843698 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 16:18:35.845993 kubelet[2860]: I1216 16:18:35.845957 2860 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f3a87d-c857-4a7a-aa8b-4a40191468c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "96f3a87d-c857-4a7a-aa8b-4a40191468c4" (UID: "96f3a87d-c857-4a7a-aa8b-4a40191468c4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 16:18:35.913308 kubelet[2860]: I1216 16:18:35.913237 2860 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-config-path\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.913683 kubelet[2860]: I1216 16:18:35.913656 2860 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-bpf-maps\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.913806 kubelet[2860]: I1216 16:18:35.913786 2860 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-run\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914174 kubelet[2860]: I1216 16:18:35.913902 2860 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-etc-cni-netd\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914174 kubelet[2860]: I1216 16:18:35.913936 2860 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-96vxq\" (UniqueName: \"kubernetes.io/projected/96f3a87d-c857-4a7a-aa8b-4a40191468c4-kube-api-access-96vxq\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914174 kubelet[2860]: I1216 16:18:35.913955 2860 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-lib-modules\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914174 kubelet[2860]: I1216 16:18:35.913971 2860 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96f3a87d-c857-4a7a-aa8b-4a40191468c4-clustermesh-secrets\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914174 kubelet[2860]: I1216 16:18:35.913987 2860 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96f3a87d-c857-4a7a-aa8b-4a40191468c4-hubble-tls\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914174 kubelet[2860]: I1216 16:18:35.914006 2860 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2f9rr\" (UniqueName: \"kubernetes.io/projected/adef91e5-15a0-4ce8-a00d-bcff575fd802-kube-api-access-2f9rr\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914174 kubelet[2860]: I1216 16:18:35.914022 2860 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cilium-cgroup\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914498 kubelet[2860]: I1216 16:18:35.914054 2860 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-host-proc-sys-kernel\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914498 kubelet[2860]: I1216 16:18:35.914074 2860 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-cni-path\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914498 
kubelet[2860]: I1216 16:18:35.914089 2860 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-hostproc\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914498 kubelet[2860]: I1216 16:18:35.914119 2860 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96f3a87d-c857-4a7a-aa8b-4a40191468c4-host-proc-sys-net\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:35.914498 kubelet[2860]: I1216 16:18:35.914140 2860 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adef91e5-15a0-4ce8-a00d-bcff575fd802-cilium-config-path\") on node \"srv-899vz.gb1.brightbox.com\" DevicePath \"\"" Dec 16 16:18:36.346562 kubelet[2860]: I1216 16:18:36.346145 2860 scope.go:117] "RemoveContainer" containerID="7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b" Dec 16 16:18:36.359294 systemd[1]: Removed slice kubepods-burstable-pod96f3a87d_c857_4a7a_aa8b_4a40191468c4.slice - libcontainer container kubepods-burstable-pod96f3a87d_c857_4a7a_aa8b_4a40191468c4.slice. Dec 16 16:18:36.359958 systemd[1]: kubepods-burstable-pod96f3a87d_c857_4a7a_aa8b_4a40191468c4.slice: Consumed 11.056s CPU time, 202.2M memory peak, 80.3M read from disk, 13.3M written to disk. Dec 16 16:18:36.364709 containerd[1594]: time="2025-12-16T16:18:36.363468467Z" level=info msg="RemoveContainer for \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\"" Dec 16 16:18:36.364196 systemd[1]: Removed slice kubepods-besteffort-podadef91e5_15a0_4ce8_a00d_bcff575fd802.slice - libcontainer container kubepods-besteffort-podadef91e5_15a0_4ce8_a00d_bcff575fd802.slice. 
Dec 16 16:18:36.377846 containerd[1594]: time="2025-12-16T16:18:36.377704705Z" level=info msg="RemoveContainer for \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\" returns successfully" Dec 16 16:18:36.381140 kubelet[2860]: I1216 16:18:36.381021 2860 scope.go:117] "RemoveContainer" containerID="aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64" Dec 16 16:18:36.385111 containerd[1594]: time="2025-12-16T16:18:36.384918975Z" level=info msg="RemoveContainer for \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\"" Dec 16 16:18:36.398120 containerd[1594]: time="2025-12-16T16:18:36.397912270Z" level=info msg="RemoveContainer for \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\" returns successfully" Dec 16 16:18:36.399084 kubelet[2860]: I1216 16:18:36.399004 2860 scope.go:117] "RemoveContainer" containerID="cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724" Dec 16 16:18:36.407251 containerd[1594]: time="2025-12-16T16:18:36.407111579Z" level=info msg="RemoveContainer for \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\"" Dec 16 16:18:36.417256 containerd[1594]: time="2025-12-16T16:18:36.416675549Z" level=info msg="RemoveContainer for \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\" returns successfully" Dec 16 16:18:36.417990 kubelet[2860]: I1216 16:18:36.417799 2860 scope.go:117] "RemoveContainer" containerID="e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed" Dec 16 16:18:36.422289 containerd[1594]: time="2025-12-16T16:18:36.422146642Z" level=info msg="RemoveContainer for \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\"" Dec 16 16:18:36.429967 containerd[1594]: time="2025-12-16T16:18:36.429922648Z" level=info msg="RemoveContainer for \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\" returns successfully" Dec 16 16:18:36.431307 kubelet[2860]: I1216 16:18:36.431275 2860 scope.go:117] "RemoveContainer" containerID="c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a" Dec 16 16:18:36.435349 containerd[1594]: time="2025-12-16T16:18:36.435313808Z" level=info msg="RemoveContainer for \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\"" Dec 16 16:18:36.441374 containerd[1594]: time="2025-12-16T16:18:36.441335510Z" level=info msg="RemoveContainer for \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\" returns successfully" Dec 16 16:18:36.441649 kubelet[2860]: I1216 16:18:36.441599 2860 scope.go:117] "RemoveContainer" containerID="7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b" Dec 16 16:18:36.442262 containerd[1594]: time="2025-12-16T16:18:36.442196331Z" level=error msg="ContainerStatus for \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\": not found" Dec 16 16:18:36.446919 kubelet[2860]: E1216 16:18:36.445395 2860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\": not found" containerID="7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b" Dec 16 16:18:36.446919 kubelet[2860]: I1216 16:18:36.445508 2860 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b"} err="failed to get container status \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7de3ff119bbb03040d7b3c93146edea09caffe0850ddea33d6de4b5ac1c6650b\": not found" Dec 16 16:18:36.446919 kubelet[2860]: I1216 16:18:36.445606 2860 scope.go:117] "RemoveContainer" containerID="aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64" Dec 16 16:18:36.447471 containerd[1594]: time="2025-12-16T16:18:36.447424191Z" level=error msg="ContainerStatus for \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\": not found" Dec 16 16:18:36.459675 kubelet[2860]: E1216 16:18:36.459629 2860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\": not found" containerID="aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64" Dec 16 16:18:36.459838 kubelet[2860]: I1216 16:18:36.459685 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64"} err="failed to get container status \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\": rpc error: code = NotFound desc = an error occurred when try to find container \"aff721b34f8352d85d8ad21bee41fb5ac09c426720c2fef315e9c57ed9b47b64\": not found" Dec 16 16:18:36.459838 kubelet[2860]: I1216 16:18:36.459718 2860 scope.go:117] "RemoveContainer" containerID="cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724" Dec 16 16:18:36.460196 containerd[1594]: time="2025-12-16T16:18:36.460141507Z" level=error msg="ContainerStatus for \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\": not found" Dec 16 16:18:36.460731 kubelet[2860]: E1216 16:18:36.460389 2860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\": not found" containerID="cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724" Dec 16 16:18:36.460731 kubelet[2860]: I1216 16:18:36.460426 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724"} err="failed to get container status \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd0bb4d006553a3688c166559097365f18845bbdc4155c362c43a4542141f724\": not found" Dec 16 16:18:36.460731 kubelet[2860]: I1216 16:18:36.460450 2860 scope.go:117] "RemoveContainer" containerID="e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed" Dec 16 16:18:36.461510 containerd[1594]: time="2025-12-16T16:18:36.461464338Z" level=error msg="ContainerStatus for \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\": not found" Dec 16 16:18:36.462065 kubelet[2860]: E1216 16:18:36.461712 2860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\": not found" containerID="e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed" Dec 16 16:18:36.462239 kubelet[2860]: I1216 16:18:36.462209 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed"} err="failed to get container status \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9b1ed52435b38b7cd0d08aa4296963fecb8bb9950da79819bd1ed118a51d5ed\": not found" Dec 16 16:18:36.462352 kubelet[2860]: I1216 16:18:36.462327 2860 scope.go:117] "RemoveContainer" containerID="c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a" Dec 16 16:18:36.463272 containerd[1594]: time="2025-12-16T16:18:36.463229913Z" level=error msg="ContainerStatus for \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\": not found" Dec 16 16:18:36.463494 kubelet[2860]: E1216 16:18:36.463460 2860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\": not found" containerID="c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a" Dec 16 16:18:36.463569 kubelet[2860]: I1216 16:18:36.463497 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a"} err="failed to get container status \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2f86e436cc406bc3f47435cccdee4805347fb930badb375218aea1c5132100a\": not found" Dec 16 16:18:36.463569 kubelet[2860]: I1216 16:18:36.463561 2860 scope.go:117] "RemoveContainer" containerID="8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935" Dec 16 16:18:36.468394 containerd[1594]: time="2025-12-16T16:18:36.468349511Z" level=info msg="RemoveContainer for \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\"" Dec 16 16:18:36.491966 containerd[1594]: time="2025-12-16T16:18:36.491730607Z" level=info msg="RemoveContainer for \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\" returns successfully" Dec 16 16:18:36.492738 kubelet[2860]: I1216 16:18:36.492657 2860 scope.go:117] "RemoveContainer" containerID="8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935" Dec 16 16:18:36.493284 containerd[1594]: time="2025-12-16T16:18:36.493126106Z" level=error msg="ContainerStatus for \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\": not found" Dec 16 16:18:36.494384 
kubelet[2860]: E1216 16:18:36.494349 2860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\": not found" containerID="8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935" Dec 16 16:18:36.495436 kubelet[2860]: I1216 16:18:36.495234 2860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935"} err="failed to get container status \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d162f9ce54d8d3babe2598ddac2129be3a39040996094923b3614eb6176d935\": not found" Dec 16 16:18:36.506683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3602c11958df86bc3b8a2bfc7692586c508b1a8886265c736d6842314b84d9b0-shm.mount: Deactivated successfully. Dec 16 16:18:36.506887 systemd[1]: var-lib-kubelet-pods-adef91e5\x2d15a0\x2d4ce8\x2da00d\x2dbcff575fd802-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2f9rr.mount: Deactivated successfully. Dec 16 16:18:36.507014 systemd[1]: var-lib-kubelet-pods-96f3a87d\x2dc857\x2d4a7a\x2daa8b\x2d4a40191468c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d96vxq.mount: Deactivated successfully. Dec 16 16:18:36.507179 systemd[1]: var-lib-kubelet-pods-96f3a87d\x2dc857\x2d4a7a\x2daa8b\x2d4a40191468c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 16 16:18:36.507292 systemd[1]: var-lib-kubelet-pods-96f3a87d\x2dc857\x2d4a7a\x2daa8b\x2d4a40191468c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 16 16:18:36.923971 kubelet[2860]: E1216 16:18:36.923759 2860 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 16:18:37.392394 sshd[4417]: Connection closed by 139.178.68.195 port 42084 Dec 16 16:18:37.393901 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:37.406680 systemd[1]: sshd@22-10.230.59.10:22-139.178.68.195:42084.service: Deactivated successfully. Dec 16 16:18:37.409554 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 16:18:37.413995 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit. Dec 16 16:18:37.416260 systemd-logind[1575]: Removed session 24. Dec 16 16:18:37.551540 systemd[1]: Started sshd@23-10.230.59.10:22-139.178.68.195:42100.service - OpenSSH per-connection server daemon (139.178.68.195:42100). 
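The repeated pattern above, where RemoveContainer succeeds and a follow-up ContainerStatus call comes back NotFound, is a benign race: the container is already gone, so the kubelet records the error and treats the delete as complete rather than retrying. A minimal Go sketch of that tolerance, assuming (as these entries show) that the runtime surfaces the condition as a gRPC NotFound status; the helper name is made up for illustration.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// containerGone reports whether an error from a CRI call such as
// ContainerStatus means the container no longer exists, which is the
// "rpc error: code = NotFound" case logged above.
func containerGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Simulate the runtime's response for an already-removed container.
	err := status.Error(codes.NotFound, "container not found")
	if containerGone(err) {
		// Mirror the kubelet's reaction in the log: note the error,
		// treat the container as deleted, and move on.
		fmt.Println("container already gone; nothing left to delete")
	}
}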
Dec 16 16:18:37.750411 kubelet[2860]: I1216 16:18:37.750237 2860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f3a87d-c857-4a7a-aa8b-4a40191468c4" path="/var/lib/kubelet/pods/96f3a87d-c857-4a7a-aa8b-4a40191468c4/volumes" Dec 16 16:18:37.752615 kubelet[2860]: I1216 16:18:37.752562 2860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adef91e5-15a0-4ce8-a00d-bcff575fd802" path="/var/lib/kubelet/pods/adef91e5-15a0-4ce8-a00d-bcff575fd802/volumes" Dec 16 16:18:38.478752 sshd[4568]: Accepted publickey for core from 139.178.68.195 port 42100 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:38.481457 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:38.491139 systemd-logind[1575]: New session 25 of user core. Dec 16 16:18:38.500357 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 16:18:39.711665 systemd[1]: Created slice kubepods-burstable-podb8bef970_80d7_442c_843b_7b7102ef8c50.slice - libcontainer container kubepods-burstable-podb8bef970_80d7_442c_843b_7b7102ef8c50.slice. Dec 16 16:18:39.840443 kubelet[2860]: I1216 16:18:39.840380 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8bef970-80d7-442c-843b-7b7102ef8c50-cilium-config-path\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.840443 kubelet[2860]: I1216 16:18:39.840442 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-cilium-cgroup\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841132 kubelet[2860]: I1216 16:18:39.840475 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-lib-modules\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841132 kubelet[2860]: I1216 16:18:39.840513 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-host-proc-sys-net\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841132 kubelet[2860]: I1216 16:18:39.840546 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-bpf-maps\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841132 kubelet[2860]: I1216 16:18:39.840583 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-hostproc\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841132 kubelet[2860]: I1216 16:18:39.840628 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-cni-path\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841132 kubelet[2860]: I1216 16:18:39.840662 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b8bef970-80d7-442c-843b-7b7102ef8c50-hubble-tls\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841460 kubelet[2860]: I1216 16:18:39.840711 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8bef970-80d7-442c-843b-7b7102ef8c50-clustermesh-secrets\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841460 kubelet[2860]: I1216 16:18:39.840752 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-host-proc-sys-kernel\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841460 kubelet[2860]: I1216 16:18:39.840781 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-xtables-lock\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841460 kubelet[2860]: I1216 16:18:39.840811 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-etc-cni-netd\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841460 kubelet[2860]: I1216 16:18:39.840838 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b8bef970-80d7-442c-843b-7b7102ef8c50-cilium-ipsec-secrets\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841833 kubelet[2860]: I1216 16:18:39.840875 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8fcm\" (UniqueName: \"kubernetes.io/projected/b8bef970-80d7-442c-843b-7b7102ef8c50-kube-api-access-c8fcm\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.841833 kubelet[2860]: I1216 16:18:39.840940 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8bef970-80d7-442c-843b-7b7102ef8c50-cilium-run\") pod \"cilium-sktgg\" (UID: \"b8bef970-80d7-442c-843b-7b7102ef8c50\") " pod="kube-system/cilium-sktgg" Dec 16 16:18:39.846662 sshd[4571]: Connection closed by 139.178.68.195 port 42100 Dec 16 16:18:39.846416 sshd-session[4568]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:39.852422 systemd[1]: sshd@23-10.230.59.10:22-139.178.68.195:42100.service: Deactivated successfully. 
Dec 16 16:18:39.856432 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 16:18:39.858165 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit. Dec 16 16:18:39.860330 systemd-logind[1575]: Removed session 25. Dec 16 16:18:40.007081 systemd[1]: Started sshd@24-10.230.59.10:22-139.178.68.195:42114.service - OpenSSH per-connection server daemon (139.178.68.195:42114). Dec 16 16:18:40.021972 containerd[1594]: time="2025-12-16T16:18:40.021900536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sktgg,Uid:b8bef970-80d7-442c-843b-7b7102ef8c50,Namespace:kube-system,Attempt:0,}" Dec 16 16:18:40.055919 containerd[1594]: time="2025-12-16T16:18:40.055308071Z" level=info msg="connecting to shim f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd" address="unix:///run/containerd/s/105ee8294ad8eb4dbc09f015148fde0345625fb7f2f572a64c283314af86b197" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:18:40.094327 systemd[1]: Started cri-containerd-f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd.scope - libcontainer container f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd. Dec 16 16:18:40.139256 containerd[1594]: time="2025-12-16T16:18:40.139201263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sktgg,Uid:b8bef970-80d7-442c-843b-7b7102ef8c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\"" Dec 16 16:18:40.147659 containerd[1594]: time="2025-12-16T16:18:40.147603085Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 16:18:40.163802 containerd[1594]: time="2025-12-16T16:18:40.163687383Z" level=info msg="Container d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:18:40.180951 containerd[1594]: time="2025-12-16T16:18:40.180888079Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae\"" Dec 16 16:18:40.182757 containerd[1594]: time="2025-12-16T16:18:40.182645183Z" level=info msg="StartContainer for \"d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae\"" Dec 16 16:18:40.185384 containerd[1594]: time="2025-12-16T16:18:40.185313411Z" level=info msg="connecting to shim d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae" address="unix:///run/containerd/s/105ee8294ad8eb4dbc09f015148fde0345625fb7f2f572a64c283314af86b197" protocol=ttrpc version=3 Dec 16 16:18:40.225258 systemd[1]: Started cri-containerd-d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae.scope - libcontainer container d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae. Dec 16 16:18:40.277351 containerd[1594]: time="2025-12-16T16:18:40.277222185Z" level=info msg="StartContainer for \"d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae\" returns successfully" Dec 16 16:18:40.303592 systemd[1]: cri-containerd-d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae.scope: Deactivated successfully. Dec 16 16:18:40.304721 systemd[1]: cri-containerd-d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae.scope: Consumed 39ms CPU time, 9.6M memory peak, 3M read from disk. 
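The sandbox f242ba288c52… and the containers created inside it are managed by containerd's CRI plugin in the k8s.io namespace, reached over the shim's ttrpc socket shown in the "connecting to shim" entries. The sketch below lists those containers with the upstream containerd Go client; the socket path and namespace are taken from the log, and this is an illustration of inspecting the same state, not what the kubelet itself does (the kubelet talks CRI gRPC instead).

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// The CRI plugin in this log runs pods in containerd's "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // e.g. the d54fb1dc…, a3a5ab1d… IDs created above
	}
}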
Dec 16 16:18:40.310585 containerd[1594]: time="2025-12-16T16:18:40.310528888Z" level=info msg="received container exit event container_id:\"d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae\" id:\"d54fb1dcddc4331a32a0aaba62ff6ae36712f227067f75d00b486de77c7ebdae\" pid:4651 exited_at:{seconds:1765901920 nanos:308515632}" Dec 16 16:18:40.379917 containerd[1594]: time="2025-12-16T16:18:40.379863585Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 16:18:40.403710 containerd[1594]: time="2025-12-16T16:18:40.403647176Z" level=info msg="Container a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:18:40.411359 containerd[1594]: time="2025-12-16T16:18:40.411293406Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc\"" Dec 16 16:18:40.412455 containerd[1594]: time="2025-12-16T16:18:40.412414872Z" level=info msg="StartContainer for \"a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc\"" Dec 16 16:18:40.415858 containerd[1594]: time="2025-12-16T16:18:40.413968547Z" level=info msg="connecting to shim a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc" address="unix:///run/containerd/s/105ee8294ad8eb4dbc09f015148fde0345625fb7f2f572a64c283314af86b197" protocol=ttrpc version=3 Dec 16 16:18:40.451645 systemd[1]: Started cri-containerd-a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc.scope - libcontainer container a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc. Dec 16 16:18:40.511916 containerd[1594]: time="2025-12-16T16:18:40.511864321Z" level=info msg="StartContainer for \"a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc\" returns successfully" Dec 16 16:18:40.530545 systemd[1]: cri-containerd-a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc.scope: Deactivated successfully. Dec 16 16:18:40.531926 systemd[1]: cri-containerd-a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc.scope: Consumed 34ms CPU time, 7.2M memory peak, 1.8M read from disk. Dec 16 16:18:40.535164 containerd[1594]: time="2025-12-16T16:18:40.535009325Z" level=info msg="received container exit event container_id:\"a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc\" id:\"a3a5ab1d7cf52cb5ee87db64eecd5304d8c17616b1821bda7ede4a109c88e1fc\" pid:4698 exited_at:{seconds:1765901920 nanos:533647343}" Dec 16 16:18:40.943077 sshd[4587]: Accepted publickey for core from 139.178.68.195 port 42114 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:40.945179 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:40.953743 systemd-logind[1575]: New session 26 of user core. Dec 16 16:18:40.965310 systemd[1]: Started session-26.scope - Session 26 of User core. 
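The container exit events carry the exit time as Unix seconds and nanoseconds, e.g. exited_at:{seconds:1765901920 nanos:308515632} for mount-cgroup above. Converting it back gives 2025-12-16T16:18:40.308515632Z, a couple of milliseconds before the containerd entry that reports it. A quick Go check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the mount-cgroup exit event above.
	t := time.Unix(1765901920, 308515632).UTC()
	// Prints 2025-12-16T16:18:40.308515632Z.
	fmt.Println(t.Format(time.RFC3339Nano))
}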
Dec 16 16:18:41.386327 containerd[1594]: time="2025-12-16T16:18:41.385999706Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 16:18:41.415547 containerd[1594]: time="2025-12-16T16:18:41.412147858Z" level=info msg="Container f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:18:41.435952 containerd[1594]: time="2025-12-16T16:18:41.435892107Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b\"" Dec 16 16:18:41.437508 containerd[1594]: time="2025-12-16T16:18:41.437427077Z" level=info msg="StartContainer for \"f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b\"" Dec 16 16:18:41.439503 containerd[1594]: time="2025-12-16T16:18:41.439467639Z" level=info msg="connecting to shim f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b" address="unix:///run/containerd/s/105ee8294ad8eb4dbc09f015148fde0345625fb7f2f572a64c283314af86b197" protocol=ttrpc version=3 Dec 16 16:18:41.475308 systemd[1]: Started cri-containerd-f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b.scope - libcontainer container f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b. Dec 16 16:18:41.574080 sshd[4732]: Connection closed by 139.178.68.195 port 42114 Dec 16 16:18:41.573858 sshd-session[4587]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:41.581343 systemd[1]: sshd@24-10.230.59.10:22-139.178.68.195:42114.service: Deactivated successfully. Dec 16 16:18:41.586900 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 16:18:41.591173 systemd-logind[1575]: Session 26 logged out. Waiting for processes to exit. Dec 16 16:18:41.595994 systemd-logind[1575]: Removed session 26. Dec 16 16:18:41.602607 systemd[1]: cri-containerd-f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b.scope: Deactivated successfully. Dec 16 16:18:41.605212 containerd[1594]: time="2025-12-16T16:18:41.605127721Z" level=info msg="StartContainer for \"f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b\" returns successfully" Dec 16 16:18:41.608057 containerd[1594]: time="2025-12-16T16:18:41.607521254Z" level=info msg="received container exit event container_id:\"f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b\" id:\"f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b\" pid:4748 exited_at:{seconds:1765901921 nanos:606683618}" Dec 16 16:18:41.734707 systemd[1]: Started sshd@25-10.230.59.10:22-139.178.68.195:34312.service - OpenSSH per-connection server daemon (139.178.68.195:34312). Dec 16 16:18:41.926744 kubelet[2860]: E1216 16:18:41.926663 2860 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 16 16:18:41.968687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f57aa4f6d7ff67d6ee1ab0045702bc9a28b437111ab29d63494b51143096907b-rootfs.mount: Deactivated successfully. 
Dec 16 16:18:42.394824 containerd[1594]: time="2025-12-16T16:18:42.394702578Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 16:18:42.409756 containerd[1594]: time="2025-12-16T16:18:42.408404124Z" level=info msg="Container 101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:18:42.441390 containerd[1594]: time="2025-12-16T16:18:42.441333042Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75\"" Dec 16 16:18:42.442795 containerd[1594]: time="2025-12-16T16:18:42.442748713Z" level=info msg="StartContainer for \"101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75\"" Dec 16 16:18:42.444679 containerd[1594]: time="2025-12-16T16:18:42.444455416Z" level=info msg="connecting to shim 101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75" address="unix:///run/containerd/s/105ee8294ad8eb4dbc09f015148fde0345625fb7f2f572a64c283314af86b197" protocol=ttrpc version=3 Dec 16 16:18:42.488363 systemd[1]: Started cri-containerd-101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75.scope - libcontainer container 101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75. Dec 16 16:18:42.540849 systemd[1]: cri-containerd-101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75.scope: Deactivated successfully. Dec 16 16:18:42.543460 containerd[1594]: time="2025-12-16T16:18:42.543311497Z" level=info msg="received container exit event container_id:\"101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75\" id:\"101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75\" pid:4799 exited_at:{seconds:1765901922 nanos:543012967}" Dec 16 16:18:42.555881 containerd[1594]: time="2025-12-16T16:18:42.555690071Z" level=info msg="StartContainer for \"101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75\" returns successfully" Dec 16 16:18:42.586440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-101c1134baf9599b0a52225c4b9f39f6f5effb411e3987ad9e7f19dbb95e5d75-rootfs.mount: Deactivated successfully. Dec 16 16:18:42.650628 sshd[4783]: Accepted publickey for core from 139.178.68.195 port 34312 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:18:42.653025 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:18:42.661790 systemd-logind[1575]: New session 27 of user core. Dec 16 16:18:42.668284 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 16:18:43.407910 containerd[1594]: time="2025-12-16T16:18:43.407136227Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 16 16:18:43.421191 containerd[1594]: time="2025-12-16T16:18:43.421147835Z" level=info msg="Container db00d2099d1337dc4d162aa7f7e2e46b82559570c145f9d6b460f2d44e2d076a: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:18:43.433728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399576410.mount: Deactivated successfully. 
Dec 16 16:18:43.443050 containerd[1594]: time="2025-12-16T16:18:43.442898110Z" level=info msg="CreateContainer within sandbox \"f242ba288c52ef0a90bdc8b1a72609f9b90924007edcf6ee1e3b28cb69d6a2cd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"db00d2099d1337dc4d162aa7f7e2e46b82559570c145f9d6b460f2d44e2d076a\"" Dec 16 16:18:43.444057 containerd[1594]: time="2025-12-16T16:18:43.443975816Z" level=info msg="StartContainer for \"db00d2099d1337dc4d162aa7f7e2e46b82559570c145f9d6b460f2d44e2d076a\"" Dec 16 16:18:43.447277 containerd[1594]: time="2025-12-16T16:18:43.447209300Z" level=info msg="connecting to shim db00d2099d1337dc4d162aa7f7e2e46b82559570c145f9d6b460f2d44e2d076a" address="unix:///run/containerd/s/105ee8294ad8eb4dbc09f015148fde0345625fb7f2f572a64c283314af86b197" protocol=ttrpc version=3 Dec 16 16:18:43.489284 systemd[1]: Started cri-containerd-db00d2099d1337dc4d162aa7f7e2e46b82559570c145f9d6b460f2d44e2d076a.scope - libcontainer container db00d2099d1337dc4d162aa7f7e2e46b82559570c145f9d6b460f2d44e2d076a. Dec 16 16:18:43.565201 containerd[1594]: time="2025-12-16T16:18:43.565145537Z" level=info msg="StartContainer for \"db00d2099d1337dc4d162aa7f7e2e46b82559570c145f9d6b460f2d44e2d076a\" returns successfully" Dec 16 16:18:44.390549 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Dec 16 16:18:44.456759 kubelet[2860]: I1216 16:18:44.455213 2860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sktgg" podStartSLOduration=5.455169346 podStartE2EDuration="5.455169346s" podCreationTimestamp="2025-12-16 16:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:18:44.453403517 +0000 UTC m=+153.035792326" watchObservedRunningTime="2025-12-16 16:18:44.455169346 +0000 UTC m=+153.037558158" Dec 16 16:18:45.114109 kubelet[2860]: I1216 16:18:45.111764 2860 setters.go:543] "Node became not ready" node="srv-899vz.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T16:18:45Z","lastTransitionTime":"2025-12-16T16:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 16 16:18:48.376223 systemd-networkd[1483]: lxc_health: Link UP Dec 16 16:18:48.392586 systemd-networkd[1483]: lxc_health: Gained carrier Dec 16 16:18:49.600312 systemd-networkd[1483]: lxc_health: Gained IPv6LL Dec 16 16:18:54.886078 sshd[4825]: Connection closed by 139.178.68.195 port 34312 Dec 16 16:18:54.887359 sshd-session[4783]: pam_unix(sshd:session): session closed for user core Dec 16 16:18:54.914084 systemd[1]: sshd@25-10.230.59.10:22-139.178.68.195:34312.service: Deactivated successfully. Dec 16 16:18:54.919806 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 16:18:54.925201 systemd-logind[1575]: Session 27 logged out. Waiting for processes to exit. Dec 16 16:18:54.927053 systemd-logind[1575]: Removed session 27.
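The pod_startup_latency_tracker entry for cilium-sktgg reports podStartSLOduration=5.455169346 with zero-valued image-pull timestamps, which is consistent with simply subtracting podCreationTimestamp (16:18:39) from watchObservedRunningTime (16:18:44.455169346); with nothing pulled, the SLO duration equals the end-to-end duration as well. A quick check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-12-16T16:18:39Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-12-16T16:18:44.455169346Z")
	// Prints 5.455169346s, the podStartSLOduration reported above.
	fmt.Println(running.Sub(created))
}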