Jan 24 02:59:03.029272 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 02:59:03.029311 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 02:59:03.029324 kernel: BIOS-provided physical RAM map:
Jan 24 02:59:03.029340 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 02:59:03.029350 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 02:59:03.029359 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 02:59:03.029370 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 24 02:59:03.029380 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 24 02:59:03.029411 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 02:59:03.029422 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 02:59:03.029432 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 02:59:03.029442 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 02:59:03.029466 kernel: NX (Execute Disable) protection: active
Jan 24 02:59:03.029477 kernel: APIC: Static calls initialized
Jan 24 02:59:03.029489 kernel: SMBIOS 2.8 present.
Jan 24 02:59:03.029514 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
Jan 24 02:59:03.029526 kernel: Hypervisor detected: KVM
Jan 24 02:59:03.029543 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 02:59:03.029555 kernel: kvm-clock: using sched offset of 5203886864 cycles
Jan 24 02:59:03.029566 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 02:59:03.029577 kernel: tsc: Detected 2799.998 MHz processor
Jan 24 02:59:03.029588 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 02:59:03.029600 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 02:59:03.029610 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 24 02:59:03.029621 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 02:59:03.029632 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 02:59:03.029648 kernel: Using GB pages for direct mapping
Jan 24 02:59:03.029659 kernel: ACPI: Early table checksum verification disabled
Jan 24 02:59:03.029670 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS )
Jan 24 02:59:03.029681 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:59:03.032010 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:59:03.032025 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:59:03.032037 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 24 02:59:03.032048 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:59:03.032060 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:59:03.032080 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:59:03.032091 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 02:59:03.032103 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 24 02:59:03.032114 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 24 02:59:03.032125 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 24 02:59:03.032144 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 24 02:59:03.032155 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 24 02:59:03.032172 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 24 02:59:03.032183 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 24 02:59:03.032195 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 24 02:59:03.032230 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 24 02:59:03.032245 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 24 02:59:03.032256 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 24 02:59:03.032268 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 24 02:59:03.032279 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 24 02:59:03.032298 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 24 02:59:03.032309 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 24 02:59:03.032320 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 24 02:59:03.032332 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 24 02:59:03.032343 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 24 02:59:03.032354 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 24 02:59:03.032366 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 24 02:59:03.032377 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 24 02:59:03.032416 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 24 02:59:03.032437 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 24 02:59:03.032449 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 24 02:59:03.032460 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 24 02:59:03.032472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 24 02:59:03.032484 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 24 02:59:03.032495 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 24 02:59:03.032518 kernel: Zone ranges:
Jan 24 02:59:03.032530 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 02:59:03.032542 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 24 02:59:03.032559 kernel: Normal empty
Jan 24 02:59:03.032571 kernel: Movable zone start for each node
Jan 24 02:59:03.032582 kernel: Early memory node ranges
Jan 24 02:59:03.032593 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 02:59:03.032605 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 24 02:59:03.032616 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 24 02:59:03.032628 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 02:59:03.032639 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 02:59:03.032656 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 24 02:59:03.032669 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 02:59:03.032687 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 02:59:03.032699 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 02:59:03.032711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 02:59:03.032722 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 02:59:03.032734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 02:59:03.032745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 02:59:03.032757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 02:59:03.032768 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 02:59:03.032779 kernel: TSC deadline timer available
Jan 24 02:59:03.032796 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 24 02:59:03.032808 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 02:59:03.032819 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 02:59:03.032831 kernel: Booting paravirtualized kernel on KVM
Jan 24 02:59:03.032843 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 02:59:03.032855 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 24 02:59:03.032866 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Jan 24 02:59:03.032878 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Jan 24 02:59:03.032890 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 24 02:59:03.032907 kernel: kvm-guest: PV spinlocks enabled
Jan 24 02:59:03.032918 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 02:59:03.032931 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 02:59:03.032943 kernel: random: crng init done
Jan 24 02:59:03.032955 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 02:59:03.032966 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 24 02:59:03.032978 kernel: Fallback order for Node 0: 0
Jan 24 02:59:03.032989 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 24 02:59:03.033006 kernel: Policy zone: DMA32
Jan 24 02:59:03.033023 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 02:59:03.033036 kernel: software IO TLB: area num 16.
Jan 24 02:59:03.033047 kernel: Memory: 1901596K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194760K reserved, 0K cma-reserved)
Jan 24 02:59:03.033059 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 24 02:59:03.033071 kernel: Kernel/User page tables isolation: enabled
Jan 24 02:59:03.033082 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 02:59:03.033094 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 02:59:03.033105 kernel: Dynamic Preempt: voluntary
Jan 24 02:59:03.033123 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 02:59:03.033136 kernel: rcu: RCU event tracing is enabled.
Jan 24 02:59:03.033148 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 24 02:59:03.033160 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 02:59:03.033172 kernel: Rude variant of Tasks RCU enabled.
Jan 24 02:59:03.033198 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 02:59:03.033215 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 02:59:03.033228 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 24 02:59:03.033239 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 24 02:59:03.033252 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 02:59:03.033263 kernel: Console: colour VGA+ 80x25
Jan 24 02:59:03.033280 kernel: printk: console [tty0] enabled
Jan 24 02:59:03.033293 kernel: printk: console [ttyS0] enabled
Jan 24 02:59:03.033305 kernel: ACPI: Core revision 20230628
Jan 24 02:59:03.033317 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 02:59:03.033330 kernel: x2apic enabled
Jan 24 02:59:03.033342 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 02:59:03.033359 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 24 02:59:03.033377 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 24 02:59:03.033405 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 02:59:03.033419 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 24 02:59:03.033431 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 24 02:59:03.033443 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 02:59:03.033455 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 02:59:03.033467 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 02:59:03.033479 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 24 02:59:03.033498 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 24 02:59:03.033522 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 24 02:59:03.033534 kernel: MDS: Mitigation: Clear CPU buffers
Jan 24 02:59:03.033545 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 24 02:59:03.033557 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 24 02:59:03.033569 kernel: active return thunk: its_return_thunk
Jan 24 02:59:03.033581 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 24 02:59:03.033593 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 02:59:03.033604 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 02:59:03.033616 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 02:59:03.033628 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 02:59:03.033646 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 24 02:59:03.033658 kernel: Freeing SMP alternatives memory: 32K
Jan 24 02:59:03.033676 kernel: pid_max: default: 32768 minimum: 301
Jan 24 02:59:03.033689 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 02:59:03.033701 kernel: landlock: Up and running.
Jan 24 02:59:03.033713 kernel: SELinux: Initializing.
Jan 24 02:59:03.033725 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 02:59:03.033737 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 02:59:03.033750 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 24 02:59:03.033762 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 24 02:59:03.033774 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 24 02:59:03.033793 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 24 02:59:03.033805 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 24 02:59:03.033817 kernel: signal: max sigframe size: 1776
Jan 24 02:59:03.033829 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 02:59:03.033842 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 02:59:03.033854 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 02:59:03.033866 kernel: smp: Bringing up secondary CPUs ...
Jan 24 02:59:03.033878 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 02:59:03.033890 kernel: .... node #0, CPUs: #1
Jan 24 02:59:03.033907 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 24 02:59:03.033920 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 02:59:03.033932 kernel: smpboot: Max logical packages: 16
Jan 24 02:59:03.033944 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 24 02:59:03.033956 kernel: devtmpfs: initialized
Jan 24 02:59:03.033968 kernel: x86/mm: Memory block size: 128MB
Jan 24 02:59:03.033980 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 02:59:03.033992 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 24 02:59:03.034004 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 02:59:03.034022 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 02:59:03.034035 kernel: audit: initializing netlink subsys (disabled)
Jan 24 02:59:03.034047 kernel: audit: type=2000 audit(1769223540.932:1): state=initialized audit_enabled=0 res=1
Jan 24 02:59:03.034059 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 02:59:03.034071 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 02:59:03.034083 kernel: cpuidle: using governor menu
Jan 24 02:59:03.034095 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 02:59:03.034107 kernel: dca service started, version 1.12.1
Jan 24 02:59:03.034119 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 02:59:03.034136 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 02:59:03.034149 kernel: PCI: Using configuration type 1 for base access
Jan 24 02:59:03.034161 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 02:59:03.034173 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 02:59:03.034186 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 02:59:03.034198 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 02:59:03.034210 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 02:59:03.034222 kernel: ACPI: Added _OSI(Module Device)
Jan 24 02:59:03.034234 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 02:59:03.034251 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 02:59:03.034264 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 02:59:03.034276 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 02:59:03.034288 kernel: ACPI: Interpreter enabled
Jan 24 02:59:03.034300 kernel: ACPI: PM: (supports S0 S5)
Jan 24 02:59:03.034311 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 02:59:03.034324 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 02:59:03.034336 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 02:59:03.034348 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 02:59:03.034366 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 02:59:03.035724 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 02:59:03.035924 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 24 02:59:03.036096 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 24 02:59:03.036120 kernel: PCI host bridge to bus 0000:00
Jan 24 02:59:03.036339 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 02:59:03.036535 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 02:59:03.036705 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 02:59:03.036861 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 24 02:59:03.037017 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 02:59:03.037182 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 24 02:59:03.037348 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 02:59:03.039647 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 02:59:03.039861 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 24 02:59:03.040036 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 24 02:59:03.040207 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 24 02:59:03.041485 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 24 02:59:03.041696 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 02:59:03.041886 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 24 02:59:03.042060 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 24 02:59:03.042261 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 24 02:59:03.043547 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 24 02:59:03.043741 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 24 02:59:03.043913 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 24 02:59:03.044103 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 24 02:59:03.044273 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 24 02:59:03.048553 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 24 02:59:03.048744 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 24 02:59:03.048984 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 24 02:59:03.049161 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 24 02:59:03.049358 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 24 02:59:03.049569 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 24 02:59:03.049762 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 24 02:59:03.049932 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 24 02:59:03.050122 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 24 02:59:03.050295 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df]
Jan 24 02:59:03.050483 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 24 02:59:03.050669 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 24 02:59:03.050840 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 24 02:59:03.051046 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 24 02:59:03.051228 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f]
Jan 24 02:59:03.051435 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 24 02:59:03.051638 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 24 02:59:03.051841 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 02:59:03.052014 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 02:59:03.052205 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 02:59:03.052401 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff]
Jan 24 02:59:03.052605 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 24 02:59:03.052799 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 02:59:03.052977 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 02:59:03.053198 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 24 02:59:03.053472 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 24 02:59:03.053680 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 24 02:59:03.053849 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 24 02:59:03.054015 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 24 02:59:03.054183 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 02:59:03.054376 kernel: pci_bus 0000:02: extended config space not accessible
Jan 24 02:59:03.054618 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 24 02:59:03.054812 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 24 02:59:03.054989 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 24 02:59:03.055164 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 24 02:59:03.055338 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 24 02:59:03.055549 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 02:59:03.055747 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 24 02:59:03.055920 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 24 02:59:03.056100 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 24 02:59:03.056266 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 24 02:59:03.056461 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 24 02:59:03.056677 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 24 02:59:03.056860 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 24 02:59:03.057037 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 24 02:59:03.057208 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 24 02:59:03.057462 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 24 02:59:03.057657 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 24 02:59:03.057826 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 24 02:59:03.058023 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 24 02:59:03.058197 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 24 02:59:03.058364 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 24 02:59:03.058580 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 24 02:59:03.058752 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 24 02:59:03.058929 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 24 02:59:03.059096 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 24 02:59:03.059270 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 24 02:59:03.059487 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 24 02:59:03.059667 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 24 02:59:03.059839 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 24 02:59:03.060006 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 24 02:59:03.060173 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 24 02:59:03.060200 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 02:59:03.060214 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 02:59:03.060227 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 02:59:03.060239 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 02:59:03.060251 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 02:59:03.060263 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 02:59:03.060275 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 02:59:03.060288 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 02:59:03.060300 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 02:59:03.060318 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 02:59:03.060330 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 02:59:03.060343 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 02:59:03.060355 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 02:59:03.060367 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 02:59:03.060379 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 02:59:03.060414 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 02:59:03.060428 kernel: iommu: Default domain type: Translated
Jan 24 02:59:03.060441 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 02:59:03.060460 kernel: PCI: Using ACPI for IRQ routing
Jan 24 02:59:03.060472 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 02:59:03.060485 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 02:59:03.060498 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 24 02:59:03.060681 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 02:59:03.060849 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 02:59:03.061017 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 02:59:03.061037 kernel: vgaarb: loaded
Jan 24 02:59:03.061050 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 02:59:03.061070 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 02:59:03.061083 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 02:59:03.061095 kernel: pnp: PnP ACPI init
Jan 24 02:59:03.061293 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 02:59:03.061314 kernel: pnp: PnP ACPI: found 5 devices
Jan 24 02:59:03.061327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 02:59:03.061339 kernel: NET: Registered PF_INET protocol family
Jan 24 02:59:03.061352 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 02:59:03.061372 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 24 02:59:03.061384 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 02:59:03.061441 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 02:59:03.061453 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 24 02:59:03.061466 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 24 02:59:03.061478 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 02:59:03.061490 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 02:59:03.061514 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 02:59:03.061528 kernel: NET: Registered PF_XDP protocol family
Jan 24 02:59:03.061703 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 24 02:59:03.061871 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 24 02:59:03.062041 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 24 02:59:03.062209 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 24 02:59:03.062377 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 24 02:59:03.062574 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 24 02:59:03.062750 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 24 02:59:03.062917 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff]
Jan 24 02:59:03.063084 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff]
Jan 24 02:59:03.063253 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff]
Jan 24 02:59:03.063450 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff]
Jan 24 02:59:03.063633 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff]
Jan 24 02:59:03.063854 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff]
Jan 24 02:59:03.064039 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff]
Jan 24 02:59:03.064250 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 24 02:59:03.064489 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 24 02:59:03.064679 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 24 02:59:03.064851 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 02:59:03.065021 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 24 02:59:03.065188 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 24 02:59:03.065355 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 24 02:59:03.065549 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 02:59:03.065762 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 24 02:59:03.065935 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]
Jan 24 02:59:03.066112 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 24 02:59:03.066281 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 24 02:59:03.066491 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 24 02:59:03.066679 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]
Jan 24 02:59:03.066862 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 24 02:59:03.067037 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 24 02:59:03.067210 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 24 02:59:03.067456 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]
Jan 24 02:59:03.067677 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 24 02:59:03.067853 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 24 02:59:03.068030 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 24 02:59:03.068197 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]
Jan 24 02:59:03.068364 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 24 02:59:03.068582 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 24 02:59:03.068786 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 24 02:59:03.068969 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]
Jan 24 02:59:03.069136 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 24 02:59:03.069304 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 24 02:59:03.069617 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 24 02:59:03.069786 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]
Jan 24 02:59:03.069956 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 24 02:59:03.070122 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 24 02:59:03.070294 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 24 02:59:03.070511 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]
Jan 24 02:59:03.070690 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 24 02:59:03.070858 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 24 02:59:03.071057 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 02:59:03.071221 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 02:59:03.071373 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 02:59:03.071678 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 24 02:59:03.071834 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 02:59:03.071996 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 24 02:59:03.072189 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
Jan 24 02:59:03.072351 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 24 02:59:03.072566 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff
64bit pref] Jan 24 02:59:03.072743 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff] Jan 24 02:59:03.072911 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 24 02:59:03.073079 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 24 02:59:03.073290 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff] Jan 24 02:59:03.073522 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 24 02:59:03.073716 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 24 02:59:03.073899 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff] Jan 24 02:59:03.074058 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 24 02:59:03.074217 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 24 02:59:03.074429 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff] Jan 24 02:59:03.074608 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 24 02:59:03.074769 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 24 02:59:03.074954 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff] Jan 24 02:59:03.075125 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 24 02:59:03.075285 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 24 02:59:03.075536 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff] Jan 24 02:59:03.075701 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 24 02:59:03.075858 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 24 02:59:03.076027 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff] Jan 24 02:59:03.076184 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 24 02:59:03.076350 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 24 02:59:03.076569 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Jan 24 02:59:03.076743 kernel: pci_bus 0000:09: resource 1 [mem 
0xfdc00000-0xfddfffff] Jan 24 02:59:03.076904 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 24 02:59:03.076926 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 02:59:03.076939 kernel: PCI: CLS 0 bytes, default 64 Jan 24 02:59:03.076961 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 02:59:03.076975 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 24 02:59:03.076988 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 02:59:03.077002 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 24 02:59:03.077015 kernel: Initialise system trusted keyrings Jan 24 02:59:03.077028 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 24 02:59:03.077041 kernel: Key type asymmetric registered Jan 24 02:59:03.077054 kernel: Asymmetric key parser 'x509' registered Jan 24 02:59:03.077066 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 02:59:03.077085 kernel: io scheduler mq-deadline registered Jan 24 02:59:03.077098 kernel: io scheduler kyber registered Jan 24 02:59:03.077111 kernel: io scheduler bfq registered Jan 24 02:59:03.077286 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 24 02:59:03.077521 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 24 02:59:03.077694 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:59:03.077881 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 24 02:59:03.078051 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 24 02:59:03.078230 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:59:03.078430 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 
Jan 24 02:59:03.078614 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 24 02:59:03.078785 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:59:03.078964 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 24 02:59:03.079139 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 24 02:59:03.079325 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:59:03.079557 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 24 02:59:03.079726 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 24 02:59:03.079948 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:59:03.080121 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 24 02:59:03.080288 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 24 02:59:03.080496 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:59:03.080742 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 24 02:59:03.080912 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 24 02:59:03.081080 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:59:03.081287 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 24 02:59:03.081557 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 24 02:59:03.081738 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 02:59:03.081759 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 
02:59:03.081773 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 02:59:03.081786 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 02:59:03.081799 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 02:59:03.081813 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 02:59:03.081826 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 02:59:03.081839 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 02:59:03.081860 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 02:59:03.081873 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 02:59:03.082063 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 02:59:03.082223 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 02:59:03.082379 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T02:59:02 UTC (1769223542) Jan 24 02:59:03.082581 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 24 02:59:03.082602 kernel: intel_pstate: CPU model not supported Jan 24 02:59:03.082623 kernel: NET: Registered PF_INET6 protocol family Jan 24 02:59:03.082637 kernel: Segment Routing with IPv6 Jan 24 02:59:03.082650 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 02:59:03.082662 kernel: NET: Registered PF_PACKET protocol family Jan 24 02:59:03.082676 kernel: Key type dns_resolver registered Jan 24 02:59:03.082688 kernel: IPI shorthand broadcast: enabled Jan 24 02:59:03.082702 kernel: sched_clock: Marking stable (1457004039, 215832140)->(1818469698, -145633519) Jan 24 02:59:03.082714 kernel: registered taskstats version 1 Jan 24 02:59:03.082727 kernel: Loading compiled-in X.509 certificates Jan 24 02:59:03.082740 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 02:59:03.082759 kernel: Key type .fscrypt registered Jan 24 02:59:03.082771 kernel: Key type fscrypt-provisioning registered Jan 24 
02:59:03.082784 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 02:59:03.082796 kernel: ima: Allocated hash algorithm: sha1 Jan 24 02:59:03.082809 kernel: ima: No architecture policies found Jan 24 02:59:03.082822 kernel: clk: Disabling unused clocks Jan 24 02:59:03.082835 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 02:59:03.082848 kernel: Write protecting the kernel read-only data: 36864k Jan 24 02:59:03.082861 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 02:59:03.082879 kernel: Run /init as init process Jan 24 02:59:03.082893 kernel: with arguments: Jan 24 02:59:03.082906 kernel: /init Jan 24 02:59:03.082919 kernel: with environment: Jan 24 02:59:03.082931 kernel: HOME=/ Jan 24 02:59:03.082978 kernel: TERM=linux Jan 24 02:59:03.082995 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 02:59:03.083011 systemd[1]: Detected virtualization kvm. Jan 24 02:59:03.083032 systemd[1]: Detected architecture x86-64. Jan 24 02:59:03.083046 systemd[1]: Running in initrd. Jan 24 02:59:03.083059 systemd[1]: No hostname configured, using default hostname. Jan 24 02:59:03.083072 systemd[1]: Hostname set to . Jan 24 02:59:03.083086 systemd[1]: Initializing machine ID from VM UUID. Jan 24 02:59:03.083100 systemd[1]: Queued start job for default target initrd.target. Jan 24 02:59:03.083114 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 02:59:03.083128 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 24 02:59:03.083148 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 02:59:03.083162 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 02:59:03.083176 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 02:59:03.083190 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 02:59:03.083205 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 02:59:03.083219 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 02:59:03.083238 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 02:59:03.083252 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 02:59:03.083266 systemd[1]: Reached target paths.target - Path Units. Jan 24 02:59:03.083280 systemd[1]: Reached target slices.target - Slice Units. Jan 24 02:59:03.083293 systemd[1]: Reached target swap.target - Swaps. Jan 24 02:59:03.083312 systemd[1]: Reached target timers.target - Timer Units. Jan 24 02:59:03.083326 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 02:59:03.083339 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 02:59:03.083353 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 02:59:03.083373 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 02:59:03.083424 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 02:59:03.083443 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 02:59:03.083457 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 24 02:59:03.083471 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 02:59:03.083484 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 02:59:03.083507 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 02:59:03.083524 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 02:59:03.083538 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 02:59:03.083559 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 02:59:03.083573 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 02:59:03.083587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 02:59:03.083600 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 02:59:03.083723 systemd-journald[203]: Collecting audit messages is disabled. Jan 24 02:59:03.083775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 02:59:03.083790 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 02:59:03.083805 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 02:59:03.083827 systemd-journald[203]: Journal started Jan 24 02:59:03.083854 systemd-journald[203]: Runtime Journal (/run/log/journal/3936b6af8f4b4a478316179f3ccbb363) is 4.7M, max 38.0M, 33.2M free. Jan 24 02:59:03.038612 systemd-modules-load[204]: Inserted module 'overlay' Jan 24 02:59:03.136045 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 02:59:03.136081 kernel: Bridge firewalling registered Jan 24 02:59:03.136101 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 02:59:03.093261 systemd-modules-load[204]: Inserted module 'br_netfilter' Jan 24 02:59:03.138129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 24 02:59:03.143977 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 02:59:03.147244 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 02:59:03.154783 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 02:59:03.158602 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 02:59:03.161301 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 02:59:03.172677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 02:59:03.188450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 02:59:03.193571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 02:59:03.198793 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 02:59:03.199927 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 02:59:03.205605 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 02:59:03.210628 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 02:59:03.227766 dracut-cmdline[238]: dracut-dracut-053 Jan 24 02:59:03.233420 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 02:59:03.266027 systemd-resolved[239]: Positive Trust Anchors: Jan 24 02:59:03.266047 systemd-resolved[239]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 02:59:03.266090 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 02:59:03.270247 systemd-resolved[239]: Defaulting to hostname 'linux'. Jan 24 02:59:03.272633 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 02:59:03.274579 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 02:59:03.343446 kernel: SCSI subsystem initialized Jan 24 02:59:03.355447 kernel: Loading iSCSI transport class v2.0-870. Jan 24 02:59:03.368420 kernel: iscsi: registered transport (tcp) Jan 24 02:59:03.394576 kernel: iscsi: registered transport (qla4xxx) Jan 24 02:59:03.394690 kernel: QLogic iSCSI HBA Driver Jan 24 02:59:03.451170 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 02:59:03.460669 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 02:59:03.491371 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 02:59:03.491509 kernel: device-mapper: uevent: version 1.0.3 Jan 24 02:59:03.494422 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 02:59:03.541459 kernel: raid6: sse2x4 gen() 7825 MB/s Jan 24 02:59:03.559442 kernel: raid6: sse2x2 gen() 5642 MB/s Jan 24 02:59:03.582079 kernel: raid6: sse2x1 gen() 5630 MB/s Jan 24 02:59:03.582374 kernel: raid6: using algorithm sse2x4 gen() 7825 MB/s Jan 24 02:59:03.600998 kernel: raid6: .... xor() 8063 MB/s, rmw enabled Jan 24 02:59:03.601188 kernel: raid6: using ssse3x2 recovery algorithm Jan 24 02:59:03.635449 kernel: xor: automatically using best checksumming function avx Jan 24 02:59:03.827447 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 02:59:03.845133 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 02:59:03.852692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 02:59:03.882685 systemd-udevd[422]: Using default interface naming scheme 'v255'. Jan 24 02:59:03.889410 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 02:59:03.898028 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 02:59:03.920193 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jan 24 02:59:03.961287 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 02:59:03.968617 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 02:59:04.087904 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 02:59:04.096729 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 02:59:04.126996 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 02:59:04.128903 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 24 02:59:04.130574 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 02:59:04.132543 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 02:59:04.141849 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 02:59:04.176312 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 02:59:04.209410 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 24 02:59:04.215412 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 02:59:04.222666 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 24 02:59:04.244076 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 02:59:04.244142 kernel: GPT:17805311 != 125829119 Jan 24 02:59:04.244161 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 02:59:04.244178 kernel: GPT:17805311 != 125829119 Jan 24 02:59:04.244194 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 02:59:04.244210 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:59:04.253330 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 02:59:04.253541 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 02:59:04.254786 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 02:59:04.255513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 02:59:04.255700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 02:59:04.256431 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 02:59:04.263721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 02:59:04.270709 kernel: AVX version of gcm_enc/dec engaged. 
Jan 24 02:59:04.270741 kernel: AES CTR mode by8 optimization enabled Jan 24 02:59:04.301415 kernel: ACPI: bus type USB registered Jan 24 02:59:04.301489 kernel: usbcore: registered new interface driver usbfs Jan 24 02:59:04.301511 kernel: usbcore: registered new interface driver hub Jan 24 02:59:04.301529 kernel: usbcore: registered new device driver usb Jan 24 02:59:04.322409 kernel: libata version 3.00 loaded. Jan 24 02:59:04.353713 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 02:59:04.354074 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 02:59:04.355420 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 24 02:59:04.355696 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 24 02:59:04.355915 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 24 02:59:04.356414 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 02:59:04.356703 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 02:59:04.359442 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 24 02:59:04.359699 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 24 02:59:04.359909 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 24 02:59:04.360684 kernel: hub 1-0:1.0: USB hub found Jan 24 02:59:04.360927 kernel: hub 1-0:1.0: 4 ports detected Jan 24 02:59:04.361131 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 24 02:59:04.361416 kernel: hub 2-0:1.0: USB hub found Jan 24 02:59:04.361666 kernel: hub 2-0:1.0: 4 ports detected Jan 24 02:59:04.366852 kernel: scsi host0: ahci Jan 24 02:59:04.369416 kernel: scsi host1: ahci Jan 24 02:59:04.369825 kernel: scsi host2: ahci Jan 24 02:59:04.370497 kernel: scsi host3: ahci Jan 24 02:59:04.373647 kernel: scsi host4: ahci Jan 24 02:59:04.374425 kernel: scsi host5: ahci Jan 24 02:59:04.374671 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 24 02:59:04.374693 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 24 02:59:04.374710 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 24 02:59:04.374726 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 24 02:59:04.374742 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 24 02:59:04.374758 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 24 02:59:04.443047 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 02:59:04.446626 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (472) Jan 24 02:59:04.451448 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Jan 24 02:59:04.456997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 02:59:04.465970 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 02:59:04.472895 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 02:59:04.474497 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jan 24 02:59:04.481079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 02:59:04.487588 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 02:59:04.492262 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 02:59:04.507709 disk-uuid[563]: Primary Header is updated. Jan 24 02:59:04.507709 disk-uuid[563]: Secondary Entries is updated. Jan 24 02:59:04.507709 disk-uuid[563]: Secondary Header is updated. Jan 24 02:59:04.515421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:59:04.522429 kernel: GPT:disk_guids don't match. Jan 24 02:59:04.522483 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 02:59:04.522504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:59:04.530818 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 02:59:04.535451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:59:04.605061 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 24 02:59:04.686607 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 02:59:04.686687 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 02:59:04.689080 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 02:59:04.690430 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 02:59:04.692216 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 02:59:04.694420 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 02:59:04.754419 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 02:59:04.762826 kernel: usbcore: registered new interface driver usbhid Jan 24 02:59:04.762899 kernel: usbhid: USB HID core driver Jan 24 02:59:04.770109 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 24 02:59:04.770158 kernel: hid-generic 0003:0627:0001.0001: 
input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 24 02:59:05.533474 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 02:59:05.534084 disk-uuid[565]: The operation has completed successfully. Jan 24 02:59:05.581462 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 02:59:05.581625 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 02:59:05.608616 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 02:59:05.621371 sh[587]: Success Jan 24 02:59:05.637426 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 24 02:59:05.700939 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 02:59:05.715523 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 02:59:05.718241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 02:59:05.741672 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 02:59:05.741737 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 02:59:05.743664 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 02:59:05.745730 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 02:59:05.748243 kernel: BTRFS info (device dm-0): using free space tree Jan 24 02:59:05.758291 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 02:59:05.760024 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 02:59:05.772646 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 02:59:05.777602 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 24 02:59:05.806892 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 02:59:05.806960 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 02:59:05.808635 kernel: BTRFS info (device vda6): using free space tree Jan 24 02:59:05.816432 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 02:59:05.828869 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 02:59:05.832767 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 02:59:05.838073 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 02:59:05.845639 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 02:59:05.945558 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 02:59:05.962730 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 02:59:06.000850 systemd-networkd[769]: lo: Link UP Jan 24 02:59:06.001917 systemd-networkd[769]: lo: Gained carrier Jan 24 02:59:06.005491 systemd-networkd[769]: Enumeration completed Jan 24 02:59:06.006340 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 02:59:06.007299 systemd[1]: Reached target network.target - Network. Jan 24 02:59:06.008133 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 02:59:06.008139 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 02:59:06.012332 systemd-networkd[769]: eth0: Link UP Jan 24 02:59:06.012343 systemd-networkd[769]: eth0: Gained carrier Jan 24 02:59:06.012363 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 24 02:59:06.053551 systemd-networkd[769]: eth0: DHCPv4 address 10.243.72.22/30, gateway 10.243.72.21 acquired from 10.243.72.21
Jan 24 02:59:06.087977 ignition[698]: Ignition 2.19.0
Jan 24 02:59:06.088002 ignition[698]: Stage: fetch-offline
Jan 24 02:59:06.088071 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Jan 24 02:59:06.090801 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 02:59:06.088090 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:59:06.088248 ignition[698]: parsed url from cmdline: ""
Jan 24 02:59:06.088254 ignition[698]: no config URL provided
Jan 24 02:59:06.088264 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 02:59:06.088280 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Jan 24 02:59:06.088289 ignition[698]: failed to fetch config: resource requires networking
Jan 24 02:59:06.088830 ignition[698]: Ignition finished successfully
Jan 24 02:59:06.099655 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 24 02:59:06.123551 ignition[778]: Ignition 2.19.0
Jan 24 02:59:06.123570 ignition[778]: Stage: fetch
Jan 24 02:59:06.123921 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 24 02:59:06.123949 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:59:06.124137 ignition[778]: parsed url from cmdline: ""
Jan 24 02:59:06.124144 ignition[778]: no config URL provided
Jan 24 02:59:06.124154 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 02:59:06.124171 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Jan 24 02:59:06.124324 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 24 02:59:06.126855 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 24 02:59:06.126900 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 24 02:59:06.145066 ignition[778]: GET result: OK
Jan 24 02:59:06.147419 ignition[778]: parsing config with SHA512: efb88fc1d50e9988e5e6baf1e3814f4dc8f7dbb4e22cdc6bfdfcb43d86ee96924a4e1c6790b4d8d1d64b40b32a330671cebb2b801300433ef6c43a422ca5d0e2
Jan 24 02:59:06.153959 unknown[778]: fetched base config from "system"
Jan 24 02:59:06.154788 ignition[778]: fetch: fetch complete
Jan 24 02:59:06.153977 unknown[778]: fetched base config from "system"
Jan 24 02:59:06.154796 ignition[778]: fetch: fetch passed
Jan 24 02:59:06.153987 unknown[778]: fetched user config from "openstack"
Jan 24 02:59:06.154892 ignition[778]: Ignition finished successfully
Jan 24 02:59:06.156758 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 02:59:06.163648 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 02:59:06.192213 ignition[784]: Ignition 2.19.0
Jan 24 02:59:06.192233 ignition[784]: Stage: kargs
Jan 24 02:59:06.194657 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 24 02:59:06.194688 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:59:06.198823 ignition[784]: kargs: kargs passed
Jan 24 02:59:06.198908 ignition[784]: Ignition finished successfully
Jan 24 02:59:06.200197 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 02:59:06.207654 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 02:59:06.240052 ignition[790]: Ignition 2.19.0
Jan 24 02:59:06.240449 ignition[790]: Stage: disks
Jan 24 02:59:06.240731 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 24 02:59:06.240752 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:59:06.243849 ignition[790]: disks: disks passed
Jan 24 02:59:06.243938 ignition[790]: Ignition finished successfully
Jan 24 02:59:06.245253 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 02:59:06.247279 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 02:59:06.248899 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 02:59:06.249712 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 02:59:06.251209 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 02:59:06.252676 systemd[1]: Reached target basic.target - Basic System.
Jan 24 02:59:06.258608 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 02:59:06.286554 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 24 02:59:06.291477 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 02:59:06.299654 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 02:59:06.412416 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 02:59:06.413051 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 02:59:06.414331 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 02:59:06.421531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 02:59:06.429553 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 02:59:06.430606 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 02:59:06.434492 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 24 02:59:06.436623 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 02:59:06.438000 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 02:59:06.441922 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 02:59:06.448439 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (806)
Jan 24 02:59:06.452649 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 02:59:06.457719 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 02:59:06.457748 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 02:59:06.457790 kernel: BTRFS info (device vda6): using free space tree
Jan 24 02:59:06.467500 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 02:59:06.473354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 02:59:06.539067 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 02:59:06.558250 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jan 24 02:59:06.581821 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 02:59:06.603481 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 02:59:06.710625 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 02:59:06.719616 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 02:59:06.721591 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 02:59:06.734770 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 02:59:06.739356 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 02:59:06.766949 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 02:59:06.783855 ignition[923]: INFO : Ignition 2.19.0
Jan 24 02:59:06.783855 ignition[923]: INFO : Stage: mount
Jan 24 02:59:06.786314 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 02:59:06.786314 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:59:06.786314 ignition[923]: INFO : mount: mount passed
Jan 24 02:59:06.786314 ignition[923]: INFO : Ignition finished successfully
Jan 24 02:59:06.786849 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 02:59:07.710796 systemd-networkd[769]: eth0: Gained IPv6LL
Jan 24 02:59:09.220624 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d205:24:19ff:fef3:4816/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d205:24:19ff:fef3:4816/64 assigned by NDisc.
Jan 24 02:59:09.220637 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 24 02:59:13.681153 coreos-metadata[808]: Jan 24 02:59:13.681 WARN failed to locate config-drive, using the metadata service API instead
Jan 24 02:59:13.701780 coreos-metadata[808]: Jan 24 02:59:13.701 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 24 02:59:13.718127 coreos-metadata[808]: Jan 24 02:59:13.718 INFO Fetch successful
Jan 24 02:59:13.719084 coreos-metadata[808]: Jan 24 02:59:13.718 INFO wrote hostname srv-fpdmo.gb1.brightbox.com to /sysroot/etc/hostname
Jan 24 02:59:13.720672 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 24 02:59:13.720831 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 24 02:59:13.729540 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 02:59:13.747933 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 02:59:13.759434 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Jan 24 02:59:13.766671 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 02:59:13.766731 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 02:59:13.766751 kernel: BTRFS info (device vda6): using free space tree
Jan 24 02:59:13.771448 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 02:59:13.774692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 02:59:13.813379 ignition[957]: INFO : Ignition 2.19.0
Jan 24 02:59:13.813379 ignition[957]: INFO : Stage: files
Jan 24 02:59:13.815179 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 02:59:13.815179 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:59:13.815179 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 02:59:13.817870 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 02:59:13.817870 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 02:59:13.820197 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 02:59:13.820197 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 02:59:13.822378 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 02:59:13.822220 unknown[957]: wrote ssh authorized keys file for user: core
Jan 24 02:59:13.825959 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 24 02:59:13.827238 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 24 02:59:14.024669 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 02:59:14.331470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 24 02:59:14.331470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 24 02:59:14.331470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 24 02:59:14.965654 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 24 02:59:15.469065 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 24 02:59:15.470444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 02:59:15.470444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 02:59:15.470444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 02:59:15.470444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 02:59:15.470444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 02:59:15.470444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 02:59:15.470444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 02:59:15.470444 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 02:59:15.479255 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 02:59:15.479255 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 02:59:15.479255 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 02:59:15.479255 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 02:59:15.479255 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 02:59:15.479255 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 24 02:59:15.722820 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 24 02:59:16.992536 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 02:59:16.992536 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 24 02:59:17.002048 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 02:59:17.002048 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 02:59:17.002048 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 24 02:59:17.002048 ignition[957]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 02:59:17.002048 ignition[957]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 02:59:17.002048 ignition[957]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 02:59:17.002048 ignition[957]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 02:59:17.002048 ignition[957]: INFO : files: files passed
Jan 24 02:59:17.002048 ignition[957]: INFO : Ignition finished successfully
Jan 24 02:59:17.002334 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 02:59:17.022197 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 02:59:17.031780 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 02:59:17.044721 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 02:59:17.045768 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 02:59:17.055531 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 02:59:17.055531 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 02:59:17.059101 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 02:59:17.061854 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 02:59:17.063309 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 02:59:17.069624 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 02:59:17.126231 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 02:59:17.126443 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 02:59:17.128520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 02:59:17.129770 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 02:59:17.132264 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 02:59:17.145620 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 02:59:17.162523 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 02:59:17.168681 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 02:59:17.184699 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 02:59:17.186808 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 02:59:17.187736 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 02:59:17.189543 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 02:59:17.189826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 02:59:17.191500 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 02:59:17.192530 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 02:59:17.194094 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 02:59:17.195504 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 02:59:17.196750 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 02:59:17.198240 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 02:59:17.199821 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 02:59:17.201416 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 02:59:17.202866 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 02:59:17.204351 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 02:59:17.205690 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 02:59:17.205961 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 02:59:17.207550 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 02:59:17.208489 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 02:59:17.209933 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 02:59:17.210111 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 02:59:17.211668 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 02:59:17.211907 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 02:59:17.213775 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 02:59:17.213975 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 02:59:17.215633 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 02:59:17.215817 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 02:59:17.231726 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 02:59:17.242712 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 02:59:17.243587 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 02:59:17.244553 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 02:59:17.246576 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 02:59:17.246869 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 02:59:17.262708 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 02:59:17.262876 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 02:59:17.270971 ignition[1009]: INFO : Ignition 2.19.0
Jan 24 02:59:17.273327 ignition[1009]: INFO : Stage: umount
Jan 24 02:59:17.273327 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 02:59:17.273327 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 02:59:17.273327 ignition[1009]: INFO : umount: umount passed
Jan 24 02:59:17.273327 ignition[1009]: INFO : Ignition finished successfully
Jan 24 02:59:17.276921 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 02:59:17.277872 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 02:59:17.279319 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 02:59:17.281500 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 02:59:17.282508 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 02:59:17.282639 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 02:59:17.283343 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 24 02:59:17.284275 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 24 02:59:17.285508 systemd[1]: Stopped target network.target - Network.
Jan 24 02:59:17.287205 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 02:59:17.287276 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 02:59:17.289435 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 02:59:17.291537 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 02:59:17.295470 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 02:59:17.296575 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 02:59:17.298207 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 02:59:17.299593 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 02:59:17.299672 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 02:59:17.301043 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 02:59:17.301108 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 02:59:17.302306 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 02:59:17.302381 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 02:59:17.303688 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 02:59:17.303795 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 02:59:17.305420 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 02:59:17.308050 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 02:59:17.311646 systemd-networkd[769]: eth0: DHCPv6 lease lost
Jan 24 02:59:17.313740 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 02:59:17.314776 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 02:59:17.314996 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 02:59:17.317368 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 02:59:17.317968 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 02:59:17.321273 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 02:59:17.321343 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 02:59:17.322632 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 02:59:17.322720 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 02:59:17.329570 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 02:59:17.330279 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 02:59:17.330383 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 02:59:17.333051 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 02:59:17.338301 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 02:59:17.338612 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 02:59:17.349792 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 02:59:17.349976 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 02:59:17.351250 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 02:59:17.351318 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 02:59:17.352654 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 02:59:17.352721 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 02:59:17.357205 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 02:59:17.357530 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 02:59:17.359664 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 02:59:17.359745 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 02:59:17.361566 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 02:59:17.361647 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 02:59:17.363118 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 02:59:17.363224 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 02:59:17.365488 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 02:59:17.365587 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 02:59:17.367353 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 02:59:17.367522 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 02:59:17.375601 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 02:59:17.376475 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 02:59:17.376559 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 02:59:17.378143 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 24 02:59:17.378227 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 02:59:17.381554 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 02:59:17.381656 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 02:59:17.383837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 02:59:17.383934 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 02:59:17.387351 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 02:59:17.387562 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 02:59:17.392887 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 02:59:17.393052 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 02:59:17.395081 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 02:59:17.400575 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 02:59:17.412576 systemd[1]: Switching root.
Jan 24 02:59:17.460707 systemd-journald[203]: Journal stopped
Jan 24 02:59:19.099644 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 24 02:59:19.099811 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 02:59:19.099838 kernel: SELinux: policy capability open_perms=1
Jan 24 02:59:19.099885 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 02:59:19.099908 kernel: SELinux: policy capability always_check_network=0
Jan 24 02:59:19.099956 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 02:59:19.099977 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 02:59:19.099996 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 02:59:19.100014 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 02:59:19.100032 kernel: audit: type=1403 audit(1769223557.715:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 02:59:19.100060 systemd[1]: Successfully loaded SELinux policy in 56.641ms.
Jan 24 02:59:19.100100 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.946ms.
Jan 24 02:59:19.100160 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 02:59:19.100184 systemd[1]: Detected virtualization kvm.
Jan 24 02:59:19.100223 systemd[1]: Detected architecture x86-64.
Jan 24 02:59:19.100244 systemd[1]: Detected first boot.
Jan 24 02:59:19.100264 systemd[1]: Hostname set to .
Jan 24 02:59:19.100284 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 02:59:19.100303 zram_generator::config[1052]: No configuration found.
Jan 24 02:59:19.100323 systemd[1]: Populated /etc with preset unit settings.
Jan 24 02:59:19.100372 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 02:59:19.100451 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 02:59:19.100476 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 02:59:19.100498 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 02:59:19.100519 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 02:59:19.100539 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 02:59:19.100558 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 02:59:19.100591 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 02:59:19.100649 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 02:59:19.100673 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 02:59:19.100694 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 02:59:19.100714 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 02:59:19.100759 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 02:59:19.100782 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 02:59:19.100802 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 02:59:19.100822 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 02:59:19.100842 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 02:59:19.100890 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 02:59:19.100914 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 02:59:19.100949 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 02:59:19.100984 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 02:59:19.101035 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 02:59:19.101084 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 02:59:19.101108 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 02:59:19.101140 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 02:59:19.101161 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 02:59:19.101181 systemd[1]: Reached target swap.target - Swaps.
Jan 24 02:59:19.101201 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 02:59:19.101221 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 02:59:19.101272 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 02:59:19.101295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 02:59:19.101315 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 02:59:19.101335 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 02:59:19.101356 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 02:59:19.101377 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 02:59:19.101421 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 02:59:19.101446 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 02:59:19.101466 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 02:59:19.101517 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 02:59:19.101539 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 02:59:19.101560 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 02:59:19.101581 systemd[1]: Reached target machines.target - Containers.
Jan 24 02:59:19.101601 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 02:59:19.101650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 02:59:19.101674 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 02:59:19.101702 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 02:59:19.101738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 02:59:19.101760 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 02:59:19.101780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 02:59:19.101815 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 02:59:19.101836 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 02:59:19.101856 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 02:59:19.101876 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 02:59:19.101896 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 02:59:19.101915 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 02:59:19.101962 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 02:59:19.101985 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 02:59:19.102004 kernel: loop: module loaded
Jan 24 02:59:19.102024 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 02:59:19.102044 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 02:59:19.102065 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 02:59:19.102085 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 02:59:19.102104 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 02:59:19.102134 systemd[1]: Stopped verity-setup.service.
Jan 24 02:59:19.102172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 02:59:19.102194 kernel: ACPI: bus type drm_connector registered
Jan 24 02:59:19.102214 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 02:59:19.102234 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 02:59:19.102253 kernel: fuse: init (API version 7.39)
Jan 24 02:59:19.102272 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 02:59:19.102306 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 02:59:19.102341 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 02:59:19.102362 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 02:59:19.102383 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 02:59:19.102446 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 02:59:19.102470 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 02:59:19.102554 systemd-journald[1145]: Collecting audit messages is disabled.
Jan 24 02:59:19.102593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 02:59:19.102615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 02:59:19.102663 systemd-journald[1145]: Journal started
Jan 24 02:59:19.102733 systemd-journald[1145]: Runtime Journal (/run/log/journal/3936b6af8f4b4a478316179f3ccbb363) is 4.7M, max 38.0M, 33.2M free.
Jan 24 02:59:18.630521 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 02:59:18.656404 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 24 02:59:18.657229 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 02:59:19.107441 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 02:59:19.111571 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 02:59:19.111856 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 02:59:19.113098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 02:59:19.113471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 02:59:19.115597 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 02:59:19.115817 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 02:59:19.117182 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 02:59:19.118550 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 02:59:19.118824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 02:59:19.120247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 02:59:19.121315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 02:59:19.122493 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 02:59:19.140356 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 02:59:19.149516 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 02:59:19.157507 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 02:59:19.159570 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 02:59:19.159634 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 02:59:19.164541 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 02:59:19.174228 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 02:59:19.184608 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 02:59:19.185716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 02:59:19.193622 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 02:59:19.196628 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 02:59:19.198507 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 02:59:19.203635 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 02:59:19.204498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 02:59:19.207608 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 02:59:19.211613 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 02:59:19.221707 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 02:59:19.229886 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 02:59:19.230891 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 02:59:19.232081 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 02:59:19.342979 systemd-journald[1145]: Time spent on flushing to /var/log/journal/3936b6af8f4b4a478316179f3ccbb363 is 215.794ms for 1150 entries.
Jan 24 02:59:19.342979 systemd-journald[1145]: System Journal (/var/log/journal/3936b6af8f4b4a478316179f3ccbb363) is 8.0M, max 584.8M, 576.8M free.
Jan 24 02:59:19.610591 systemd-journald[1145]: Received client request to flush runtime journal.
Jan 24 02:59:19.610716 kernel: loop0: detected capacity change from 0 to 142488
Jan 24 02:59:19.610764 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 02:59:19.610800 kernel: loop1: detected capacity change from 0 to 140768
Jan 24 02:59:19.352841 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 02:59:19.358148 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 02:59:19.370676 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 02:59:19.518466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 02:59:19.528802 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 02:59:19.544544 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 24 02:59:19.583507 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 02:59:19.585480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 02:59:19.589517 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 02:59:19.592851 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jan 24 02:59:19.592870 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jan 24 02:59:19.612234 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 02:59:19.628507 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 02:59:19.648726 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 02:59:19.658414 kernel: loop2: detected capacity change from 0 to 8
Jan 24 02:59:19.688422 kernel: loop3: detected capacity change from 0 to 224512
Jan 24 02:59:19.696863 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 02:59:19.711366 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 02:59:19.745440 kernel: loop4: detected capacity change from 0 to 142488
Jan 24 02:59:19.761133 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 24 02:59:19.761162 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 24 02:59:19.768291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 02:59:19.826428 kernel: loop5: detected capacity change from 0 to 140768
Jan 24 02:59:19.852424 kernel: loop6: detected capacity change from 0 to 8
Jan 24 02:59:19.875427 kernel: loop7: detected capacity change from 0 to 224512
Jan 24 02:59:19.913668 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 24 02:59:19.915469 (sd-merge)[1213]: Merged extensions into '/usr'.
Jan 24 02:59:19.929877 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 02:59:19.929913 systemd[1]: Reloading...
Jan 24 02:59:20.165466 zram_generator::config[1239]: No configuration found.
Jan 24 02:59:20.433034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 02:59:20.498378 systemd[1]: Reloading finished in 567 ms.
Jan 24 02:59:20.550958 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 02:59:20.559807 systemd[1]: Starting ensure-sysext.service...
Jan 24 02:59:20.565736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 02:59:20.649269 systemd[1]: Reloading requested from client PID 1295 ('systemctl') (unit ensure-sysext.service)...
Jan 24 02:59:20.649294 systemd[1]: Reloading...
Jan 24 02:59:20.657957 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 02:59:20.658639 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 02:59:20.661751 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 02:59:20.662282 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Jan 24 02:59:20.662721 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Jan 24 02:59:20.669629 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 02:59:20.669646 systemd-tmpfiles[1296]: Skipping /boot
Jan 24 02:59:20.691968 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 02:59:20.692616 systemd-tmpfiles[1296]: Skipping /boot
Jan 24 02:59:20.723115 ldconfig[1180]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 02:59:20.778456 zram_generator::config[1324]: No configuration found.
Jan 24 02:59:20.951056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 02:59:21.016294 systemd[1]: Reloading finished in 366 ms.
Jan 24 02:59:21.038255 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 02:59:21.039511 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 02:59:21.042987 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 02:59:21.068960 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 02:59:21.073866 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 02:59:21.078682 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 02:59:21.088767 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 02:59:21.096825 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 02:59:21.106510 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 02:59:21.123969 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 02:59:21.129616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 02:59:21.129888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 02:59:21.132882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 02:59:21.139405 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 02:59:21.150749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 02:59:21.151831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 02:59:21.152000 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 02:59:21.156820 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 02:59:21.157093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 02:59:21.157320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 02:59:21.157476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 02:59:21.161695 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 02:59:21.161986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 02:59:21.172889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 02:59:21.175888 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 02:59:21.176134 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 02:59:21.177175 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 02:59:21.177452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 02:59:21.183924 systemd[1]: Finished ensure-sysext.service.
Jan 24 02:59:21.185502 systemd-udevd[1389]: Using default interface naming scheme 'v255'.
Jan 24 02:59:21.201606 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 24 02:59:21.210475 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 02:59:21.224202 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 02:59:21.227470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 02:59:21.227732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 02:59:21.229819 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 02:59:21.230035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 02:59:21.233672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 02:59:21.233804 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 02:59:21.244593 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 02:59:21.245956 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 02:59:21.246205 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 02:59:21.264122 augenrules[1418]: No rules
Jan 24 02:59:21.267777 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 02:59:21.269983 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 02:59:21.292094 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 02:59:21.293821 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 02:59:21.304687 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 02:59:21.363295 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 02:59:21.365252 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 02:59:21.422580 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 24 02:59:21.423660 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 02:59:21.465556 systemd-resolved[1387]: Positive Trust Anchors:
Jan 24 02:59:21.465579 systemd-resolved[1387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 02:59:21.465621 systemd-resolved[1387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 02:59:21.473672 systemd-networkd[1432]: lo: Link UP
Jan 24 02:59:21.474084 systemd-networkd[1432]: lo: Gained carrier
Jan 24 02:59:21.475551 systemd-networkd[1432]: Enumeration completed
Jan 24 02:59:21.475801 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 02:59:21.481752 systemd-resolved[1387]: Using system hostname 'srv-fpdmo.gb1.brightbox.com'.
Jan 24 02:59:21.485687 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 24 02:59:21.488543 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 02:59:21.489327 systemd[1]: Reached target network.target - Network.
Jan 24 02:59:21.490208 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 02:59:21.499914 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 24 02:59:21.649627 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 02:59:21.650266 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 02:59:21.656151 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 24 02:59:21.655591 systemd-networkd[1432]: eth0: Link UP
Jan 24 02:59:21.655599 systemd-networkd[1432]: eth0: Gained carrier
Jan 24 02:59:21.655622 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 02:59:21.666866 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 02:59:21.670380 kernel: ACPI: button: Power Button [PWRF]
Jan 24 02:59:21.693587 systemd-networkd[1432]: eth0: DHCPv4 address 10.243.72.22/30, gateway 10.243.72.21 acquired from 10.243.72.21
Jan 24 02:59:21.695942 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Jan 24 02:59:21.697687 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Jan 24 02:59:21.712426 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1443)
Jan 24 02:59:21.802597 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 24 02:59:21.821560 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 24 02:59:21.828417 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 24 02:59:21.828805 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 24 02:59:21.872862 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 02:59:21.882709 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 02:59:21.904571 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 02:59:21.935607 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 02:59:22.126671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 02:59:22.131722 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 02:59:22.138697 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 02:59:22.166484 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 02:59:22.215171 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 02:59:22.216294 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 02:59:22.217085 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 02:59:22.218108 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 02:59:22.218945 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 02:59:22.220171 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 02:59:22.221024 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 02:59:22.221805 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 02:59:22.222562 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 02:59:22.222614 systemd[1]: Reached target paths.target - Path Units.
Jan 24 02:59:22.223244 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 02:59:22.227555 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 02:59:22.230738 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 02:59:22.237315 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 02:59:22.240376 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 02:59:22.241931 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 02:59:22.242767 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 02:59:22.243470 systemd[1]: Reached target basic.target - Basic System.
Jan 24 02:59:22.244159 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 02:59:22.244212 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 02:59:22.253720 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 02:59:22.259821 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 24 02:59:22.261176 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 02:59:22.270668 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 02:59:22.286787 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 02:59:22.307308 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 02:59:22.309915 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 02:59:22.314647 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 02:59:22.317011 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 24 02:59:22.323715 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 02:59:22.328500 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 02:59:22.329663 jq[1478]: false
Jan 24 02:59:22.339665 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 02:59:22.341760 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 02:59:22.342791 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 02:59:22.347646 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 02:59:22.365534 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 02:59:22.370121 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 02:59:22.378739 extend-filesystems[1479]: Found loop4
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found loop5
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found loop6
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found loop7
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found vda
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found vda1
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found vda2
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found vda3
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found usr
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found vda4
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found vda6
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found vda7
Jan 24 02:59:22.387756 extend-filesystems[1479]: Found vda9
Jan 24 02:59:22.387756 extend-filesystems[1479]: Checking size of /dev/vda9
Jan 24 02:59:22.389220 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 02:59:22.450751 jq[1489]: true
Jan 24 02:59:22.390567 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 02:59:22.416067 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 02:59:22.416407 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 02:59:22.470373 jq[1502]: true
Jan 24 02:59:22.470692 extend-filesystems[1479]: Resized partition /dev/vda9
Jan 24 02:59:22.473542 update_engine[1488]: I20260124 02:59:22.469013 1488 main.cc:92] Flatcar Update Engine starting
Jan 24 02:59:22.473941 tar[1493]: linux-amd64/LICENSE
Jan 24 02:59:22.473941 tar[1493]: linux-amd64/helm
Jan 24 02:59:22.480095 dbus-daemon[1477]: [system] SELinux support is enabled
Jan 24 02:59:22.486003 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 02:59:22.490760 extend-filesystems[1518]: resize2fs 1.47.1 (20-May-2024)
Jan 24 02:59:22.489984 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 02:59:22.497733 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 24 02:59:22.494009 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 24 02:59:22.499667 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 02:59:22.499715 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 02:59:22.500545 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 02:59:22.500584 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 02:59:22.517453 dbus-daemon[1477]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1432 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 24 02:59:22.593225 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1439)
Jan 24 02:59:22.593383 update_engine[1488]: I20260124 02:59:22.592855 1488 update_check_scheduler.cc:74] Next update check in 5m9s
Jan 24 02:59:22.597533 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 24 02:59:22.598977 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 02:59:22.599749 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 02:59:22.604792 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 02:59:22.616714 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 02:59:22.756336 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 24 02:59:22.767323 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 02:59:22.777603 systemd-logind[1487]: New seat seat0.
Jan 24 02:59:22.790735 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 02:59:22.976084 bash[1537]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 02:59:22.981385 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 02:59:22.997521 systemd[1]: Starting sshkeys.service...
Jan 24 02:59:23.069503 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 24 02:59:23.071597 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 24 02:59:23.078575 dbus-daemon[1477]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1520 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 24 02:59:23.094628 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 24 02:59:23.102143 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 02:59:23.116076 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 24 02:59:23.125410 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 24 02:59:23.133818 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 24 02:59:23.148569 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 02:59:23.168225 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 24 02:59:23.168225 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 24 02:59:23.168225 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 24 02:59:23.168130 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 02:59:23.178871 extend-filesystems[1479]: Resized filesystem in /dev/vda9
Jan 24 02:59:23.169111 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 02:59:23.226792 polkitd[1546]: Started polkitd version 121
Jan 24 02:59:23.278884 polkitd[1546]: Loading rules from directory /etc/polkit-1/rules.d
Jan 24 02:59:23.279027 polkitd[1546]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 24 02:59:23.287084 polkitd[1546]: Finished loading, compiling and executing 2 rules
Jan 24 02:59:23.294181 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 24 02:59:23.294524 systemd[1]: Started polkit.service - Authorization Manager.
Jan 24 02:59:23.296832 polkitd[1546]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 24 02:59:23.298697 containerd[1507]: time="2026-01-24T02:59:23.298538909Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 02:59:23.332830 systemd-hostnamed[1520]: Hostname set to (static)
Jan 24 02:59:23.334620 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 02:59:23.344594 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 02:59:23.354983 systemd[1]: Started sshd@0-10.243.72.22:22-20.161.92.111:52500.service - OpenSSH per-connection server daemon (20.161.92.111:52500).
Jan 24 02:59:23.389034 systemd[1]: issuegen.service: Deactivated successfully.
Jan 24 02:59:23.389364 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 24 02:59:23.402233 containerd[1507]: time="2026-01-24T02:59:23.402165271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 02:59:23.403895 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 02:59:23.408960 containerd[1507]: time="2026-01-24T02:59:23.408911771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:59:23.408960 containerd[1507]: time="2026-01-24T02:59:23.408959661Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 24 02:59:23.409109 containerd[1507]: time="2026-01-24T02:59:23.408989407Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 02:59:23.409418 containerd[1507]: time="2026-01-24T02:59:23.409364247Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 02:59:23.409508 containerd[1507]: time="2026-01-24T02:59:23.409423912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 02:59:23.409686 containerd[1507]: time="2026-01-24T02:59:23.409564367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:59:23.409686 containerd[1507]: time="2026-01-24T02:59:23.409606762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 02:59:23.410518 containerd[1507]: time="2026-01-24T02:59:23.409872613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:59:23.410518 containerd[1507]: time="2026-01-24T02:59:23.409904173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 02:59:23.410518 containerd[1507]: time="2026-01-24T02:59:23.409933368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:59:23.410518 containerd[1507]: time="2026-01-24T02:59:23.410002563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 02:59:23.410518 containerd[1507]: time="2026-01-24T02:59:23.410179662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 02:59:23.412505 containerd[1507]: time="2026-01-24T02:59:23.411077013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 02:59:23.412505 containerd[1507]: time="2026-01-24T02:59:23.411304707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 02:59:23.412505 containerd[1507]: time="2026-01-24T02:59:23.411331217Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 02:59:23.412505 containerd[1507]: time="2026-01-24T02:59:23.411513746Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 02:59:23.412505 containerd[1507]: time="2026-01-24T02:59:23.411609372Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 02:59:23.429940 containerd[1507]: time="2026-01-24T02:59:23.429544720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 02:59:23.429940 containerd[1507]: time="2026-01-24T02:59:23.429655838Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 24 02:59:23.429940 containerd[1507]: time="2026-01-24T02:59:23.429773970Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 24 02:59:23.429940 containerd[1507]: time="2026-01-24T02:59:23.429835072Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 24 02:59:23.429940 containerd[1507]: time="2026-01-24T02:59:23.429871350Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 02:59:23.430417 containerd[1507]: time="2026-01-24T02:59:23.430220023Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 02:59:23.431184 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 02:59:23.432112 containerd[1507]: time="2026-01-24T02:59:23.432074549Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.432374591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.432510966Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.432754766Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.432787151Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.432835646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.432860034Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.433142960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.433178084Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.433327985Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 02:59:23.433411 containerd[1507]: time="2026-01-24T02:59:23.433356600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 02:59:23.436037 containerd[1507]: time="2026-01-24T02:59:23.435591480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 02:59:23.436037 containerd[1507]: time="2026-01-24T02:59:23.435815599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.436128 containerd[1507]: time="2026-01-24T02:59:23.435856148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.437418 containerd[1507]: time="2026-01-24T02:59:23.436116479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.437762880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.437826638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.437862248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.437885153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.437946648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.437991197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.438031702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.438072318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.438099683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.438121213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.438187770Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.438251401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.438275223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439415 containerd[1507]: time="2026-01-24T02:59:23.438583469Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 02:59:23.439970 containerd[1507]: time="2026-01-24T02:59:23.438819052Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 24 02:59:23.439970 containerd[1507]: time="2026-01-24T02:59:23.438992714Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 24 02:59:23.439970 containerd[1507]: time="2026-01-24T02:59:23.439027909Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 24 02:59:23.439970 containerd[1507]: time="2026-01-24T02:59:23.439070555Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 24 02:59:23.439970 containerd[1507]: time="2026-01-24T02:59:23.439090621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.439970 containerd[1507]: time="2026-01-24T02:59:23.439111143Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 24 02:59:23.439970 containerd[1507]: time="2026-01-24T02:59:23.439152556Z" level=info msg="NRI interface is disabled by configuration."
Jan 24 02:59:23.439970 containerd[1507]: time="2026-01-24T02:59:23.439173410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 02:59:23.440257 containerd[1507]: time="2026-01-24T02:59:23.439721644Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 24 02:59:23.440257 containerd[1507]: time="2026-01-24T02:59:23.439830743Z" level=info msg="Connect containerd service"
Jan 24 02:59:23.440257 containerd[1507]: time="2026-01-24T02:59:23.439883103Z" level=info msg="using legacy CRI server"
Jan 24 02:59:23.440257 containerd[1507]: time="2026-01-24T02:59:23.439907627Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 24 02:59:23.440257 containerd[1507]: time="2026-01-24T02:59:23.440131325Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 24 02:59:23.443761 containerd[1507]: time="2026-01-24T02:59:23.441352891Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 02:59:23.442012 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 02:59:23.445621 containerd[1507]: time="2026-01-24T02:59:23.445204815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 24 02:59:23.445621 containerd[1507]: time="2026-01-24T02:59:23.445294998Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 24 02:59:23.445969 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 24 02:59:23.447563 systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 02:59:23.451702 containerd[1507]: time="2026-01-24T02:59:23.445374607Z" level=info msg="Start subscribing containerd event"
Jan 24 02:59:23.451862 containerd[1507]: time="2026-01-24T02:59:23.451825093Z" level=info msg="Start recovering state"
Jan 24 02:59:23.452024 containerd[1507]: time="2026-01-24T02:59:23.451979113Z" level=info msg="Start event monitor"
Jan 24 02:59:23.452074 containerd[1507]: time="2026-01-24T02:59:23.452048122Z" level=info msg="Start snapshots syncer"
Jan 24 02:59:23.452119 containerd[1507]: time="2026-01-24T02:59:23.452080693Z" level=info msg="Start cni network conf syncer for default"
Jan 24 02:59:23.452119 containerd[1507]: time="2026-01-24T02:59:23.452101053Z" level=info msg="Start streaming server"
Jan 24 02:59:23.452272 containerd[1507]: time="2026-01-24T02:59:23.452244720Z" level=info msg="containerd successfully booted in 0.158401s"
Jan 24 02:59:23.452907 systemd[1]: Started containerd.service - containerd container runtime.
Jan 24 02:59:23.518656 systemd-networkd[1432]: eth0: Gained IPv6LL
Jan 24 02:59:23.520040 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Jan 24 02:59:23.525406 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 24 02:59:23.527210 systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 02:59:23.537607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 02:59:23.544823 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 24 02:59:23.618242 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 24 02:59:23.696578 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Jan 24 02:59:23.698997 systemd-networkd[1432]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d205:24:19ff:fef3:4816/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d205:24:19ff:fef3:4816/64 assigned by NDisc.
Jan 24 02:59:23.699020 systemd-networkd[1432]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 24 02:59:23.834830 systemd[1]: Started sshd@1-10.243.72.22:22-159.223.6.232:32826.service - OpenSSH per-connection server daemon (159.223.6.232:32826).
Jan 24 02:59:23.964411 sshd[1597]: Invalid user webmaster from 159.223.6.232 port 32826
Jan 24 02:59:23.982613 sshd[1597]: Connection closed by invalid user webmaster 159.223.6.232 port 32826 [preauth]
Jan 24 02:59:23.986564 systemd[1]: sshd@1-10.243.72.22:22-159.223.6.232:32826.service: Deactivated successfully.
Jan 24 02:59:24.016631 sshd[1573]: Accepted publickey for core from 20.161.92.111 port 52500 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:59:24.020846 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:59:24.044020 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 24 02:59:24.059739 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 24 02:59:24.071709 systemd-logind[1487]: New session 1 of user core.
Jan 24 02:59:24.131368 tar[1493]: linux-amd64/README.md
Jan 24 02:59:24.149845 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 24 02:59:24.184214 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 24 02:59:24.186771 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 24 02:59:24.201939 (systemd)[1605]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 24 02:59:24.362834 systemd[1605]: Queued start job for default target default.target.
Jan 24 02:59:24.371035 systemd[1605]: Created slice app.slice - User Application Slice.
Jan 24 02:59:24.371274 systemd[1605]: Reached target paths.target - Paths.
Jan 24 02:59:24.371434 systemd[1605]: Reached target timers.target - Timers.
Jan 24 02:59:24.377069 systemd[1605]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 24 02:59:24.395379 systemd[1605]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 24 02:59:24.396232 systemd[1605]: Reached target sockets.target - Sockets.
Jan 24 02:59:24.396441 systemd[1605]: Reached target basic.target - Basic System.
Jan 24 02:59:24.396651 systemd[1605]: Reached target default.target - Main User Target.
Jan 24 02:59:24.396840 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 24 02:59:24.397131 systemd[1605]: Startup finished in 183ms.
Jan 24 02:59:24.410854 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 24 02:59:24.868597 systemd[1]: Started sshd@2-10.243.72.22:22-20.161.92.111:52514.service - OpenSSH per-connection server daemon (20.161.92.111:52514).
Jan 24 02:59:25.437935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 02:59:25.442134 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Jan 24 02:59:25.453048 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 02:59:25.454156 sshd[1618]: Accepted publickey for core from 20.161.92.111 port 52514 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:59:25.454933 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:59:25.467189 systemd-logind[1487]: New session 2 of user core.
Jan 24 02:59:25.474774 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 24 02:59:25.861887 sshd[1618]: pam_unix(sshd:session): session closed for user core
Jan 24 02:59:25.866967 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit.
Jan 24 02:59:25.868033 systemd[1]: sshd@2-10.243.72.22:22-20.161.92.111:52514.service: Deactivated successfully.
Jan 24 02:59:25.872583 systemd[1]: session-2.scope: Deactivated successfully.
Jan 24 02:59:25.877126 systemd-logind[1487]: Removed session 2.
Jan 24 02:59:25.972900 systemd[1]: Started sshd@3-10.243.72.22:22-20.161.92.111:52518.service - OpenSSH per-connection server daemon (20.161.92.111:52518).
Jan 24 02:59:26.327930 kubelet[1626]: E0124 02:59:26.327855 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 02:59:26.333910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 02:59:26.334188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 02:59:26.335560 systemd[1]: kubelet.service: Consumed 1.694s CPU time.
Jan 24 02:59:26.546835 sshd[1636]: Accepted publickey for core from 20.161.92.111 port 52518 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:59:26.550228 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:59:26.559237 systemd-logind[1487]: New session 3 of user core.
Jan 24 02:59:26.565811 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 24 02:59:26.956830 sshd[1636]: pam_unix(sshd:session): session closed for user core
Jan 24 02:59:26.962792 systemd[1]: sshd@3-10.243.72.22:22-20.161.92.111:52518.service: Deactivated successfully.
Jan 24 02:59:26.966087 systemd[1]: session-3.scope: Deactivated successfully.
Jan 24 02:59:26.967623 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit.
Jan 24 02:59:26.969527 systemd-logind[1487]: Removed session 3.
Jan 24 02:59:28.526656 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 02:59:28.542648 systemd-logind[1487]: New session 4 of user core.
Jan 24 02:59:28.551382 login[1582]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 02:59:28.552595 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 24 02:59:28.562997 systemd-logind[1487]: New session 5 of user core.
Jan 24 02:59:28.572833 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 24 02:59:29.404766 coreos-metadata[1476]: Jan 24 02:59:29.404 WARN failed to locate config-drive, using the metadata service API instead
Jan 24 02:59:29.431967 coreos-metadata[1476]: Jan 24 02:59:29.431 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 24 02:59:29.438686 coreos-metadata[1476]: Jan 24 02:59:29.438 INFO Fetch failed with 404: resource not found
Jan 24 02:59:29.438686 coreos-metadata[1476]: Jan 24 02:59:29.438 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 24 02:59:29.439078 coreos-metadata[1476]: Jan 24 02:59:29.439 INFO Fetch successful
Jan 24 02:59:29.439275 coreos-metadata[1476]: Jan 24 02:59:29.439 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 24 02:59:29.454005 coreos-metadata[1476]: Jan 24 02:59:29.453 INFO Fetch successful
Jan 24 02:59:29.454005 coreos-metadata[1476]: Jan 24 02:59:29.453 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 24 02:59:29.475473 coreos-metadata[1476]: Jan 24 02:59:29.475 INFO Fetch successful
Jan 24 02:59:29.475473 coreos-metadata[1476]: Jan 24 02:59:29.475 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 24 02:59:29.488982 coreos-metadata[1476]: Jan 24 02:59:29.488 INFO Fetch successful
Jan 24 02:59:29.489272 coreos-metadata[1476]: Jan 24 02:59:29.489 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 24 02:59:29.504898 coreos-metadata[1476]: Jan 24 02:59:29.504 INFO Fetch successful
Jan 24 02:59:29.540082 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 24 02:59:29.541159 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 24 02:59:30.363596 coreos-metadata[1549]: Jan 24 02:59:30.363 WARN failed to locate config-drive, using the metadata service API instead
Jan 24 02:59:30.386496 coreos-metadata[1549]: Jan 24 02:59:30.386 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 24 02:59:30.409731 coreos-metadata[1549]: Jan 24 02:59:30.409 INFO Fetch successful
Jan 24 02:59:30.409993 coreos-metadata[1549]: Jan 24 02:59:30.409 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 24 02:59:30.442165 coreos-metadata[1549]: Jan 24 02:59:30.442 INFO Fetch successful
Jan 24 02:59:30.492680 unknown[1549]: wrote ssh authorized keys file for user: core
Jan 24 02:59:30.521171 update-ssh-keys[1681]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 02:59:30.522315 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 24 02:59:30.526814 systemd[1]: Finished sshkeys.service.
Jan 24 02:59:30.530591 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 24 02:59:30.532487 systemd[1]: Startup finished in 1.623s (kernel) + 14.965s (initrd) + 12.870s (userspace) = 29.460s.
Jan 24 02:59:36.412282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 24 02:59:36.421646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 02:59:36.683850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 02:59:36.690098 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 02:59:36.751204 kubelet[1692]: E0124 02:59:36.751120 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 02:59:36.756122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 02:59:36.756439 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 02:59:37.065862 systemd[1]: Started sshd@4-10.243.72.22:22-20.161.92.111:54952.service - OpenSSH per-connection server daemon (20.161.92.111:54952).
Jan 24 02:59:37.639236 sshd[1700]: Accepted publickey for core from 20.161.92.111 port 54952 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:59:37.641746 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:59:37.648592 systemd-logind[1487]: New session 6 of user core.
Jan 24 02:59:37.656724 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 24 02:59:38.044186 sshd[1700]: pam_unix(sshd:session): session closed for user core
Jan 24 02:59:38.049332 systemd[1]: sshd@4-10.243.72.22:22-20.161.92.111:54952.service: Deactivated successfully.
Jan 24 02:59:38.051692 systemd[1]: session-6.scope: Deactivated successfully.
Jan 24 02:59:38.052743 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit.
Jan 24 02:59:38.054011 systemd-logind[1487]: Removed session 6.
Jan 24 02:59:38.148816 systemd[1]: Started sshd@5-10.243.72.22:22-20.161.92.111:54964.service - OpenSSH per-connection server daemon (20.161.92.111:54964).
Jan 24 02:59:38.723658 sshd[1707]: Accepted publickey for core from 20.161.92.111 port 54964 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:59:38.725772 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:59:38.732230 systemd-logind[1487]: New session 7 of user core.
Jan 24 02:59:38.740614 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 24 02:59:39.121537 sshd[1707]: pam_unix(sshd:session): session closed for user core
Jan 24 02:59:39.126212 systemd[1]: sshd@5-10.243.72.22:22-20.161.92.111:54964.service: Deactivated successfully.
Jan 24 02:59:39.129327 systemd[1]: session-7.scope: Deactivated successfully.
Jan 24 02:59:39.133693 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit.
Jan 24 02:59:39.135354 systemd-logind[1487]: Removed session 7.
Jan 24 02:59:39.233822 systemd[1]: Started sshd@6-10.243.72.22:22-20.161.92.111:54968.service - OpenSSH per-connection server daemon (20.161.92.111:54968).
Jan 24 02:59:39.794227 sshd[1714]: Accepted publickey for core from 20.161.92.111 port 54968 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:59:39.796578 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:59:39.803592 systemd-logind[1487]: New session 8 of user core.
Jan 24 02:59:39.810973 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 24 02:59:40.202719 sshd[1714]: pam_unix(sshd:session): session closed for user core
Jan 24 02:59:40.207580 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Jan 24 02:59:40.208746 systemd[1]: sshd@6-10.243.72.22:22-20.161.92.111:54968.service: Deactivated successfully.
Jan 24 02:59:40.211044 systemd[1]: session-8.scope: Deactivated successfully.
Jan 24 02:59:40.213474 systemd-logind[1487]: Removed session 8.
Jan 24 02:59:40.302767 systemd[1]: Started sshd@7-10.243.72.22:22-20.161.92.111:54982.service - OpenSSH per-connection server daemon (20.161.92.111:54982).
Jan 24 02:59:40.877565 sshd[1721]: Accepted publickey for core from 20.161.92.111 port 54982 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:59:40.879709 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:59:40.886322 systemd-logind[1487]: New session 9 of user core.
Jan 24 02:59:40.903795 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 24 02:59:41.207006 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 24 02:59:41.207523 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 02:59:41.223683 sudo[1724]: pam_unix(sudo:session): session closed for user root
Jan 24 02:59:41.314212 sshd[1721]: pam_unix(sshd:session): session closed for user core
Jan 24 02:59:41.320120 systemd[1]: sshd@7-10.243.72.22:22-20.161.92.111:54982.service: Deactivated successfully.
Jan 24 02:59:41.322320 systemd[1]: session-9.scope: Deactivated successfully.
Jan 24 02:59:41.323244 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit.
Jan 24 02:59:41.325193 systemd-logind[1487]: Removed session 9.
Jan 24 02:59:41.414248 systemd[1]: Started sshd@8-10.243.72.22:22-20.161.92.111:54988.service - OpenSSH per-connection server daemon (20.161.92.111:54988).
Jan 24 02:59:41.987695 sshd[1729]: Accepted publickey for core from 20.161.92.111 port 54988 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:59:41.989998 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:59:41.998252 systemd-logind[1487]: New session 10 of user core.
Jan 24 02:59:42.009736 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 24 02:59:42.304205 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 24 02:59:42.304710 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 02:59:42.311153 sudo[1733]: pam_unix(sudo:session): session closed for user root
Jan 24 02:59:42.319922 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 24 02:59:42.320360 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 02:59:42.339892 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 24 02:59:42.344398 auditctl[1736]: No rules
Jan 24 02:59:42.344948 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 24 02:59:42.345244 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 24 02:59:42.350124 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 02:59:42.406768 augenrules[1754]: No rules
Jan 24 02:59:42.409043 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 02:59:42.410702 sudo[1732]: pam_unix(sudo:session): session closed for user root
Jan 24 02:59:42.501018 sshd[1729]: pam_unix(sshd:session): session closed for user core
Jan 24 02:59:42.505250 systemd[1]: sshd@8-10.243.72.22:22-20.161.92.111:54988.service: Deactivated successfully.
Jan 24 02:59:42.508066 systemd[1]: session-10.scope: Deactivated successfully.
Jan 24 02:59:42.509992 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Jan 24 02:59:42.512385 systemd-logind[1487]: Removed session 10.
Jan 24 02:59:42.607897 systemd[1]: Started sshd@9-10.243.72.22:22-20.161.92.111:52078.service - OpenSSH per-connection server daemon (20.161.92.111:52078).
Jan 24 02:59:43.164625 sshd[1762]: Accepted publickey for core from 20.161.92.111 port 52078 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 02:59:43.167705 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 02:59:43.173736 systemd-logind[1487]: New session 11 of user core.
Jan 24 02:59:43.183623 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 24 02:59:43.479693 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 24 02:59:43.480141 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 02:59:44.200901 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 24 02:59:44.214228 (dockerd)[1781]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 24 02:59:44.957190 dockerd[1781]: time="2026-01-24T02:59:44.955479513Z" level=info msg="Starting up"
Jan 24 02:59:45.199255 systemd[1]: var-lib-docker-metacopy\x2dcheck953775427-merged.mount: Deactivated successfully.
Jan 24 02:59:45.222190 dockerd[1781]: time="2026-01-24T02:59:45.220834620Z" level=info msg="Loading containers: start."
Jan 24 02:59:45.391557 kernel: Initializing XFRM netlink socket
Jan 24 02:59:45.438306 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Jan 24 02:59:45.529770 systemd-networkd[1432]: docker0: Link UP
Jan 24 02:59:45.542131 systemd-timesyncd[1407]: Contacted time server [2a02:8010:d015::123]:123 (2.flatcar.pool.ntp.org).
Jan 24 02:59:45.542288 systemd-timesyncd[1407]: Initial clock synchronization to Sat 2026-01-24 02:59:45.525804 UTC.
Jan 24 02:59:45.557657 dockerd[1781]: time="2026-01-24T02:59:45.557381304Z" level=info msg="Loading containers: done."
Jan 24 02:59:45.587895 dockerd[1781]: time="2026-01-24T02:59:45.587797117Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 24 02:59:45.588124 dockerd[1781]: time="2026-01-24T02:59:45.587962320Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 24 02:59:45.588204 dockerd[1781]: time="2026-01-24T02:59:45.588186490Z" level=info msg="Daemon has completed initialization"
Jan 24 02:59:45.626106 dockerd[1781]: time="2026-01-24T02:59:45.625892955Z" level=info msg="API listen on /run/docker.sock"
Jan 24 02:59:45.626573 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 24 02:59:46.913638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 24 02:59:46.925746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 02:59:47.140487 containerd[1507]: time="2026-01-24T02:59:47.139554200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 24 02:59:47.585911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 02:59:47.594958 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 02:59:47.726962 kubelet[1932]: E0124 02:59:47.726822 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 02:59:47.730274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 02:59:47.730557 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 02:59:48.204090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298641075.mount: Deactivated successfully.
Jan 24 02:59:50.602474 containerd[1507]: time="2026-01-24T02:59:50.601633859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:59:50.605124 containerd[1507]: time="2026-01-24T02:59:50.604998710Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655"
Jan 24 02:59:50.606419 containerd[1507]: time="2026-01-24T02:59:50.605594761Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:59:50.611420 containerd[1507]: time="2026-01-24T02:59:50.611318459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:59:50.615020 containerd[1507]: time="2026-01-24T02:59:50.613745959Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.473809061s"
Jan 24 02:59:50.615020 containerd[1507]: time="2026-01-24T02:59:50.613914156Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\""
Jan 24 02:59:50.617695 containerd[1507]: time="2026-01-24T02:59:50.617654497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 24 02:59:53.762698 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 24 02:59:54.358424 containerd[1507]: time="2026-01-24T02:59:54.356920355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:59:54.359950 containerd[1507]: time="2026-01-24T02:59:54.359867261Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362"
Jan 24 02:59:54.360804 containerd[1507]: time="2026-01-24T02:59:54.360692522Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:59:54.367347 containerd[1507]: time="2026-01-24T02:59:54.367256222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:59:54.369166 containerd[1507]: time="2026-01-24T02:59:54.369113889Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.751378158s"
Jan 24 02:59:54.370001 containerd[1507]: time="2026-01-24T02:59:54.369484794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 24 02:59:54.371147 containerd[1507]: time="2026-01-24T02:59:54.371097822Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 24 02:59:56.642860 containerd[1507]: time="2026-01-24T02:59:56.642770486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:59:56.645069 containerd[1507]: time="2026-01-24T02:59:56.644941689Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084"
Jan 24 02:59:56.647414 containerd[1507]: time="2026-01-24T02:59:56.646427040Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:59:56.650797 containerd[1507]: time="2026-01-24T02:59:56.650743486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 02:59:56.652599 containerd[1507]: time="2026-01-24T02:59:56.652563004Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.281404754s"
Jan 24 02:59:56.652778 containerd[1507]: time="2026-01-24T02:59:56.652748545Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\""
Jan 24 02:59:56.654705 containerd[1507]: time="2026-01-24T02:59:56.654672982Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 24 02:59:57.914176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 24 02:59:57.926757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 02:59:58.405132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 02:59:58.421016 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 02:59:58.514538 kubelet[2016]: E0124 02:59:58.514384 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 02:59:58.519985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 02:59:58.520328 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 02:59:59.913716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331844611.mount: Deactivated successfully.
Jan 24 03:00:00.919502 containerd[1507]: time="2026-01-24T03:00:00.917911789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:00.926268 containerd[1507]: time="2026-01-24T03:00:00.926019333Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907"
Jan 24 03:00:00.928301 containerd[1507]: time="2026-01-24T03:00:00.928201967Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:00.933663 containerd[1507]: time="2026-01-24T03:00:00.933588594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:00.936041 containerd[1507]: time="2026-01-24T03:00:00.935234751Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 4.280482656s"
Jan 24 03:00:00.936041 containerd[1507]: time="2026-01-24T03:00:00.935465442Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\""
Jan 24 03:00:00.939035 containerd[1507]: time="2026-01-24T03:00:00.938820538Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 24 03:00:01.692177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1948342505.mount: Deactivated successfully.
Jan 24 03:00:03.726948 containerd[1507]: time="2026-01-24T03:00:03.726689750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:03.760679 containerd[1507]: time="2026-01-24T03:00:03.760556062Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jan 24 03:00:03.762654 containerd[1507]: time="2026-01-24T03:00:03.762580387Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:03.770318 containerd[1507]: time="2026-01-24T03:00:03.768368618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:03.770318 containerd[1507]: time="2026-01-24T03:00:03.770099300Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.831207311s"
Jan 24 03:00:03.770318 containerd[1507]: time="2026-01-24T03:00:03.770184978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 24 03:00:03.784199 containerd[1507]: time="2026-01-24T03:00:03.784088785Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 24 03:00:04.338480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337789223.mount: Deactivated successfully.
Jan 24 03:00:04.349196 containerd[1507]: time="2026-01-24T03:00:04.347933786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:04.349439 containerd[1507]: time="2026-01-24T03:00:04.349382929Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 24 03:00:04.350358 containerd[1507]: time="2026-01-24T03:00:04.350324573Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:04.353505 containerd[1507]: time="2026-01-24T03:00:04.353471125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:04.354839 containerd[1507]: time="2026-01-24T03:00:04.354794300Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 570.639175ms"
Jan 24 03:00:04.355014 containerd[1507]: time="2026-01-24T03:00:04.354986254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 24 03:00:04.356978 containerd[1507]: time="2026-01-24T03:00:04.356757055Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 24 03:00:04.974046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560147391.mount: Deactivated successfully.
Jan 24 03:00:08.076183 update_engine[1488]: I20260124 03:00:08.075320 1488 update_attempter.cc:509] Updating boot flags...
Jan 24 03:00:08.191481 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2141)
Jan 24 03:00:08.353440 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2140)
Jan 24 03:00:08.443441 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2140)
Jan 24 03:00:08.663456 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 24 03:00:08.672713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 03:00:09.280865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 03:00:09.293963 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 03:00:09.493370 kubelet[2157]: E0124 03:00:09.493192 2157 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 03:00:09.499855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 03:00:09.500503 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 03:00:11.055925 systemd[1]: Started sshd@10-10.243.72.22:22-159.223.6.232:45836.service - OpenSSH per-connection server daemon (159.223.6.232:45836).
Jan 24 03:00:11.214444 sshd[2169]: Invalid user webmaster from 159.223.6.232 port 45836
Jan 24 03:00:11.232807 sshd[2169]: Connection closed by invalid user webmaster 159.223.6.232 port 45836 [preauth]
Jan 24 03:00:11.235501 systemd[1]: sshd@10-10.243.72.22:22-159.223.6.232:45836.service: Deactivated successfully.
Jan 24 03:00:11.840450 containerd[1507]: time="2026-01-24T03:00:11.838985432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:11.849640 containerd[1507]: time="2026-01-24T03:00:11.849545136Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064"
Jan 24 03:00:11.851776 containerd[1507]: time="2026-01-24T03:00:11.851724236Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:11.859099 containerd[1507]: time="2026-01-24T03:00:11.859044016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 03:00:11.863285 containerd[1507]: time="2026-01-24T03:00:11.863236403Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 7.506423426s"
Jan 24 03:00:11.863543 containerd[1507]: time="2026-01-24T03:00:11.863513239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 24 03:00:15.757204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 03:00:15.772842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 03:00:15.818248 systemd[1]: Reloading requested from client PID 2203 ('systemctl') (unit session-11.scope)...
Jan 24 03:00:15.818600 systemd[1]: Reloading...
Jan 24 03:00:16.048448 zram_generator::config[2239]: No configuration found.
Jan 24 03:00:16.341660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 03:00:16.471172 systemd[1]: Reloading finished in 651 ms.
Jan 24 03:00:16.557269 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 24 03:00:16.557797 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 24 03:00:16.558624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 03:00:16.570113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 03:00:16.731680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 03:00:16.753055 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 03:00:16.850493 kubelet[2310]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 03:00:16.850493 kubelet[2310]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 03:00:16.850493 kubelet[2310]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 03:00:16.851255 kubelet[2310]: I0124 03:00:16.850726 2310 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 03:00:17.768606 kubelet[2310]: I0124 03:00:17.768336 2310 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 24 03:00:17.768606 kubelet[2310]: I0124 03:00:17.768446 2310 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 03:00:17.769726 kubelet[2310]: I0124 03:00:17.769041 2310 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 24 03:00:17.850113 kubelet[2310]: I0124 03:00:17.849863 2310 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 03:00:17.856369 kubelet[2310]: E0124 03:00:17.856221 2310 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.72.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError"
Jan 24 03:00:17.869454 kubelet[2310]: E0124 03:00:17.868475 2310 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 03:00:17.869454 kubelet[2310]: I0124 03:00:17.868546 2310 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 24 03:00:17.879368 kubelet[2310]: I0124 03:00:17.878674 2310 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 24 03:00:17.883781 kubelet[2310]: I0124 03:00:17.883171 2310 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 03:00:17.884369 kubelet[2310]: I0124 03:00:17.883259 2310 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-fpdmo.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 03:00:17.887066 kubelet[2310]: I0124 03:00:17.886598 2310 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 03:00:17.887066 kubelet[2310]: I0124 03:00:17.886639 2310 container_manager_linux.go:304] "Creating device plugin manager"
Jan 24 03:00:17.888729 kubelet[2310]: I0124 03:00:17.888678 2310 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 03:00:17.893711 kubelet[2310]: I0124 03:00:17.893650 2310 kubelet.go:446] "Attempting to sync node with API server"
Jan 24 03:00:17.893861 kubelet[2310]: I0124 03:00:17.893746 2310 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 03:00:17.893861 kubelet[2310]: I0124 03:00:17.893851 2310 kubelet.go:352] "Adding apiserver pod source"
Jan 24 03:00:17.894019 kubelet[2310]: I0124 03:00:17.893930 2310 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 03:00:17.904837 kubelet[2310]: W0124 03:00:17.904077 2310 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.72.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.72.22:6443: connect: connection refused
Jan 24 03:00:17.904837 kubelet[2310]: E0124 03:00:17.904252 2310 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.72.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError"
Jan 24 03:00:17.904837 kubelet[2310]: W0124 03:00:17.904679 2310 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.72.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-fpdmo.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.72.22:6443: connect: connection refused
Jan 24 03:00:17.904837 kubelet[2310]: E0124 03:00:17.904724 2310 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160:
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.72.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-fpdmo.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:00:17.906368 kubelet[2310]: I0124 03:00:17.906328 2310 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 03:00:17.910841 kubelet[2310]: I0124 03:00:17.910803 2310 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 03:00:17.915820 kubelet[2310]: W0124 03:00:17.912152 2310 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 03:00:17.915820 kubelet[2310]: I0124 03:00:17.913758 2310 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 03:00:17.915820 kubelet[2310]: I0124 03:00:17.913844 2310 server.go:1287] "Started kubelet" Jan 24 03:00:17.920531 kubelet[2310]: I0124 03:00:17.919415 2310 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 03:00:17.920531 kubelet[2310]: I0124 03:00:17.919862 2310 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 03:00:17.921878 kubelet[2310]: I0124 03:00:17.921613 2310 server.go:479] "Adding debug handlers to kubelet server" Jan 24 03:00:17.924818 kubelet[2310]: I0124 03:00:17.924088 2310 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 03:00:17.924818 kubelet[2310]: I0124 03:00:17.924632 2310 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 03:00:17.936490 kubelet[2310]: I0124 03:00:17.936179 2310 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 03:00:17.937270 kubelet[2310]: E0124 03:00:17.937192 2310 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" Jan 24 03:00:17.938536 kubelet[2310]: I0124 03:00:17.938042 2310 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 03:00:17.939677 kubelet[2310]: I0124 03:00:17.939649 2310 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 03:00:17.939907 kubelet[2310]: I0124 03:00:17.939830 2310 reconciler.go:26] "Reconciler: start to sync state" Jan 24 03:00:17.940888 kubelet[2310]: W0124 03:00:17.940502 2310 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.72.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.72.22:6443: connect: connection refused Jan 24 03:00:17.940888 kubelet[2310]: E0124 03:00:17.940564 2310 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.72.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:00:17.946777 kubelet[2310]: I0124 03:00:17.946595 2310 factory.go:221] Registration of the systemd container factory successfully Jan 24 03:00:17.947060 kubelet[2310]: I0124 03:00:17.946834 2310 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 03:00:17.947551 kubelet[2310]: E0124 03:00:17.947339 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-fpdmo.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.22:6443: connect: connection refused" interval="200ms" Jan 24 
03:00:17.964692 kubelet[2310]: E0124 03:00:17.940661 2310 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.72.22:6443/api/v1/namespaces/default/events\": dial tcp 10.243.72.22:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-fpdmo.gb1.brightbox.com.188d8b81b9fbb6da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-fpdmo.gb1.brightbox.com,UID:srv-fpdmo.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-fpdmo.gb1.brightbox.com,},FirstTimestamp:2026-01-24 03:00:17.913796314 +0000 UTC m=+1.150030494,LastTimestamp:2026-01-24 03:00:17.913796314 +0000 UTC m=+1.150030494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-fpdmo.gb1.brightbox.com,}" Jan 24 03:00:17.972202 kubelet[2310]: I0124 03:00:17.967705 2310 factory.go:221] Registration of the containerd container factory successfully Jan 24 03:00:18.008923 kubelet[2310]: I0124 03:00:18.008817 2310 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 03:00:18.015106 kubelet[2310]: I0124 03:00:18.014367 2310 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 03:00:18.015106 kubelet[2310]: I0124 03:00:18.014528 2310 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 03:00:18.015106 kubelet[2310]: I0124 03:00:18.014619 2310 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 03:00:18.015106 kubelet[2310]: I0124 03:00:18.014637 2310 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 03:00:18.015106 kubelet[2310]: E0124 03:00:18.014778 2310 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 03:00:18.020803 kubelet[2310]: W0124 03:00:18.019936 2310 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.72.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.72.22:6443: connect: connection refused Jan 24 03:00:18.020803 kubelet[2310]: E0124 03:00:18.020003 2310 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.72.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:00:18.028965 kubelet[2310]: I0124 03:00:18.028930 2310 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 03:00:18.029321 kubelet[2310]: I0124 03:00:18.029282 2310 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 03:00:18.029535 kubelet[2310]: I0124 03:00:18.029513 2310 state_mem.go:36] "Initialized new in-memory state store" Jan 24 03:00:18.037850 kubelet[2310]: E0124 03:00:18.037787 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" Jan 24 03:00:18.041434 kubelet[2310]: I0124 03:00:18.041287 2310 policy_none.go:49] "None policy: Start" Jan 24 03:00:18.041547 kubelet[2310]: I0124 03:00:18.041434 2310 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 03:00:18.041547 kubelet[2310]: I0124 03:00:18.041530 2310 state_mem.go:35] "Initializing new in-memory state store" Jan 24 03:00:18.071370 systemd[1]: 
Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 03:00:18.085102 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 03:00:18.090179 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 03:00:18.106413 kubelet[2310]: I0124 03:00:18.105470 2310 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 03:00:18.106413 kubelet[2310]: I0124 03:00:18.105916 2310 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 03:00:18.106413 kubelet[2310]: I0124 03:00:18.105960 2310 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 03:00:18.107892 kubelet[2310]: I0124 03:00:18.107642 2310 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 03:00:18.110747 kubelet[2310]: E0124 03:00:18.110713 2310 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 03:00:18.111023 kubelet[2310]: E0124 03:00:18.110920 2310 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-fpdmo.gb1.brightbox.com\" not found" Jan 24 03:00:18.134931 systemd[1]: Created slice kubepods-burstable-podbcd4b207e815ecc1563656a1334c6adf.slice - libcontainer container kubepods-burstable-podbcd4b207e815ecc1563656a1334c6adf.slice. 
Jan 24 03:00:18.141276 kubelet[2310]: I0124 03:00:18.141188 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcd4b207e815ecc1563656a1334c6adf-ca-certs\") pod \"kube-apiserver-srv-fpdmo.gb1.brightbox.com\" (UID: \"bcd4b207e815ecc1563656a1334c6adf\") " pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.141433 kubelet[2310]: I0124 03:00:18.141302 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcd4b207e815ecc1563656a1334c6adf-usr-share-ca-certificates\") pod \"kube-apiserver-srv-fpdmo.gb1.brightbox.com\" (UID: \"bcd4b207e815ecc1563656a1334c6adf\") " pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.141433 kubelet[2310]: I0124 03:00:18.141340 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-ca-certs\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.141433 kubelet[2310]: I0124 03:00:18.141369 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ec8c448c299c818604a7e75640462e8-kubeconfig\") pod \"kube-scheduler-srv-fpdmo.gb1.brightbox.com\" (UID: \"3ec8c448c299c818604a7e75640462e8\") " pod="kube-system/kube-scheduler-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.141433 kubelet[2310]: I0124 03:00:18.141419 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcd4b207e815ecc1563656a1334c6adf-k8s-certs\") pod 
\"kube-apiserver-srv-fpdmo.gb1.brightbox.com\" (UID: \"bcd4b207e815ecc1563656a1334c6adf\") " pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.141713 kubelet[2310]: I0124 03:00:18.141452 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-flexvolume-dir\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.141713 kubelet[2310]: I0124 03:00:18.141478 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-k8s-certs\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.141713 kubelet[2310]: I0124 03:00:18.141502 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-kubeconfig\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.141713 kubelet[2310]: I0124 03:00:18.141528 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.143999 kubelet[2310]: E0124 03:00:18.143954 
2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.149321 systemd[1]: Created slice kubepods-burstable-pod3ec8c448c299c818604a7e75640462e8.slice - libcontainer container kubepods-burstable-pod3ec8c448c299c818604a7e75640462e8.slice. Jan 24 03:00:18.150778 kubelet[2310]: E0124 03:00:18.150635 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-fpdmo.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.22:6443: connect: connection refused" interval="400ms" Jan 24 03:00:18.153802 kubelet[2310]: E0124 03:00:18.153771 2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.156933 systemd[1]: Created slice kubepods-burstable-pod2c2acca4eaea0a4a1f5cd1cf5cb8cf60.slice - libcontainer container kubepods-burstable-pod2c2acca4eaea0a4a1f5cd1cf5cb8cf60.slice. 
Jan 24 03:00:18.159965 kubelet[2310]: E0124 03:00:18.159650 2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.214008 kubelet[2310]: I0124 03:00:18.213146 2310 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.214528 kubelet[2310]: E0124 03:00:18.213824 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.72.22:6443/api/v1/nodes\": dial tcp 10.243.72.22:6443: connect: connection refused" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.419028 kubelet[2310]: I0124 03:00:18.418367 2310 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.419028 kubelet[2310]: E0124 03:00:18.418892 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.72.22:6443/api/v1/nodes\": dial tcp 10.243.72.22:6443: connect: connection refused" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.447516 containerd[1507]: time="2026-01-24T03:00:18.447297296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-fpdmo.gb1.brightbox.com,Uid:bcd4b207e815ecc1563656a1334c6adf,Namespace:kube-system,Attempt:0,}" Jan 24 03:00:18.472217 containerd[1507]: time="2026-01-24T03:00:18.470789768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-fpdmo.gb1.brightbox.com,Uid:2c2acca4eaea0a4a1f5cd1cf5cb8cf60,Namespace:kube-system,Attempt:0,}" Jan 24 03:00:18.472217 containerd[1507]: time="2026-01-24T03:00:18.471317017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-fpdmo.gb1.brightbox.com,Uid:3ec8c448c299c818604a7e75640462e8,Namespace:kube-system,Attempt:0,}" Jan 24 03:00:18.551970 kubelet[2310]: E0124 03:00:18.551779 2310 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.243.72.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-fpdmo.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.22:6443: connect: connection refused" interval="800ms" Jan 24 03:00:18.823847 kubelet[2310]: I0124 03:00:18.823157 2310 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:18.823847 kubelet[2310]: E0124 03:00:18.823619 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.72.22:6443/api/v1/nodes\": dial tcp 10.243.72.22:6443: connect: connection refused" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:19.009888 kubelet[2310]: W0124 03:00:19.009664 2310 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.72.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-fpdmo.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.72.22:6443: connect: connection refused Jan 24 03:00:19.010532 kubelet[2310]: E0124 03:00:19.009986 2310 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.72.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-fpdmo.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:00:19.100730 kubelet[2310]: W0124 03:00:19.100377 2310 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.72.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.72.22:6443: connect: connection refused Jan 24 03:00:19.100730 kubelet[2310]: E0124 03:00:19.100507 2310 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.243.72.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:00:19.127772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389840964.mount: Deactivated successfully. Jan 24 03:00:19.150988 containerd[1507]: time="2026-01-24T03:00:19.149001709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:00:19.153430 containerd[1507]: time="2026-01-24T03:00:19.153312533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 03:00:19.156008 containerd[1507]: time="2026-01-24T03:00:19.154634957Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:00:19.157425 containerd[1507]: time="2026-01-24T03:00:19.157220551Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:00:19.161323 containerd[1507]: time="2026-01-24T03:00:19.159432325Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:00:19.161323 containerd[1507]: time="2026-01-24T03:00:19.161173805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 03:00:19.163068 containerd[1507]: time="2026-01-24T03:00:19.163022077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 24 03:00:19.164672 containerd[1507]: 
time="2026-01-24T03:00:19.164627375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 03:00:19.166008 containerd[1507]: time="2026-01-24T03:00:19.165956750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 695.003768ms" Jan 24 03:00:19.170079 containerd[1507]: time="2026-01-24T03:00:19.170039975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 698.621391ms" Jan 24 03:00:19.173442 containerd[1507]: time="2026-01-24T03:00:19.173380076Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 725.742739ms" Jan 24 03:00:19.240937 kubelet[2310]: W0124 03:00:19.240829 2310 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.72.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.72.22:6443: connect: connection refused Jan 24 03:00:19.241197 kubelet[2310]: E0124 03:00:19.241166 2310 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.72.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:00:19.355112 kubelet[2310]: E0124 03:00:19.354660 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-fpdmo.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.22:6443: connect: connection refused" interval="1.6s" Jan 24 03:00:19.365442 kubelet[2310]: W0124 03:00:19.365346 2310 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.72.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.72.22:6443: connect: connection refused Jan 24 03:00:19.365907 kubelet[2310]: E0124 03:00:19.365836 2310 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.72.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:00:19.444688 containerd[1507]: time="2026-01-24T03:00:19.443622666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:00:19.444688 containerd[1507]: time="2026-01-24T03:00:19.444623113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:00:19.444688 containerd[1507]: time="2026-01-24T03:00:19.444644978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:19.445009 containerd[1507]: time="2026-01-24T03:00:19.444761774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:19.456917 containerd[1507]: time="2026-01-24T03:00:19.454535318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:00:19.456917 containerd[1507]: time="2026-01-24T03:00:19.456612298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:00:19.456917 containerd[1507]: time="2026-01-24T03:00:19.456636400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:19.456917 containerd[1507]: time="2026-01-24T03:00:19.456771323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:19.461457 containerd[1507]: time="2026-01-24T03:00:19.461197828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:00:19.461457 containerd[1507]: time="2026-01-24T03:00:19.461275553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:00:19.461457 containerd[1507]: time="2026-01-24T03:00:19.461298377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:19.462583 containerd[1507]: time="2026-01-24T03:00:19.462483696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:19.500679 systemd[1]: Started cri-containerd-116ccfb1470cce14ca21efe3af456eee1ace34cd77e449fca6dff03a1b6d69f6.scope - libcontainer container 116ccfb1470cce14ca21efe3af456eee1ace34cd77e449fca6dff03a1b6d69f6. Jan 24 03:00:19.529697 systemd[1]: Started cri-containerd-25bc1c833be323019104f01bab5dde40fb840fe3bb217b4cb491ab6602b3405d.scope - libcontainer container 25bc1c833be323019104f01bab5dde40fb840fe3bb217b4cb491ab6602b3405d. Jan 24 03:00:19.535842 systemd[1]: Started cri-containerd-373a1e66a679b1965d15ba36235f533aadc504362b90e6cf2726950c2cedbbd5.scope - libcontainer container 373a1e66a679b1965d15ba36235f533aadc504362b90e6cf2726950c2cedbbd5. Jan 24 03:00:19.627804 containerd[1507]: time="2026-01-24T03:00:19.627535722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-fpdmo.gb1.brightbox.com,Uid:2c2acca4eaea0a4a1f5cd1cf5cb8cf60,Namespace:kube-system,Attempt:0,} returns sandbox id \"116ccfb1470cce14ca21efe3af456eee1ace34cd77e449fca6dff03a1b6d69f6\"" Jan 24 03:00:19.633411 kubelet[2310]: I0124 03:00:19.632764 2310 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:19.633411 kubelet[2310]: E0124 03:00:19.633344 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.72.22:6443/api/v1/nodes\": dial tcp 10.243.72.22:6443: connect: connection refused" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:19.656179 containerd[1507]: time="2026-01-24T03:00:19.656119426Z" level=info msg="CreateContainer within sandbox \"116ccfb1470cce14ca21efe3af456eee1ace34cd77e449fca6dff03a1b6d69f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 03:00:19.675481 containerd[1507]: time="2026-01-24T03:00:19.675430551Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-fpdmo.gb1.brightbox.com,Uid:bcd4b207e815ecc1563656a1334c6adf,Namespace:kube-system,Attempt:0,} returns sandbox id \"25bc1c833be323019104f01bab5dde40fb840fe3bb217b4cb491ab6602b3405d\"" Jan 24 03:00:19.684496 containerd[1507]: time="2026-01-24T03:00:19.683579957Z" level=info msg="CreateContainer within sandbox \"25bc1c833be323019104f01bab5dde40fb840fe3bb217b4cb491ab6602b3405d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 03:00:19.689680 containerd[1507]: time="2026-01-24T03:00:19.689577530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-fpdmo.gb1.brightbox.com,Uid:3ec8c448c299c818604a7e75640462e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"373a1e66a679b1965d15ba36235f533aadc504362b90e6cf2726950c2cedbbd5\"" Jan 24 03:00:19.694582 containerd[1507]: time="2026-01-24T03:00:19.693957719Z" level=info msg="CreateContainer within sandbox \"373a1e66a679b1965d15ba36235f533aadc504362b90e6cf2726950c2cedbbd5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 03:00:19.714230 containerd[1507]: time="2026-01-24T03:00:19.714167064Z" level=info msg="CreateContainer within sandbox \"116ccfb1470cce14ca21efe3af456eee1ace34cd77e449fca6dff03a1b6d69f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"454be247b245c9173cd3e1ed97b5b482ef57bea912368c4cb5ba556cb3e40504\"" Jan 24 03:00:19.715582 containerd[1507]: time="2026-01-24T03:00:19.715494634Z" level=info msg="StartContainer for \"454be247b245c9173cd3e1ed97b5b482ef57bea912368c4cb5ba556cb3e40504\"" Jan 24 03:00:19.716899 containerd[1507]: time="2026-01-24T03:00:19.716834772Z" level=info msg="CreateContainer within sandbox \"25bc1c833be323019104f01bab5dde40fb840fe3bb217b4cb491ab6602b3405d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"17f0480c3b893d2f322383ea82395f631cf38fdf56bd66cadcf6ba288d57c7d2\"" Jan 24 03:00:19.718952 
containerd[1507]: time="2026-01-24T03:00:19.717371617Z" level=info msg="StartContainer for \"17f0480c3b893d2f322383ea82395f631cf38fdf56bd66cadcf6ba288d57c7d2\"" Jan 24 03:00:19.740309 containerd[1507]: time="2026-01-24T03:00:19.740206590Z" level=info msg="CreateContainer within sandbox \"373a1e66a679b1965d15ba36235f533aadc504362b90e6cf2726950c2cedbbd5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d1d211f7211d07c9fb336c49599897e2c28d5a5c8efd730f1139dc16f0651ea0\"" Jan 24 03:00:19.741542 containerd[1507]: time="2026-01-24T03:00:19.741492777Z" level=info msg="StartContainer for \"d1d211f7211d07c9fb336c49599897e2c28d5a5c8efd730f1139dc16f0651ea0\"" Jan 24 03:00:19.778074 systemd[1]: Started cri-containerd-17f0480c3b893d2f322383ea82395f631cf38fdf56bd66cadcf6ba288d57c7d2.scope - libcontainer container 17f0480c3b893d2f322383ea82395f631cf38fdf56bd66cadcf6ba288d57c7d2. Jan 24 03:00:19.781149 systemd[1]: Started cri-containerd-454be247b245c9173cd3e1ed97b5b482ef57bea912368c4cb5ba556cb3e40504.scope - libcontainer container 454be247b245c9173cd3e1ed97b5b482ef57bea912368c4cb5ba556cb3e40504. Jan 24 03:00:19.811661 systemd[1]: Started cri-containerd-d1d211f7211d07c9fb336c49599897e2c28d5a5c8efd730f1139dc16f0651ea0.scope - libcontainer container d1d211f7211d07c9fb336c49599897e2c28d5a5c8efd730f1139dc16f0651ea0. 
Jan 24 03:00:19.906407 containerd[1507]: time="2026-01-24T03:00:19.906256407Z" level=info msg="StartContainer for \"d1d211f7211d07c9fb336c49599897e2c28d5a5c8efd730f1139dc16f0651ea0\" returns successfully" Jan 24 03:00:19.922815 kubelet[2310]: E0124 03:00:19.922740 2310 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.72.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.72.22:6443: connect: connection refused" logger="UnhandledError" Jan 24 03:00:19.940770 containerd[1507]: time="2026-01-24T03:00:19.940686971Z" level=info msg="StartContainer for \"454be247b245c9173cd3e1ed97b5b482ef57bea912368c4cb5ba556cb3e40504\" returns successfully" Jan 24 03:00:19.941707 containerd[1507]: time="2026-01-24T03:00:19.940702913Z" level=info msg="StartContainer for \"17f0480c3b893d2f322383ea82395f631cf38fdf56bd66cadcf6ba288d57c7d2\" returns successfully" Jan 24 03:00:20.052994 kubelet[2310]: E0124 03:00:20.051952 2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:20.082682 kubelet[2310]: E0124 03:00:20.081495 2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:20.082682 kubelet[2310]: E0124 03:00:20.082078 2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:21.066088 kubelet[2310]: E0124 03:00:21.065886 2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:21.067282 kubelet[2310]: E0124 03:00:21.066046 2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:21.241512 kubelet[2310]: I0124 03:00:21.239674 2310 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:23.003714 kubelet[2310]: E0124 03:00:23.003037 2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:23.864742 kubelet[2310]: E0124 03:00:23.864664 2310 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:24.481060 kubelet[2310]: E0124 03:00:24.480982 2310 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-fpdmo.gb1.brightbox.com\" not found" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:24.536759 kubelet[2310]: I0124 03:00:24.536648 2310 kubelet_node_status.go:78] "Successfully registered node" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:24.536966 kubelet[2310]: E0124 03:00:24.536793 2310 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-fpdmo.gb1.brightbox.com\": node \"srv-fpdmo.gb1.brightbox.com\" not found" Jan 24 03:00:24.540190 kubelet[2310]: I0124 03:00:24.540144 2310 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:24.582096 kubelet[2310]: E0124 03:00:24.581995 2310 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-fpdmo.gb1.brightbox.com\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:24.582096 kubelet[2310]: I0124 03:00:24.582099 2310 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:24.582880 kubelet[2310]: E0124 03:00:24.582634 2310 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-fpdmo.gb1.brightbox.com.188d8b81b9fbb6da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-fpdmo.gb1.brightbox.com,UID:srv-fpdmo.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-fpdmo.gb1.brightbox.com,},FirstTimestamp:2026-01-24 03:00:17.913796314 +0000 UTC m=+1.150030494,LastTimestamp:2026-01-24 03:00:17.913796314 +0000 UTC m=+1.150030494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-fpdmo.gb1.brightbox.com,}" Jan 24 03:00:24.590862 kubelet[2310]: E0124 03:00:24.590794 2310 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-fpdmo.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:24.590862 kubelet[2310]: I0124 03:00:24.590853 2310 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:24.596818 kubelet[2310]: E0124 03:00:24.596756 2310 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 
03:00:24.905801 kubelet[2310]: I0124 03:00:24.904878 2310 apiserver.go:52] "Watching apiserver" Jan 24 03:00:24.940439 kubelet[2310]: I0124 03:00:24.940366 2310 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 03:00:27.326722 systemd[1]: Reloading requested from client PID 2596 ('systemctl') (unit session-11.scope)... Jan 24 03:00:27.326785 systemd[1]: Reloading... Jan 24 03:00:27.477435 zram_generator::config[2644]: No configuration found. Jan 24 03:00:27.663189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 03:00:27.810453 systemd[1]: Reloading finished in 482 ms. Jan 24 03:00:27.885192 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 03:00:27.908030 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 03:00:27.908551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 03:00:27.908676 systemd[1]: kubelet.service: Consumed 1.700s CPU time, 128.6M memory peak, 0B memory swap peak. Jan 24 03:00:27.916155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 03:00:28.310720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 03:00:28.320456 (kubelet)[2699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 03:00:28.486102 kubelet[2699]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 03:00:28.488807 kubelet[2699]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 24 03:00:28.488807 kubelet[2699]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 03:00:28.490095 kubelet[2699]: I0124 03:00:28.489181 2699 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 03:00:28.494864 sudo[2710]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 24 03:00:28.495595 sudo[2710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 24 03:00:28.511141 kubelet[2699]: I0124 03:00:28.511074 2699 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 03:00:28.511141 kubelet[2699]: I0124 03:00:28.511119 2699 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 03:00:28.512360 kubelet[2699]: I0124 03:00:28.512298 2699 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 03:00:28.524070 kubelet[2699]: I0124 03:00:28.522974 2699 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 03:00:28.528141 kubelet[2699]: I0124 03:00:28.528112 2699 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 03:00:28.546857 kubelet[2699]: E0124 03:00:28.546801 2699 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 03:00:28.546857 kubelet[2699]: I0124 03:00:28.546858 2699 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jan 24 03:00:28.569189 kubelet[2699]: I0124 03:00:28.569035 2699 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 03:00:28.570287 kubelet[2699]: I0124 03:00:28.570213 2699 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 03:00:28.570686 kubelet[2699]: I0124 03:00:28.570286 2699 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-fpdmo.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"T
opologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 03:00:28.570938 kubelet[2699]: I0124 03:00:28.570700 2699 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 03:00:28.570938 kubelet[2699]: I0124 03:00:28.570718 2699 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 03:00:28.570938 kubelet[2699]: I0124 03:00:28.570854 2699 state_mem.go:36] "Initialized new in-memory state store" Jan 24 03:00:28.571182 kubelet[2699]: I0124 03:00:28.571138 2699 kubelet.go:446] "Attempting to sync node with API server" Jan 24 03:00:28.575583 kubelet[2699]: I0124 03:00:28.574563 2699 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 03:00:28.575583 kubelet[2699]: I0124 03:00:28.574621 2699 kubelet.go:352] "Adding apiserver pod source" Jan 24 03:00:28.575583 kubelet[2699]: I0124 03:00:28.574652 2699 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 03:00:28.581659 kubelet[2699]: I0124 03:00:28.581617 2699 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 03:00:28.583137 kubelet[2699]: I0124 03:00:28.583112 2699 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 03:00:28.596319 kubelet[2699]: I0124 03:00:28.596265 2699 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 03:00:28.596848 kubelet[2699]: I0124 03:00:28.596827 2699 server.go:1287] "Started kubelet" Jan 24 03:00:28.611692 kubelet[2699]: I0124 03:00:28.611608 2699 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 03:00:28.617821 kubelet[2699]: I0124 03:00:28.617733 2699 server.go:479] "Adding debug handlers to kubelet server" Jan 24 03:00:28.628151 kubelet[2699]: I0124 03:00:28.628053 2699 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 03:00:28.629990 
kubelet[2699]: I0124 03:00:28.628654 2699 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 03:00:28.629990 kubelet[2699]: I0124 03:00:28.628761 2699 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 03:00:28.658074 kubelet[2699]: I0124 03:00:28.657734 2699 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 03:00:28.666066 kubelet[2699]: I0124 03:00:28.662458 2699 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 03:00:28.668423 kubelet[2699]: I0124 03:00:28.662484 2699 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 03:00:28.668423 kubelet[2699]: E0124 03:00:28.662721 2699 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-fpdmo.gb1.brightbox.com\" not found" Jan 24 03:00:28.668423 kubelet[2699]: I0124 03:00:28.668345 2699 reconciler.go:26] "Reconciler: start to sync state" Jan 24 03:00:28.674950 kubelet[2699]: I0124 03:00:28.674911 2699 factory.go:221] Registration of the systemd container factory successfully Jan 24 03:00:28.675115 kubelet[2699]: I0124 03:00:28.675085 2699 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 03:00:28.682218 kubelet[2699]: E0124 03:00:28.681191 2699 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 03:00:28.682218 kubelet[2699]: I0124 03:00:28.682147 2699 factory.go:221] Registration of the containerd container factory successfully Jan 24 03:00:28.731938 kubelet[2699]: I0124 03:00:28.730563 2699 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 24 03:00:28.743757 kubelet[2699]: I0124 03:00:28.742750 2699 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 03:00:28.743757 kubelet[2699]: I0124 03:00:28.742826 2699 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 03:00:28.743757 kubelet[2699]: I0124 03:00:28.742931 2699 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 03:00:28.743757 kubelet[2699]: I0124 03:00:28.742947 2699 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 03:00:28.748585 kubelet[2699]: E0124 03:00:28.748509 2699 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 03:00:28.853553 kubelet[2699]: E0124 03:00:28.851839 2699 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 03:00:28.855561 kubelet[2699]: I0124 03:00:28.855295 2699 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 03:00:28.855561 kubelet[2699]: I0124 03:00:28.855318 2699 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 03:00:28.855561 kubelet[2699]: I0124 03:00:28.855363 2699 state_mem.go:36] "Initialized new in-memory state store" Jan 24 03:00:28.856716 kubelet[2699]: I0124 03:00:28.856315 2699 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 03:00:28.859273 kubelet[2699]: I0124 03:00:28.857174 2699 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 03:00:28.859273 kubelet[2699]: I0124 03:00:28.857247 2699 policy_none.go:49] "None policy: Start" Jan 24 03:00:28.859273 kubelet[2699]: I0124 03:00:28.857306 2699 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 03:00:28.859273 kubelet[2699]: I0124 03:00:28.857360 2699 state_mem.go:35] "Initializing new in-memory state store" Jan 24 03:00:28.859273 
kubelet[2699]: I0124 03:00:28.857641 2699 state_mem.go:75] "Updated machine memory state" Jan 24 03:00:28.874377 kubelet[2699]: I0124 03:00:28.874318 2699 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 03:00:28.874771 kubelet[2699]: I0124 03:00:28.874722 2699 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 03:00:28.874881 kubelet[2699]: I0124 03:00:28.874755 2699 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 03:00:28.899999 kubelet[2699]: I0124 03:00:28.899348 2699 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 03:00:28.906756 kubelet[2699]: E0124 03:00:28.906719 2699 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 03:00:29.049888 kubelet[2699]: I0124 03:00:29.049729 2699 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.054651 kubelet[2699]: I0124 03:00:29.054522 2699 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.067415 kubelet[2699]: I0124 03:00:29.067347 2699 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.068144 kubelet[2699]: I0124 03:00:29.068119 2699 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.076453 kubelet[2699]: I0124 03:00:29.074965 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcd4b207e815ecc1563656a1334c6adf-k8s-certs\") pod \"kube-apiserver-srv-fpdmo.gb1.brightbox.com\" (UID: \"bcd4b207e815ecc1563656a1334c6adf\") " 
pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.076453 kubelet[2699]: I0124 03:00:29.075025 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcd4b207e815ecc1563656a1334c6adf-usr-share-ca-certificates\") pod \"kube-apiserver-srv-fpdmo.gb1.brightbox.com\" (UID: \"bcd4b207e815ecc1563656a1334c6adf\") " pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.076453 kubelet[2699]: I0124 03:00:29.075064 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-flexvolume-dir\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.076453 kubelet[2699]: I0124 03:00:29.075101 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-k8s-certs\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.076453 kubelet[2699]: I0124 03:00:29.075128 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-kubeconfig\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.076845 kubelet[2699]: I0124 03:00:29.075155 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.076845 kubelet[2699]: I0124 03:00:29.075187 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcd4b207e815ecc1563656a1334c6adf-ca-certs\") pod \"kube-apiserver-srv-fpdmo.gb1.brightbox.com\" (UID: \"bcd4b207e815ecc1563656a1334c6adf\") " pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.076845 kubelet[2699]: I0124 03:00:29.075215 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c2acca4eaea0a4a1f5cd1cf5cb8cf60-ca-certs\") pod \"kube-controller-manager-srv-fpdmo.gb1.brightbox.com\" (UID: \"2c2acca4eaea0a4a1f5cd1cf5cb8cf60\") " pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.076845 kubelet[2699]: I0124 03:00:29.075244 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ec8c448c299c818604a7e75640462e8-kubeconfig\") pod \"kube-scheduler-srv-fpdmo.gb1.brightbox.com\" (UID: \"3ec8c448c299c818604a7e75640462e8\") " pod="kube-system/kube-scheduler-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.079425 kubelet[2699]: W0124 03:00:29.078841 2699 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 24 03:00:29.094413 kubelet[2699]: W0124 03:00:29.091470 2699 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: 
[must not contain dots] Jan 24 03:00:29.103866 kubelet[2699]: W0124 03:00:29.103720 2699 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 24 03:00:29.106348 kubelet[2699]: I0124 03:00:29.106304 2699 kubelet_node_status.go:124] "Node was previously registered" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.106471 kubelet[2699]: I0124 03:00:29.106459 2699 kubelet_node_status.go:78] "Successfully registered node" node="srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.535693 sudo[2710]: pam_unix(sudo:session): session closed for user root Jan 24 03:00:29.589047 kubelet[2699]: I0124 03:00:29.588959 2699 apiserver.go:52] "Watching apiserver" Jan 24 03:00:29.668170 kubelet[2699]: I0124 03:00:29.668114 2699 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 03:00:29.751299 kubelet[2699]: I0124 03:00:29.751210 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" podStartSLOduration=0.751133486 podStartE2EDuration="751.133486ms" podCreationTimestamp="2026-01-24 03:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:00:29.73568768 +0000 UTC m=+1.372834513" watchObservedRunningTime="2026-01-24 03:00:29.751133486 +0000 UTC m=+1.388280316" Jan 24 03:00:29.767720 kubelet[2699]: I0124 03:00:29.767664 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-fpdmo.gb1.brightbox.com" podStartSLOduration=0.767647178 podStartE2EDuration="767.647178ms" podCreationTimestamp="2026-01-24 03:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:00:29.766983075 +0000 UTC m=+1.404129909" 
watchObservedRunningTime="2026-01-24 03:00:29.767647178 +0000 UTC m=+1.404794018" Jan 24 03:00:29.768038 kubelet[2699]: I0124 03:00:29.767760 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-fpdmo.gb1.brightbox.com" podStartSLOduration=0.767751988 podStartE2EDuration="767.751988ms" podCreationTimestamp="2026-01-24 03:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:00:29.752420286 +0000 UTC m=+1.389567121" watchObservedRunningTime="2026-01-24 03:00:29.767751988 +0000 UTC m=+1.404898834" Jan 24 03:00:29.795139 kubelet[2699]: I0124 03:00:29.794673 2699 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:29.811946 kubelet[2699]: W0124 03:00:29.811759 2699 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 24 03:00:29.811946 kubelet[2699]: E0124 03:00:29.811826 2699 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-fpdmo.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-fpdmo.gb1.brightbox.com" Jan 24 03:00:31.578527 sudo[1765]: pam_unix(sudo:session): session closed for user root Jan 24 03:00:31.685188 sshd[1762]: pam_unix(sshd:session): session closed for user core Jan 24 03:00:31.693307 systemd[1]: sshd@9-10.243.72.22:22-20.161.92.111:52078.service: Deactivated successfully. Jan 24 03:00:31.697827 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 03:00:31.698177 systemd[1]: session-11.scope: Consumed 6.646s CPU time, 142.4M memory peak, 0B memory swap peak. Jan 24 03:00:31.700690 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Jan 24 03:00:31.707830 systemd-logind[1487]: Removed session 11. 
Jan 24 03:00:32.452960 kubelet[2699]: I0124 03:00:32.452917 2699 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 03:00:32.455876 containerd[1507]: time="2026-01-24T03:00:32.455768271Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 03:00:32.456903 kubelet[2699]: I0124 03:00:32.456377 2699 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 03:00:33.290857 systemd[1]: Created slice kubepods-besteffort-podc449a54d_663e_4385_afd7_9a4707a0f152.slice - libcontainer container kubepods-besteffort-podc449a54d_663e_4385_afd7_9a4707a0f152.slice. Jan 24 03:00:33.303844 kubelet[2699]: I0124 03:00:33.303586 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c449a54d-663e-4385-afd7-9a4707a0f152-kube-proxy\") pod \"kube-proxy-c6bp6\" (UID: \"c449a54d-663e-4385-afd7-9a4707a0f152\") " pod="kube-system/kube-proxy-c6bp6" Jan 24 03:00:33.303844 kubelet[2699]: I0124 03:00:33.303655 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c449a54d-663e-4385-afd7-9a4707a0f152-xtables-lock\") pod \"kube-proxy-c6bp6\" (UID: \"c449a54d-663e-4385-afd7-9a4707a0f152\") " pod="kube-system/kube-proxy-c6bp6" Jan 24 03:00:33.303844 kubelet[2699]: I0124 03:00:33.303684 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c449a54d-663e-4385-afd7-9a4707a0f152-lib-modules\") pod \"kube-proxy-c6bp6\" (UID: \"c449a54d-663e-4385-afd7-9a4707a0f152\") " pod="kube-system/kube-proxy-c6bp6" Jan 24 03:00:33.303844 kubelet[2699]: I0124 03:00:33.303714 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-srqz8\" (UniqueName: \"kubernetes.io/projected/c449a54d-663e-4385-afd7-9a4707a0f152-kube-api-access-srqz8\") pod \"kube-proxy-c6bp6\" (UID: \"c449a54d-663e-4385-afd7-9a4707a0f152\") " pod="kube-system/kube-proxy-c6bp6" Jan 24 03:00:33.313260 systemd[1]: Created slice kubepods-burstable-pod0a2fce04_ab43_4673_b17c_904c779364c0.slice - libcontainer container kubepods-burstable-pod0a2fce04_ab43_4673_b17c_904c779364c0.slice. Jan 24 03:00:33.403985 kubelet[2699]: I0124 03:00:33.403934 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-xtables-lock\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404234 kubelet[2699]: I0124 03:00:33.404003 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-host-proc-sys-kernel\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404234 kubelet[2699]: I0124 03:00:33.404041 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a2fce04-ab43-4673-b17c-904c779364c0-hubble-tls\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404234 kubelet[2699]: I0124 03:00:33.404200 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-cgroup\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404234 kubelet[2699]: I0124 
03:00:33.404230 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-etc-cni-netd\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404528 kubelet[2699]: I0124 03:00:33.404285 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-config-path\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404528 kubelet[2699]: I0124 03:00:33.404315 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncs9v\" (UniqueName: \"kubernetes.io/projected/0a2fce04-ab43-4673-b17c-904c779364c0-kube-api-access-ncs9v\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404528 kubelet[2699]: I0124 03:00:33.404342 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-hostproc\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404528 kubelet[2699]: I0124 03:00:33.404381 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-run\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404528 kubelet[2699]: I0124 03:00:33.404430 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-host-proc-sys-net\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404528 kubelet[2699]: I0124 03:00:33.404483 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-bpf-maps\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404817 kubelet[2699]: I0124 03:00:33.404511 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-lib-modules\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404817 kubelet[2699]: I0124 03:00:33.404578 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cni-path\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.404817 kubelet[2699]: I0124 03:00:33.404623 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a2fce04-ab43-4673-b17c-904c779364c0-clustermesh-secrets\") pod \"cilium-5n75p\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") " pod="kube-system/cilium-5n75p" Jan 24 03:00:33.479904 systemd[1]: Created slice kubepods-besteffort-pod6bfd88f7_0781_4754_a785_60cd3dfc5296.slice - libcontainer container kubepods-besteffort-pod6bfd88f7_0781_4754_a785_60cd3dfc5296.slice. 
Jan 24 03:00:33.505182 kubelet[2699]: I0124 03:00:33.505125 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrn6z\" (UniqueName: \"kubernetes.io/projected/6bfd88f7-0781-4754-a785-60cd3dfc5296-kube-api-access-mrn6z\") pod \"cilium-operator-6c4d7847fc-rs4gx\" (UID: \"6bfd88f7-0781-4754-a785-60cd3dfc5296\") " pod="kube-system/cilium-operator-6c4d7847fc-rs4gx" Jan 24 03:00:33.508476 kubelet[2699]: I0124 03:00:33.505417 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bfd88f7-0781-4754-a785-60cd3dfc5296-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rs4gx\" (UID: \"6bfd88f7-0781-4754-a785-60cd3dfc5296\") " pod="kube-system/cilium-operator-6c4d7847fc-rs4gx" Jan 24 03:00:33.608005 containerd[1507]: time="2026-01-24T03:00:33.607795842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6bp6,Uid:c449a54d-663e-4385-afd7-9a4707a0f152,Namespace:kube-system,Attempt:0,}" Jan 24 03:00:33.633414 containerd[1507]: time="2026-01-24T03:00:33.631342501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5n75p,Uid:0a2fce04-ab43-4673-b17c-904c779364c0,Namespace:kube-system,Attempt:0,}" Jan 24 03:00:33.667097 containerd[1507]: time="2026-01-24T03:00:33.666939546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:00:33.667537 containerd[1507]: time="2026-01-24T03:00:33.667385864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:00:33.667647 containerd[1507]: time="2026-01-24T03:00:33.667555422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:33.667790 containerd[1507]: time="2026-01-24T03:00:33.667745610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:33.701336 containerd[1507]: time="2026-01-24T03:00:33.701191422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:00:33.701336 containerd[1507]: time="2026-01-24T03:00:33.701277549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:00:33.701748 containerd[1507]: time="2026-01-24T03:00:33.701348440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:33.702530 containerd[1507]: time="2026-01-24T03:00:33.702357863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:33.714638 systemd[1]: Started cri-containerd-7974ce4d05f1f9c1e14a13756ac6473d452504d70d3052c6700a95791f876aaf.scope - libcontainer container 7974ce4d05f1f9c1e14a13756ac6473d452504d70d3052c6700a95791f876aaf. Jan 24 03:00:33.749866 systemd[1]: Started cri-containerd-3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f.scope - libcontainer container 3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f. 
Jan 24 03:00:33.778777 containerd[1507]: time="2026-01-24T03:00:33.778716897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c6bp6,Uid:c449a54d-663e-4385-afd7-9a4707a0f152,Namespace:kube-system,Attempt:0,} returns sandbox id \"7974ce4d05f1f9c1e14a13756ac6473d452504d70d3052c6700a95791f876aaf\"" Jan 24 03:00:33.788025 containerd[1507]: time="2026-01-24T03:00:33.787803767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rs4gx,Uid:6bfd88f7-0781-4754-a785-60cd3dfc5296,Namespace:kube-system,Attempt:0,}" Jan 24 03:00:33.790667 containerd[1507]: time="2026-01-24T03:00:33.790231319Z" level=info msg="CreateContainer within sandbox \"7974ce4d05f1f9c1e14a13756ac6473d452504d70d3052c6700a95791f876aaf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 03:00:33.835277 containerd[1507]: time="2026-01-24T03:00:33.835186065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5n75p,Uid:0a2fce04-ab43-4673-b17c-904c779364c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\"" Jan 24 03:00:33.844025 containerd[1507]: time="2026-01-24T03:00:33.843901203Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 24 03:00:33.873560 containerd[1507]: time="2026-01-24T03:00:33.873165995Z" level=info msg="CreateContainer within sandbox \"7974ce4d05f1f9c1e14a13756ac6473d452504d70d3052c6700a95791f876aaf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a9c17033f70cb92360131cb311997092ddc0ee6c3c7fe5b97fd02602a668ff13\"" Jan 24 03:00:33.876991 containerd[1507]: time="2026-01-24T03:00:33.876339599Z" level=info msg="StartContainer for \"a9c17033f70cb92360131cb311997092ddc0ee6c3c7fe5b97fd02602a668ff13\"" Jan 24 03:00:33.896497 containerd[1507]: time="2026-01-24T03:00:33.894686217Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:00:33.896497 containerd[1507]: time="2026-01-24T03:00:33.894898037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:00:33.896497 containerd[1507]: time="2026-01-24T03:00:33.894925651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:33.896497 containerd[1507]: time="2026-01-24T03:00:33.895099698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:00:33.934604 systemd[1]: Started cri-containerd-a9c17033f70cb92360131cb311997092ddc0ee6c3c7fe5b97fd02602a668ff13.scope - libcontainer container a9c17033f70cb92360131cb311997092ddc0ee6c3c7fe5b97fd02602a668ff13. Jan 24 03:00:33.946655 systemd[1]: Started cri-containerd-ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad.scope - libcontainer container ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad. 
Jan 24 03:00:34.009753 containerd[1507]: time="2026-01-24T03:00:34.009578801Z" level=info msg="StartContainer for \"a9c17033f70cb92360131cb311997092ddc0ee6c3c7fe5b97fd02602a668ff13\" returns successfully" Jan 24 03:00:34.056776 containerd[1507]: time="2026-01-24T03:00:34.056728136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rs4gx,Uid:6bfd88f7-0781-4754-a785-60cd3dfc5296,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad\"" Jan 24 03:00:34.859310 kubelet[2699]: I0124 03:00:34.859180 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c6bp6" podStartSLOduration=1.859128551 podStartE2EDuration="1.859128551s" podCreationTimestamp="2026-01-24 03:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:00:34.856873823 +0000 UTC m=+6.494020662" watchObservedRunningTime="2026-01-24 03:00:34.859128551 +0000 UTC m=+6.496275368" Jan 24 03:00:41.580353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157203293.mount: Deactivated successfully. 
Jan 24 03:00:45.417460 containerd[1507]: time="2026-01-24T03:00:45.417043602Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:00:45.420449 containerd[1507]: time="2026-01-24T03:00:45.420370390Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 24 03:00:45.421411 containerd[1507]: time="2026-01-24T03:00:45.421332595Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:00:45.425420 containerd[1507]: time="2026-01-24T03:00:45.425222144Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.579651963s" Jan 24 03:00:45.425420 containerd[1507]: time="2026-01-24T03:00:45.425292464Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 24 03:00:45.441926 containerd[1507]: time="2026-01-24T03:00:45.441719042Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 03:00:45.443053 containerd[1507]: time="2026-01-24T03:00:45.442973064Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 24 03:00:45.550124 containerd[1507]: time="2026-01-24T03:00:45.549881928Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\"" Jan 24 03:00:45.551970 containerd[1507]: time="2026-01-24T03:00:45.550850073Z" level=info msg="StartContainer for \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\"" Jan 24 03:00:45.703191 systemd[1]: Started cri-containerd-a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc.scope - libcontainer container a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc. Jan 24 03:00:45.760615 containerd[1507]: time="2026-01-24T03:00:45.760551063Z" level=info msg="StartContainer for \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\" returns successfully" Jan 24 03:00:45.781560 systemd[1]: cri-containerd-a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc.scope: Deactivated successfully. Jan 24 03:00:45.850048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc-rootfs.mount: Deactivated successfully. 
Jan 24 03:00:46.170967 containerd[1507]: time="2026-01-24T03:00:46.155013236Z" level=info msg="shim disconnected" id=a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc namespace=k8s.io Jan 24 03:00:46.170967 containerd[1507]: time="2026-01-24T03:00:46.170727052Z" level=warning msg="cleaning up after shim disconnected" id=a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc namespace=k8s.io Jan 24 03:00:46.170967 containerd[1507]: time="2026-01-24T03:00:46.170762041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:00:46.945153 containerd[1507]: time="2026-01-24T03:00:46.943786076Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 03:00:46.981309 containerd[1507]: time="2026-01-24T03:00:46.981198160Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\"" Jan 24 03:00:46.982697 containerd[1507]: time="2026-01-24T03:00:46.982122467Z" level=info msg="StartContainer for \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\"" Jan 24 03:00:47.057692 systemd[1]: Started cri-containerd-81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4.scope - libcontainer container 81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4. Jan 24 03:00:47.214356 containerd[1507]: time="2026-01-24T03:00:47.212074108Z" level=info msg="StartContainer for \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\" returns successfully" Jan 24 03:00:47.236189 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 03:00:47.236933 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 24 03:00:47.237140 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 24 03:00:47.253169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 03:00:47.254117 systemd[1]: cri-containerd-81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4.scope: Deactivated successfully. Jan 24 03:00:47.334536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821555403.mount: Deactivated successfully. Jan 24 03:00:47.402309 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 03:00:47.413364 containerd[1507]: time="2026-01-24T03:00:47.413196072Z" level=info msg="shim disconnected" id=81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4 namespace=k8s.io Jan 24 03:00:47.413770 containerd[1507]: time="2026-01-24T03:00:47.413351616Z" level=warning msg="cleaning up after shim disconnected" id=81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4 namespace=k8s.io Jan 24 03:00:47.413868 containerd[1507]: time="2026-01-24T03:00:47.413769249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:00:47.958454 containerd[1507]: time="2026-01-24T03:00:47.958380014Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 03:00:47.978260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4-rootfs.mount: Deactivated successfully. 
Jan 24 03:00:48.137849 containerd[1507]: time="2026-01-24T03:00:48.137781951Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\"" Jan 24 03:00:48.141417 containerd[1507]: time="2026-01-24T03:00:48.139614542Z" level=info msg="StartContainer for \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\"" Jan 24 03:00:48.264836 systemd[1]: Started cri-containerd-b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521.scope - libcontainer container b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521. Jan 24 03:00:48.361962 systemd[1]: cri-containerd-b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521.scope: Deactivated successfully. Jan 24 03:00:48.371562 containerd[1507]: time="2026-01-24T03:00:48.370789259Z" level=info msg="StartContainer for \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\" returns successfully" Jan 24 03:00:48.417161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521-rootfs.mount: Deactivated successfully. 
Jan 24 03:00:48.548073 containerd[1507]: time="2026-01-24T03:00:48.547890262Z" level=info msg="shim disconnected" id=b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521 namespace=k8s.io Jan 24 03:00:48.548073 containerd[1507]: time="2026-01-24T03:00:48.547986565Z" level=warning msg="cleaning up after shim disconnected" id=b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521 namespace=k8s.io Jan 24 03:00:48.548073 containerd[1507]: time="2026-01-24T03:00:48.548005028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:00:48.603284 containerd[1507]: time="2026-01-24T03:00:48.603190873Z" level=warning msg="cleanup warnings time=\"2026-01-24T03:00:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 03:00:48.799813 containerd[1507]: time="2026-01-24T03:00:48.799141325Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:00:48.801189 containerd[1507]: time="2026-01-24T03:00:48.800932493Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 24 03:00:48.805535 containerd[1507]: time="2026-01-24T03:00:48.804883650Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 03:00:48.808719 containerd[1507]: time="2026-01-24T03:00:48.807907366Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", 
repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.364870986s" Jan 24 03:00:48.808719 containerd[1507]: time="2026-01-24T03:00:48.807979308Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 24 03:00:48.812814 containerd[1507]: time="2026-01-24T03:00:48.812447060Z" level=info msg="CreateContainer within sandbox \"ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 24 03:00:48.852312 containerd[1507]: time="2026-01-24T03:00:48.852238668Z" level=info msg="CreateContainer within sandbox \"ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\"" Jan 24 03:00:48.855135 containerd[1507]: time="2026-01-24T03:00:48.855076258Z" level=info msg="StartContainer for \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\"" Jan 24 03:00:48.911721 systemd[1]: Started cri-containerd-b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4.scope - libcontainer container b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4. 
Jan 24 03:00:48.990879 containerd[1507]: time="2026-01-24T03:00:48.989844186Z" level=info msg="StartContainer for \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\" returns successfully" Jan 24 03:00:49.001018 containerd[1507]: time="2026-01-24T03:00:48.999618343Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 03:00:49.038493 containerd[1507]: time="2026-01-24T03:00:49.038262348Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\"" Jan 24 03:00:49.039522 containerd[1507]: time="2026-01-24T03:00:49.039166171Z" level=info msg="StartContainer for \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\"" Jan 24 03:00:49.137682 systemd[1]: Started cri-containerd-1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641.scope - libcontainer container 1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641. Jan 24 03:00:49.206676 systemd[1]: cri-containerd-1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641.scope: Deactivated successfully. 
Jan 24 03:00:49.222294 containerd[1507]: time="2026-01-24T03:00:49.211022280Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a2fce04_ab43_4673_b17c_904c779364c0.slice/cri-containerd-1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641.scope/memory.events\": no such file or directory" Jan 24 03:00:49.225916 kubelet[2699]: E0124 03:00:49.225589 2699 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a2fce04_ab43_4673_b17c_904c779364c0.slice/cri-containerd-1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641.scope\": RecentStats: unable to find data in memory cache]" Jan 24 03:00:49.229504 containerd[1507]: time="2026-01-24T03:00:49.228918786Z" level=info msg="StartContainer for \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\" returns successfully" Jan 24 03:00:49.294778 containerd[1507]: time="2026-01-24T03:00:49.294691512Z" level=info msg="shim disconnected" id=1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641 namespace=k8s.io Jan 24 03:00:49.295430 containerd[1507]: time="2026-01-24T03:00:49.295234914Z" level=warning msg="cleaning up after shim disconnected" id=1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641 namespace=k8s.io Jan 24 03:00:49.295430 containerd[1507]: time="2026-01-24T03:00:49.295276252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:00:49.342306 containerd[1507]: time="2026-01-24T03:00:49.342183087Z" level=warning msg="cleanup warnings time=\"2026-01-24T03:00:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 03:00:49.977145 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641-rootfs.mount: Deactivated successfully. Jan 24 03:00:50.018703 containerd[1507]: time="2026-01-24T03:00:50.018425597Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 03:00:50.144567 containerd[1507]: time="2026-01-24T03:00:50.144497163Z" level=info msg="CreateContainer within sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\"" Jan 24 03:00:50.145443 containerd[1507]: time="2026-01-24T03:00:50.145296628Z" level=info msg="StartContainer for \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\"" Jan 24 03:00:50.261678 systemd[1]: Started cri-containerd-46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11.scope - libcontainer container 46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11. 
Jan 24 03:00:50.416365 containerd[1507]: time="2026-01-24T03:00:50.416295959Z" level=info msg="StartContainer for \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\" returns successfully" Jan 24 03:00:50.967967 kubelet[2699]: I0124 03:00:50.967924 2699 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 03:00:51.094826 kubelet[2699]: I0124 03:00:51.084147 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5n75p" podStartSLOduration=6.4880231219999995 podStartE2EDuration="18.084097347s" podCreationTimestamp="2026-01-24 03:00:33 +0000 UTC" firstStartedPulling="2026-01-24 03:00:33.840522125 +0000 UTC m=+5.477668937" lastFinishedPulling="2026-01-24 03:00:45.436596336 +0000 UTC m=+17.073743162" observedRunningTime="2026-01-24 03:00:51.073630161 +0000 UTC m=+22.710777005" watchObservedRunningTime="2026-01-24 03:00:51.084097347 +0000 UTC m=+22.721244180" Jan 24 03:00:51.094826 kubelet[2699]: I0124 03:00:51.087034 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rs4gx" podStartSLOduration=3.337995281 podStartE2EDuration="18.087014424s" podCreationTimestamp="2026-01-24 03:00:33 +0000 UTC" firstStartedPulling="2026-01-24 03:00:34.059901138 +0000 UTC m=+5.697047950" lastFinishedPulling="2026-01-24 03:00:48.808920281 +0000 UTC m=+20.446067093" observedRunningTime="2026-01-24 03:00:50.255207231 +0000 UTC m=+21.892354061" watchObservedRunningTime="2026-01-24 03:00:51.087014424 +0000 UTC m=+22.724161289" Jan 24 03:00:51.140883 systemd[1]: Created slice kubepods-burstable-pod3335202a_1c3c_488b_9f43_b2e00fe3ae4c.slice - libcontainer container kubepods-burstable-pod3335202a_1c3c_488b_9f43_b2e00fe3ae4c.slice. Jan 24 03:00:51.152919 systemd[1]: Created slice kubepods-burstable-pod1dc6d73a_a419_41c4_971e_3d360f89c925.slice - libcontainer container kubepods-burstable-pod1dc6d73a_a419_41c4_971e_3d360f89c925.slice. 
Jan 24 03:00:51.277445 kubelet[2699]: I0124 03:00:51.276329 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3335202a-1c3c-488b-9f43-b2e00fe3ae4c-config-volume\") pod \"coredns-668d6bf9bc-s9ttx\" (UID: \"3335202a-1c3c-488b-9f43-b2e00fe3ae4c\") " pod="kube-system/coredns-668d6bf9bc-s9ttx" Jan 24 03:00:51.277445 kubelet[2699]: I0124 03:00:51.276443 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dc6d73a-a419-41c4-971e-3d360f89c925-config-volume\") pod \"coredns-668d6bf9bc-zvsgh\" (UID: \"1dc6d73a-a419-41c4-971e-3d360f89c925\") " pod="kube-system/coredns-668d6bf9bc-zvsgh" Jan 24 03:00:51.277445 kubelet[2699]: I0124 03:00:51.276493 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5lm\" (UniqueName: \"kubernetes.io/projected/3335202a-1c3c-488b-9f43-b2e00fe3ae4c-kube-api-access-jl5lm\") pod \"coredns-668d6bf9bc-s9ttx\" (UID: \"3335202a-1c3c-488b-9f43-b2e00fe3ae4c\") " pod="kube-system/coredns-668d6bf9bc-s9ttx" Jan 24 03:00:51.277445 kubelet[2699]: I0124 03:00:51.276547 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnvr9\" (UniqueName: \"kubernetes.io/projected/1dc6d73a-a419-41c4-971e-3d360f89c925-kube-api-access-mnvr9\") pod \"coredns-668d6bf9bc-zvsgh\" (UID: \"1dc6d73a-a419-41c4-971e-3d360f89c925\") " pod="kube-system/coredns-668d6bf9bc-zvsgh" Jan 24 03:00:51.440859 systemd[1]: Started sshd@11-10.243.72.22:22-176.120.22.13:42502.service - OpenSSH per-connection server daemon (176.120.22.13:42502). 
Jan 24 03:00:51.450479 containerd[1507]: time="2026-01-24T03:00:51.449194182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9ttx,Uid:3335202a-1c3c-488b-9f43-b2e00fe3ae4c,Namespace:kube-system,Attempt:0,}" Jan 24 03:00:51.464332 containerd[1507]: time="2026-01-24T03:00:51.463797606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvsgh,Uid:1dc6d73a-a419-41c4-971e-3d360f89c925,Namespace:kube-system,Attempt:0,}" Jan 24 03:00:53.043356 sshd[3517]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=176.120.22.13 user=root Jan 24 03:00:53.817529 systemd-networkd[1432]: cilium_host: Link UP Jan 24 03:00:53.818549 systemd-networkd[1432]: cilium_net: Link UP Jan 24 03:00:53.818883 systemd-networkd[1432]: cilium_net: Gained carrier Jan 24 03:00:53.822862 systemd-networkd[1432]: cilium_host: Gained carrier Jan 24 03:00:53.926618 systemd-networkd[1432]: cilium_host: Gained IPv6LL Jan 24 03:00:54.003288 systemd-networkd[1432]: cilium_vxlan: Link UP Jan 24 03:00:54.003300 systemd-networkd[1432]: cilium_vxlan: Gained carrier Jan 24 03:00:54.622536 kernel: NET: Registered PF_ALG protocol family Jan 24 03:00:54.846787 systemd-networkd[1432]: cilium_net: Gained IPv6LL Jan 24 03:00:55.142903 sshd[3459]: PAM: Permission denied for root from 176.120.22.13 Jan 24 03:00:55.618219 sshd[3459]: Connection reset by authenticating user root 176.120.22.13 port 42502 [preauth] Jan 24 03:00:55.622336 systemd[1]: sshd@11-10.243.72.22:22-176.120.22.13:42502.service: Deactivated successfully. Jan 24 03:00:55.723728 systemd[1]: Started sshd@12-10.243.72.22:22-176.120.22.13:41558.service - OpenSSH per-connection server daemon (176.120.22.13:41558). 
Jan 24 03:00:55.742613 systemd-networkd[1432]: cilium_vxlan: Gained IPv6LL Jan 24 03:00:55.809624 systemd-networkd[1432]: lxc_health: Link UP Jan 24 03:00:55.810244 systemd-networkd[1432]: lxc_health: Gained carrier Jan 24 03:00:56.261771 systemd-networkd[1432]: lxc5547ce0ee531: Link UP Jan 24 03:00:56.275428 kernel: eth0: renamed from tmpf487a Jan 24 03:00:56.293842 systemd-networkd[1432]: lxc5547ce0ee531: Gained carrier Jan 24 03:00:56.330827 systemd-networkd[1432]: lxcb087f77de88d: Link UP Jan 24 03:00:56.340555 kernel: eth0: renamed from tmp1ed1c Jan 24 03:00:56.352909 systemd-networkd[1432]: lxcb087f77de88d: Gained carrier Jan 24 03:00:57.086603 systemd-networkd[1432]: lxc_health: Gained IPv6LL Jan 24 03:00:57.221863 systemd[1]: Started sshd@13-10.243.72.22:22-159.223.6.232:55974.service - OpenSSH per-connection server daemon (159.223.6.232:55974). Jan 24 03:00:57.358509 sshd[3897]: Invalid user webmaster from 159.223.6.232 port 55974 Jan 24 03:00:57.377802 sshd[3897]: Connection closed by invalid user webmaster 159.223.6.232 port 55974 [preauth] Jan 24 03:00:57.384197 systemd[1]: sshd@13-10.243.72.22:22-159.223.6.232:55974.service: Deactivated successfully. Jan 24 03:00:57.918719 systemd-networkd[1432]: lxc5547ce0ee531: Gained IPv6LL Jan 24 03:00:58.238720 systemd-networkd[1432]: lxcb087f77de88d: Gained IPv6LL Jan 24 03:00:58.350208 sshd[3904]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=176.120.22.13 user=root Jan 24 03:01:00.608303 sshd[3854]: PAM: Permission denied for root from 176.120.22.13 Jan 24 03:01:00.885770 sshd[3854]: Connection reset by authenticating user root 176.120.22.13 port 41558 [preauth] Jan 24 03:01:00.889222 systemd[1]: sshd@12-10.243.72.22:22-176.120.22.13:41558.service: Deactivated successfully. Jan 24 03:01:00.976877 systemd[1]: Started sshd@14-10.243.72.22:22-176.120.22.13:41584.service - OpenSSH per-connection server daemon (176.120.22.13:41584). 
Jan 24 03:01:02.162097 containerd[1507]: time="2026-01-24T03:01:02.161846048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:01:02.163759 containerd[1507]: time="2026-01-24T03:01:02.162321401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:01:02.163759 containerd[1507]: time="2026-01-24T03:01:02.162410347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:01:02.166414 containerd[1507]: time="2026-01-24T03:01:02.164301874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:01:02.198035 containerd[1507]: time="2026-01-24T03:01:02.197864790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:01:02.198380 containerd[1507]: time="2026-01-24T03:01:02.198302287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:01:02.198590 containerd[1507]: time="2026-01-24T03:01:02.198535306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:01:02.203570 containerd[1507]: time="2026-01-24T03:01:02.202293414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:01:02.274179 systemd[1]: Started cri-containerd-1ed1c38ff14544fe2d31ef7b7b14392cdbc4beb5bee5fc9feb3d865e7def4109.scope - libcontainer container 1ed1c38ff14544fe2d31ef7b7b14392cdbc4beb5bee5fc9feb3d865e7def4109. 
Jan 24 03:01:02.298541 systemd[1]: Started cri-containerd-f487a0f7fdcff3c3498cacb2a202b9edba59aad1d07367ee73410b3eb1bb564a.scope - libcontainer container f487a0f7fdcff3c3498cacb2a202b9edba59aad1d07367ee73410b3eb1bb564a. Jan 24 03:01:02.459158 containerd[1507]: time="2026-01-24T03:01:02.458019255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvsgh,Uid:1dc6d73a-a419-41c4-971e-3d360f89c925,Namespace:kube-system,Attempt:0,} returns sandbox id \"f487a0f7fdcff3c3498cacb2a202b9edba59aad1d07367ee73410b3eb1bb564a\"" Jan 24 03:01:02.470024 containerd[1507]: time="2026-01-24T03:01:02.469380733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s9ttx,Uid:3335202a-1c3c-488b-9f43-b2e00fe3ae4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ed1c38ff14544fe2d31ef7b7b14392cdbc4beb5bee5fc9feb3d865e7def4109\"" Jan 24 03:01:02.470024 containerd[1507]: time="2026-01-24T03:01:02.469759642Z" level=info msg="CreateContainer within sandbox \"f487a0f7fdcff3c3498cacb2a202b9edba59aad1d07367ee73410b3eb1bb564a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 03:01:02.483927 containerd[1507]: time="2026-01-24T03:01:02.483405498Z" level=info msg="CreateContainer within sandbox \"1ed1c38ff14544fe2d31ef7b7b14392cdbc4beb5bee5fc9feb3d865e7def4109\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 03:01:02.522587 containerd[1507]: time="2026-01-24T03:01:02.521291884Z" level=info msg="CreateContainer within sandbox \"f487a0f7fdcff3c3498cacb2a202b9edba59aad1d07367ee73410b3eb1bb564a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6f123e8c617b8d7c99b349f97857eea835ec4e1b63f898312cfecf5adba0909\"" Jan 24 03:01:02.523490 containerd[1507]: time="2026-01-24T03:01:02.523149832Z" level=info msg="StartContainer for \"f6f123e8c617b8d7c99b349f97857eea835ec4e1b63f898312cfecf5adba0909\"" Jan 24 03:01:02.524719 containerd[1507]: time="2026-01-24T03:01:02.524491598Z" level=info 
msg="CreateContainer within sandbox \"1ed1c38ff14544fe2d31ef7b7b14392cdbc4beb5bee5fc9feb3d865e7def4109\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"380bb722f65d8e9effd16ffbe1283b146f8da21c3eae50cceceab05b0bf8d211\"" Jan 24 03:01:02.526559 containerd[1507]: time="2026-01-24T03:01:02.525767189Z" level=info msg="StartContainer for \"380bb722f65d8e9effd16ffbe1283b146f8da21c3eae50cceceab05b0bf8d211\"" Jan 24 03:01:02.565549 sshd[3913]: Invalid user onlime_r from 176.120.22.13 port 41584 Jan 24 03:01:02.610656 systemd[1]: Started cri-containerd-380bb722f65d8e9effd16ffbe1283b146f8da21c3eae50cceceab05b0bf8d211.scope - libcontainer container 380bb722f65d8e9effd16ffbe1283b146f8da21c3eae50cceceab05b0bf8d211. Jan 24 03:01:02.613677 systemd[1]: Started cri-containerd-f6f123e8c617b8d7c99b349f97857eea835ec4e1b63f898312cfecf5adba0909.scope - libcontainer container f6f123e8c617b8d7c99b349f97857eea835ec4e1b63f898312cfecf5adba0909. Jan 24 03:01:02.701736 containerd[1507]: time="2026-01-24T03:01:02.701454322Z" level=info msg="StartContainer for \"f6f123e8c617b8d7c99b349f97857eea835ec4e1b63f898312cfecf5adba0909\" returns successfully" Jan 24 03:01:02.721500 containerd[1507]: time="2026-01-24T03:01:02.721344656Z" level=info msg="StartContainer for \"380bb722f65d8e9effd16ffbe1283b146f8da21c3eae50cceceab05b0bf8d211\" returns successfully" Jan 24 03:01:02.896310 sshd[4073]: pam_faillock(sshd:auth): User unknown Jan 24 03:01:02.901926 sshd[3913]: Postponed keyboard-interactive for invalid user onlime_r from 176.120.22.13 port 41584 ssh2 [preauth] Jan 24 03:01:03.100367 kubelet[2699]: I0124 03:01:03.099949 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zvsgh" podStartSLOduration=30.099864936 podStartE2EDuration="30.099864936s" podCreationTimestamp="2026-01-24 03:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-24 03:01:03.099122421 +0000 UTC m=+34.736269253" watchObservedRunningTime="2026-01-24 03:01:03.099864936 +0000 UTC m=+34.737011771" Jan 24 03:01:03.150509 kubelet[2699]: I0124 03:01:03.149885 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s9ttx" podStartSLOduration=30.149861214 podStartE2EDuration="30.149861214s" podCreationTimestamp="2026-01-24 03:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:01:03.12420113 +0000 UTC m=+34.761347964" watchObservedRunningTime="2026-01-24 03:01:03.149861214 +0000 UTC m=+34.787008045" Jan 24 03:01:03.185848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124393874.mount: Deactivated successfully. Jan 24 03:01:03.235861 sshd[4073]: pam_unix(sshd:auth): check pass; user unknown Jan 24 03:01:03.235918 sshd[4073]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=176.120.22.13 Jan 24 03:01:03.238076 sshd[4073]: pam_faillock(sshd:auth): User unknown Jan 24 03:01:06.044931 sshd[3913]: PAM: Permission denied for illegal user onlime_r from 176.120.22.13 Jan 24 03:01:06.044931 sshd[3913]: Failed keyboard-interactive/pam for invalid user onlime_r from 176.120.22.13 port 41584 ssh2 Jan 24 03:01:06.354686 sshd[3913]: Connection reset by invalid user onlime_r 176.120.22.13 port 41584 [preauth] Jan 24 03:01:06.358234 systemd[1]: sshd@14-10.243.72.22:22-176.120.22.13:41584.service: Deactivated successfully. Jan 24 03:01:06.495270 systemd[1]: Started sshd@15-10.243.72.22:22-176.120.22.13:53582.service - OpenSSH per-connection server daemon (176.120.22.13:53582). 
Jan 24 03:01:08.154928 sshd[4088]: Invalid user admin from 176.120.22.13 port 53582 Jan 24 03:01:08.592436 sshd[4093]: pam_faillock(sshd:auth): User unknown Jan 24 03:01:08.596414 sshd[4088]: Postponed keyboard-interactive for invalid user admin from 176.120.22.13 port 53582 ssh2 [preauth] Jan 24 03:01:09.009909 sshd[4093]: pam_unix(sshd:auth): check pass; user unknown Jan 24 03:01:09.009956 sshd[4093]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=176.120.22.13 Jan 24 03:01:09.010972 sshd[4093]: pam_faillock(sshd:auth): User unknown Jan 24 03:01:10.974494 sshd[4088]: PAM: Permission denied for illegal user admin from 176.120.22.13 Jan 24 03:01:10.975157 sshd[4088]: Failed keyboard-interactive/pam for invalid user admin from 176.120.22.13 port 53582 ssh2 Jan 24 03:01:11.322103 sshd[4088]: Connection reset by invalid user admin 176.120.22.13 port 53582 [preauth] Jan 24 03:01:11.323824 systemd[1]: sshd@15-10.243.72.22:22-176.120.22.13:53582.service: Deactivated successfully. Jan 24 03:01:11.439790 systemd[1]: Started sshd@16-10.243.72.22:22-176.120.22.13:53610.service - OpenSSH per-connection server daemon (176.120.22.13:53610). 
Jan 24 03:01:13.104910 sshd[4097]: Invalid user admin from 176.120.22.13 port 53610 Jan 24 03:01:13.505038 sshd[4099]: pam_faillock(sshd:auth): User unknown Jan 24 03:01:13.509655 sshd[4097]: Postponed keyboard-interactive for invalid user admin from 176.120.22.13 port 53610 ssh2 [preauth] Jan 24 03:01:13.859518 sshd[4099]: pam_unix(sshd:auth): check pass; user unknown Jan 24 03:01:13.859564 sshd[4099]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=176.120.22.13 Jan 24 03:01:13.861078 sshd[4099]: pam_faillock(sshd:auth): User unknown Jan 24 03:01:16.508912 sshd[4097]: PAM: Permission denied for illegal user admin from 176.120.22.13 Jan 24 03:01:16.509814 sshd[4097]: Failed keyboard-interactive/pam for invalid user admin from 176.120.22.13 port 53610 ssh2 Jan 24 03:01:16.852654 sshd[4097]: Connection reset by invalid user admin 176.120.22.13 port 53610 [preauth] Jan 24 03:01:16.856106 systemd[1]: sshd@16-10.243.72.22:22-176.120.22.13:53610.service: Deactivated successfully. Jan 24 03:01:41.022034 systemd[1]: Started sshd@17-10.243.72.22:22-20.161.92.111:48926.service - OpenSSH per-connection server daemon (20.161.92.111:48926). Jan 24 03:01:41.628870 sshd[4110]: Accepted publickey for core from 20.161.92.111 port 48926 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:01:41.636785 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:01:41.650704 systemd-logind[1487]: New session 12 of user core. Jan 24 03:01:41.654279 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 03:01:42.219972 systemd[1]: Started sshd@18-10.243.72.22:22-159.223.6.232:42156.service - OpenSSH per-connection server daemon (159.223.6.232:42156). 
Jan 24 03:01:42.352190 sshd[4121]: Invalid user webmaster from 159.223.6.232 port 42156 Jan 24 03:01:42.371283 sshd[4121]: Connection closed by invalid user webmaster 159.223.6.232 port 42156 [preauth] Jan 24 03:01:42.374126 systemd[1]: sshd@18-10.243.72.22:22-159.223.6.232:42156.service: Deactivated successfully. Jan 24 03:01:42.642353 sshd[4110]: pam_unix(sshd:session): session closed for user core Jan 24 03:01:42.657298 systemd[1]: sshd@17-10.243.72.22:22-20.161.92.111:48926.service: Deactivated successfully. Jan 24 03:01:42.660929 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 03:01:42.663243 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Jan 24 03:01:42.664890 systemd-logind[1487]: Removed session 12. Jan 24 03:01:47.754852 systemd[1]: Started sshd@19-10.243.72.22:22-20.161.92.111:50326.service - OpenSSH per-connection server daemon (20.161.92.111:50326). Jan 24 03:01:48.087777 systemd[1]: Started sshd@20-10.243.72.22:22-101.47.140.255:36212.service - OpenSSH per-connection server daemon (101.47.140.255:36212). Jan 24 03:01:48.352376 sshd[4129]: Accepted publickey for core from 20.161.92.111 port 50326 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:01:48.355310 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:01:48.365990 systemd-logind[1487]: New session 13 of user core. Jan 24 03:01:48.374675 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 03:01:48.917049 sshd[4129]: pam_unix(sshd:session): session closed for user core Jan 24 03:01:48.922592 systemd[1]: sshd@19-10.243.72.22:22-20.161.92.111:50326.service: Deactivated successfully. Jan 24 03:01:48.926918 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 03:01:48.928321 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Jan 24 03:01:48.929749 systemd-logind[1487]: Removed session 13. 
Jan 24 03:01:54.026951 systemd[1]: Started sshd@21-10.243.72.22:22-20.161.92.111:47740.service - OpenSSH per-connection server daemon (20.161.92.111:47740). Jan 24 03:01:54.558868 sshd[4132]: Invalid user sonar from 101.47.140.255 port 36212 Jan 24 03:01:54.608183 sshd[4145]: Accepted publickey for core from 20.161.92.111 port 47740 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:01:54.610631 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:01:54.618214 systemd-logind[1487]: New session 14 of user core. Jan 24 03:01:54.623597 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 03:01:54.743054 sshd[4132]: Received disconnect from 101.47.140.255 port 36212:11: Bye Bye [preauth] Jan 24 03:01:54.743298 sshd[4132]: Disconnected from invalid user sonar 101.47.140.255 port 36212 [preauth] Jan 24 03:01:54.747296 systemd[1]: sshd@20-10.243.72.22:22-101.47.140.255:36212.service: Deactivated successfully. Jan 24 03:01:55.113190 sshd[4145]: pam_unix(sshd:session): session closed for user core Jan 24 03:01:55.119018 systemd[1]: sshd@21-10.243.72.22:22-20.161.92.111:47740.service: Deactivated successfully. Jan 24 03:01:55.123348 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 03:01:55.126076 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit. Jan 24 03:01:55.128364 systemd-logind[1487]: Removed session 14. Jan 24 03:02:00.230927 systemd[1]: Started sshd@22-10.243.72.22:22-20.161.92.111:47742.service - OpenSSH per-connection server daemon (20.161.92.111:47742). Jan 24 03:02:00.812460 sshd[4161]: Accepted publickey for core from 20.161.92.111 port 47742 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:00.815021 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:00.828376 systemd-logind[1487]: New session 15 of user core. 
Jan 24 03:02:00.840810 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 03:02:01.324714 sshd[4161]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:01.329490 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit. Jan 24 03:02:01.331356 systemd[1]: sshd@22-10.243.72.22:22-20.161.92.111:47742.service: Deactivated successfully. Jan 24 03:02:01.333934 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 03:02:01.336502 systemd-logind[1487]: Removed session 15. Jan 24 03:02:06.435804 systemd[1]: Started sshd@23-10.243.72.22:22-20.161.92.111:35752.service - OpenSSH per-connection server daemon (20.161.92.111:35752). Jan 24 03:02:07.013332 sshd[4176]: Accepted publickey for core from 20.161.92.111 port 35752 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:07.015965 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:07.022955 systemd-logind[1487]: New session 16 of user core. Jan 24 03:02:07.031662 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 03:02:07.516882 sshd[4176]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:07.522020 systemd[1]: sshd@23-10.243.72.22:22-20.161.92.111:35752.service: Deactivated successfully. Jan 24 03:02:07.525757 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 03:02:07.527094 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit. Jan 24 03:02:07.528683 systemd-logind[1487]: Removed session 16. Jan 24 03:02:07.627884 systemd[1]: Started sshd@24-10.243.72.22:22-20.161.92.111:35758.service - OpenSSH per-connection server daemon (20.161.92.111:35758). 
Jan 24 03:02:08.198269 sshd[4189]: Accepted publickey for core from 20.161.92.111 port 35758 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:08.200557 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:08.209552 systemd-logind[1487]: New session 17 of user core. Jan 24 03:02:08.220011 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 03:02:08.769548 sshd[4189]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:08.774602 systemd[1]: sshd@24-10.243.72.22:22-20.161.92.111:35758.service: Deactivated successfully. Jan 24 03:02:08.777103 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 03:02:08.778745 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit. Jan 24 03:02:08.780598 systemd-logind[1487]: Removed session 17. Jan 24 03:02:08.873735 systemd[1]: Started sshd@25-10.243.72.22:22-20.161.92.111:35774.service - OpenSSH per-connection server daemon (20.161.92.111:35774). Jan 24 03:02:09.444031 sshd[4200]: Accepted publickey for core from 20.161.92.111 port 35774 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:09.446200 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:09.452722 systemd-logind[1487]: New session 18 of user core. Jan 24 03:02:09.457621 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 03:02:09.943814 sshd[4200]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:09.950277 systemd[1]: sshd@25-10.243.72.22:22-20.161.92.111:35774.service: Deactivated successfully. Jan 24 03:02:09.955229 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 03:02:09.957949 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit. Jan 24 03:02:09.960268 systemd-logind[1487]: Removed session 18. 
Jan 24 03:02:15.056928 systemd[1]: Started sshd@26-10.243.72.22:22-20.161.92.111:44394.service - OpenSSH per-connection server daemon (20.161.92.111:44394). Jan 24 03:02:15.633056 sshd[4212]: Accepted publickey for core from 20.161.92.111 port 44394 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:15.635485 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:15.644685 systemd-logind[1487]: New session 19 of user core. Jan 24 03:02:15.654684 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 03:02:16.146746 sshd[4212]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:16.152312 systemd[1]: sshd@26-10.243.72.22:22-20.161.92.111:44394.service: Deactivated successfully. Jan 24 03:02:16.155823 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 03:02:16.156931 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit. Jan 24 03:02:16.159362 systemd-logind[1487]: Removed session 19. Jan 24 03:02:21.258175 systemd[1]: Started sshd@27-10.243.72.22:22-20.161.92.111:44406.service - OpenSSH per-connection server daemon (20.161.92.111:44406). Jan 24 03:02:21.828339 sshd[4226]: Accepted publickey for core from 20.161.92.111 port 44406 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:21.829253 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:21.836163 systemd-logind[1487]: New session 20 of user core. Jan 24 03:02:21.841685 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 03:02:22.340937 sshd[4226]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:22.348545 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit. Jan 24 03:02:22.350002 systemd[1]: sshd@27-10.243.72.22:22-20.161.92.111:44406.service: Deactivated successfully. Jan 24 03:02:22.353821 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 24 03:02:22.356386 systemd-logind[1487]: Removed session 20. Jan 24 03:02:22.443379 systemd[1]: Started sshd@28-10.243.72.22:22-20.161.92.111:53206.service - OpenSSH per-connection server daemon (20.161.92.111:53206). Jan 24 03:02:23.021313 sshd[4239]: Accepted publickey for core from 20.161.92.111 port 53206 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:23.023691 sshd[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:23.030592 systemd-logind[1487]: New session 21 of user core. Jan 24 03:02:23.041623 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 03:02:23.788742 sshd[4239]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:23.794535 systemd[1]: sshd@28-10.243.72.22:22-20.161.92.111:53206.service: Deactivated successfully. Jan 24 03:02:23.798024 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 03:02:23.799956 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit. Jan 24 03:02:23.801477 systemd-logind[1487]: Removed session 21. Jan 24 03:02:23.891927 systemd[1]: Started sshd@29-10.243.72.22:22-20.161.92.111:53220.service - OpenSSH per-connection server daemon (20.161.92.111:53220). Jan 24 03:02:24.466895 sshd[4250]: Accepted publickey for core from 20.161.92.111 port 53220 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:24.469262 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:24.478782 systemd-logind[1487]: New session 22 of user core. Jan 24 03:02:24.484612 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 03:02:25.631301 systemd[1]: Started sshd@30-10.243.72.22:22-159.223.6.232:44664.service - OpenSSH per-connection server daemon (159.223.6.232:44664). 
Jan 24 03:02:25.763833 sshd[4262]: Invalid user webmaster from 159.223.6.232 port 44664 Jan 24 03:02:25.781117 sshd[4262]: Connection closed by invalid user webmaster 159.223.6.232 port 44664 [preauth] Jan 24 03:02:25.783168 systemd[1]: sshd@30-10.243.72.22:22-159.223.6.232:44664.service: Deactivated successfully. Jan 24 03:02:25.857304 sshd[4250]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:25.862694 systemd[1]: sshd@29-10.243.72.22:22-20.161.92.111:53220.service: Deactivated successfully. Jan 24 03:02:25.865502 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 03:02:25.869450 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit. Jan 24 03:02:25.871905 systemd-logind[1487]: Removed session 22. Jan 24 03:02:25.960567 systemd[1]: Started sshd@31-10.243.72.22:22-20.161.92.111:53230.service - OpenSSH per-connection server daemon (20.161.92.111:53230). Jan 24 03:02:26.536642 sshd[4273]: Accepted publickey for core from 20.161.92.111 port 53230 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:26.538885 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:26.549749 systemd-logind[1487]: New session 23 of user core. Jan 24 03:02:26.557712 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 03:02:27.250466 sshd[4273]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:27.258920 systemd[1]: sshd@31-10.243.72.22:22-20.161.92.111:53230.service: Deactivated successfully. Jan 24 03:02:27.263347 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 03:02:27.266495 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit. Jan 24 03:02:27.270645 systemd-logind[1487]: Removed session 23. Jan 24 03:02:27.354854 systemd[1]: Started sshd@32-10.243.72.22:22-20.161.92.111:53246.service - OpenSSH per-connection server daemon (20.161.92.111:53246). 
Jan 24 03:02:27.930250 sshd[4284]: Accepted publickey for core from 20.161.92.111 port 53246 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:27.932563 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:27.940160 systemd-logind[1487]: New session 24 of user core. Jan 24 03:02:27.951701 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 03:02:28.431070 sshd[4284]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:28.437552 systemd[1]: sshd@32-10.243.72.22:22-20.161.92.111:53246.service: Deactivated successfully. Jan 24 03:02:28.440969 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 03:02:28.442988 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit. Jan 24 03:02:28.444669 systemd-logind[1487]: Removed session 24. Jan 24 03:02:33.538885 systemd[1]: Started sshd@33-10.243.72.22:22-20.161.92.111:56942.service - OpenSSH per-connection server daemon (20.161.92.111:56942). Jan 24 03:02:34.128962 sshd[4299]: Accepted publickey for core from 20.161.92.111 port 56942 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:34.131473 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:34.138163 systemd-logind[1487]: New session 25 of user core. Jan 24 03:02:34.146674 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 03:02:34.623185 sshd[4299]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:34.627593 systemd[1]: sshd@33-10.243.72.22:22-20.161.92.111:56942.service: Deactivated successfully. Jan 24 03:02:34.630198 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 03:02:34.632472 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit. Jan 24 03:02:34.635056 systemd-logind[1487]: Removed session 25. 
Jan 24 03:02:39.735912 systemd[1]: Started sshd@34-10.243.72.22:22-20.161.92.111:56956.service - OpenSSH per-connection server daemon (20.161.92.111:56956). Jan 24 03:02:40.309265 sshd[4317]: Accepted publickey for core from 20.161.92.111 port 56956 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:40.312028 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:40.321844 systemd-logind[1487]: New session 26 of user core. Jan 24 03:02:40.330816 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 24 03:02:40.797933 sshd[4317]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:40.804291 systemd[1]: sshd@34-10.243.72.22:22-20.161.92.111:56956.service: Deactivated successfully. Jan 24 03:02:40.807489 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 03:02:40.808806 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit. Jan 24 03:02:40.811424 systemd-logind[1487]: Removed session 26. Jan 24 03:02:45.910796 systemd[1]: Started sshd@35-10.243.72.22:22-20.161.92.111:36728.service - OpenSSH per-connection server daemon (20.161.92.111:36728). Jan 24 03:02:46.487840 sshd[4329]: Accepted publickey for core from 20.161.92.111 port 36728 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:46.490580 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:46.500280 systemd-logind[1487]: New session 27 of user core. Jan 24 03:02:46.506672 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 24 03:02:47.009028 sshd[4329]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:47.015571 systemd[1]: sshd@35-10.243.72.22:22-20.161.92.111:36728.service: Deactivated successfully. Jan 24 03:02:47.019044 systemd[1]: session-27.scope: Deactivated successfully. Jan 24 03:02:47.020088 systemd-logind[1487]: Session 27 logged out. Waiting for processes to exit. 
Jan 24 03:02:47.021627 systemd-logind[1487]: Removed session 27. Jan 24 03:02:52.113736 systemd[1]: Started sshd@36-10.243.72.22:22-20.161.92.111:36740.service - OpenSSH per-connection server daemon (20.161.92.111:36740). Jan 24 03:02:52.686942 sshd[4342]: Accepted publickey for core from 20.161.92.111 port 36740 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:52.689938 sshd[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:52.700471 systemd-logind[1487]: New session 28 of user core. Jan 24 03:02:52.710661 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 24 03:02:53.177931 sshd[4342]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:53.183504 systemd[1]: sshd@36-10.243.72.22:22-20.161.92.111:36740.service: Deactivated successfully. Jan 24 03:02:53.186544 systemd[1]: session-28.scope: Deactivated successfully. Jan 24 03:02:53.187691 systemd-logind[1487]: Session 28 logged out. Waiting for processes to exit. Jan 24 03:02:53.189623 systemd-logind[1487]: Removed session 28. Jan 24 03:02:53.287039 systemd[1]: Started sshd@37-10.243.72.22:22-20.161.92.111:59606.service - OpenSSH per-connection server daemon (20.161.92.111:59606). Jan 24 03:02:53.850305 sshd[4355]: Accepted publickey for core from 20.161.92.111 port 59606 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:53.852842 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:53.859797 systemd-logind[1487]: New session 29 of user core. Jan 24 03:02:53.865575 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 24 03:02:55.948060 systemd[1]: run-containerd-runc-k8s.io-46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11-runc.l6tNzt.mount: Deactivated successfully. 
Jan 24 03:02:55.999988 containerd[1507]: time="2026-01-24T03:02:55.999788607Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 03:02:56.031275 containerd[1507]: time="2026-01-24T03:02:56.031212138Z" level=info msg="StopContainer for \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\" with timeout 2 (s)" Jan 24 03:02:56.031863 containerd[1507]: time="2026-01-24T03:02:56.031381906Z" level=info msg="StopContainer for \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\" with timeout 30 (s)" Jan 24 03:02:56.032215 containerd[1507]: time="2026-01-24T03:02:56.032183630Z" level=info msg="Stop container \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\" with signal terminated" Jan 24 03:02:56.033479 containerd[1507]: time="2026-01-24T03:02:56.033314519Z" level=info msg="Stop container \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\" with signal terminated" Jan 24 03:02:56.053798 systemd-networkd[1432]: lxc_health: Link DOWN Jan 24 03:02:56.055691 systemd-networkd[1432]: lxc_health: Lost carrier Jan 24 03:02:56.071229 systemd[1]: cri-containerd-b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4.scope: Deactivated successfully. Jan 24 03:02:56.084987 systemd[1]: cri-containerd-46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11.scope: Deactivated successfully. Jan 24 03:02:56.086135 systemd[1]: cri-containerd-46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11.scope: Consumed 10.510s CPU time. Jan 24 03:02:56.140047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4-rootfs.mount: Deactivated successfully. 
Jan 24 03:02:56.148242 containerd[1507]: time="2026-01-24T03:02:56.147768636Z" level=info msg="shim disconnected" id=46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11 namespace=k8s.io
Jan 24 03:02:56.148242 containerd[1507]: time="2026-01-24T03:02:56.147938191Z" level=warning msg="cleaning up after shim disconnected" id=46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11 namespace=k8s.io
Jan 24 03:02:56.148242 containerd[1507]: time="2026-01-24T03:02:56.147971657Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 03:02:56.148173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11-rootfs.mount: Deactivated successfully.
Jan 24 03:02:56.152791 containerd[1507]: time="2026-01-24T03:02:56.152559246Z" level=info msg="shim disconnected" id=b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4 namespace=k8s.io
Jan 24 03:02:56.152791 containerd[1507]: time="2026-01-24T03:02:56.152618499Z" level=warning msg="cleaning up after shim disconnected" id=b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4 namespace=k8s.io
Jan 24 03:02:56.152791 containerd[1507]: time="2026-01-24T03:02:56.152636688Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 03:02:56.190137 containerd[1507]: time="2026-01-24T03:02:56.189807575Z" level=info msg="StopContainer for \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\" returns successfully"
Jan 24 03:02:56.196760 containerd[1507]: time="2026-01-24T03:02:56.196625547Z" level=info msg="StopContainer for \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\" returns successfully"
Jan 24 03:02:56.197917 containerd[1507]: time="2026-01-24T03:02:56.197668497Z" level=info msg="StopPodSandbox for \"ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad\""
Jan 24 03:02:56.197917 containerd[1507]: time="2026-01-24T03:02:56.197735269Z" level=info msg="Container to stop \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 03:02:56.198539 containerd[1507]: time="2026-01-24T03:02:56.198508559Z" level=info msg="StopPodSandbox for \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\""
Jan 24 03:02:56.200548 containerd[1507]: time="2026-01-24T03:02:56.200446151Z" level=info msg="Container to stop \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 03:02:56.200481 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad-shm.mount: Deactivated successfully.
Jan 24 03:02:56.201887 containerd[1507]: time="2026-01-24T03:02:56.200819544Z" level=info msg="Container to stop \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 03:02:56.201887 containerd[1507]: time="2026-01-24T03:02:56.201533386Z" level=info msg="Container to stop \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 03:02:56.201887 containerd[1507]: time="2026-01-24T03:02:56.201566090Z" level=info msg="Container to stop \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 03:02:56.201887 containerd[1507]: time="2026-01-24T03:02:56.201597668Z" level=info msg="Container to stop \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 03:02:56.214558 systemd[1]: cri-containerd-3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f.scope: Deactivated successfully.
Jan 24 03:02:56.218128 systemd[1]: cri-containerd-ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad.scope: Deactivated successfully.
Jan 24 03:02:56.263418 containerd[1507]: time="2026-01-24T03:02:56.263278162Z" level=info msg="shim disconnected" id=ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad namespace=k8s.io
Jan 24 03:02:56.263418 containerd[1507]: time="2026-01-24T03:02:56.263353037Z" level=warning msg="cleaning up after shim disconnected" id=ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad namespace=k8s.io
Jan 24 03:02:56.263418 containerd[1507]: time="2026-01-24T03:02:56.263368446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 03:02:56.271073 containerd[1507]: time="2026-01-24T03:02:56.270780811Z" level=info msg="shim disconnected" id=3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f namespace=k8s.io
Jan 24 03:02:56.271366 containerd[1507]: time="2026-01-24T03:02:56.271070481Z" level=warning msg="cleaning up after shim disconnected" id=3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f namespace=k8s.io
Jan 24 03:02:56.271366 containerd[1507]: time="2026-01-24T03:02:56.271091131Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 03:02:56.305545 containerd[1507]: time="2026-01-24T03:02:56.304713375Z" level=info msg="TearDown network for sandbox \"ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad\" successfully"
Jan 24 03:02:56.305545 containerd[1507]: time="2026-01-24T03:02:56.304791238Z" level=info msg="StopPodSandbox for \"ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad\" returns successfully"
Jan 24 03:02:56.305927 containerd[1507]: time="2026-01-24T03:02:56.305897928Z" level=info msg="TearDown network for sandbox \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" successfully"
Jan 24 03:02:56.306194 containerd[1507]: time="2026-01-24T03:02:56.306167687Z" level=info msg="StopPodSandbox for \"3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f\" returns successfully"
Jan 24 03:02:56.421998 kubelet[2699]: I0124 03:02:56.421580 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-cgroup\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.421998 kubelet[2699]: I0124 03:02:56.421704 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-bpf-maps\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.421998 kubelet[2699]: I0124 03:02:56.421801 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncs9v\" (UniqueName: \"kubernetes.io/projected/0a2fce04-ab43-4673-b17c-904c779364c0-kube-api-access-ncs9v\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.421998 kubelet[2699]: I0124 03:02:56.421888 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a2fce04-ab43-4673-b17c-904c779364c0-clustermesh-secrets\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.421998 kubelet[2699]: I0124 03:02:56.421949 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a2fce04-ab43-4673-b17c-904c779364c0-hubble-tls\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.423704 kubelet[2699]: I0124 03:02:56.422027 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-config-path\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.423704 kubelet[2699]: I0124 03:02:56.422071 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-lib-modules\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.423704 kubelet[2699]: I0124 03:02:56.422110 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-xtables-lock\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.423704 kubelet[2699]: I0124 03:02:56.422134 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-run\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.423704 kubelet[2699]: I0124 03:02:56.422178 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrn6z\" (UniqueName: \"kubernetes.io/projected/6bfd88f7-0781-4754-a785-60cd3dfc5296-kube-api-access-mrn6z\") pod \"6bfd88f7-0781-4754-a785-60cd3dfc5296\" (UID: \"6bfd88f7-0781-4754-a785-60cd3dfc5296\") "
Jan 24 03:02:56.423704 kubelet[2699]: I0124 03:02:56.422208 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-hostproc\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.424574 kubelet[2699]: I0124 03:02:56.422240 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-host-proc-sys-net\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.424574 kubelet[2699]: I0124 03:02:56.422283 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cni-path\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.424574 kubelet[2699]: I0124 03:02:56.422321 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-host-proc-sys-kernel\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.424574 kubelet[2699]: I0124 03:02:56.422368 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-etc-cni-netd\") pod \"0a2fce04-ab43-4673-b17c-904c779364c0\" (UID: \"0a2fce04-ab43-4673-b17c-904c779364c0\") "
Jan 24 03:02:56.424574 kubelet[2699]: I0124 03:02:56.422413 2699 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bfd88f7-0781-4754-a785-60cd3dfc5296-cilium-config-path\") pod \"6bfd88f7-0781-4754-a785-60cd3dfc5296\" (UID: \"6bfd88f7-0781-4754-a785-60cd3dfc5296\") "
Jan 24 03:02:56.429186 kubelet[2699]: I0124 03:02:56.427970 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfd88f7-0781-4754-a785-60cd3dfc5296-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6bfd88f7-0781-4754-a785-60cd3dfc5296" (UID: "6bfd88f7-0781-4754-a785-60cd3dfc5296"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 24 03:02:56.429186 kubelet[2699]: I0124 03:02:56.427471 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.429186 kubelet[2699]: I0124 03:02:56.428618 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.429186 kubelet[2699]: I0124 03:02:56.428652 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.429186 kubelet[2699]: I0124 03:02:56.428778 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.438038 kubelet[2699]: I0124 03:02:56.437994 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2fce04-ab43-4673-b17c-904c779364c0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 24 03:02:56.443230 kubelet[2699]: I0124 03:02:56.443140 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bfd88f7-0781-4754-a785-60cd3dfc5296-kube-api-access-mrn6z" (OuterVolumeSpecName: "kube-api-access-mrn6z") pod "6bfd88f7-0781-4754-a785-60cd3dfc5296" (UID: "6bfd88f7-0781-4754-a785-60cd3dfc5296"). InnerVolumeSpecName "kube-api-access-mrn6z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 24 03:02:56.443377 kubelet[2699]: I0124 03:02:56.443230 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-hostproc" (OuterVolumeSpecName: "hostproc") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.443377 kubelet[2699]: I0124 03:02:56.443300 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.443377 kubelet[2699]: I0124 03:02:56.443333 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cni-path" (OuterVolumeSpecName: "cni-path") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.443377 kubelet[2699]: I0124 03:02:56.443362 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.443624 kubelet[2699]: I0124 03:02:56.443410 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.443624 kubelet[2699]: I0124 03:02:56.443437 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2fce04-ab43-4673-b17c-904c779364c0-kube-api-access-ncs9v" (OuterVolumeSpecName: "kube-api-access-ncs9v") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "kube-api-access-ncs9v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 24 03:02:56.443624 kubelet[2699]: I0124 03:02:56.443457 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2fce04-ab43-4673-b17c-904c779364c0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 24 03:02:56.446434 kubelet[2699]: I0124 03:02:56.445083 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 24 03:02:56.448214 kubelet[2699]: I0124 03:02:56.448038 2699 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0a2fce04-ab43-4673-b17c-904c779364c0" (UID: "0a2fce04-ab43-4673-b17c-904c779364c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 24 03:02:56.457146 systemd[1]: Removed slice kubepods-besteffort-pod6bfd88f7_0781_4754_a785_60cd3dfc5296.slice - libcontainer container kubepods-besteffort-pod6bfd88f7_0781_4754_a785_60cd3dfc5296.slice.
Jan 24 03:02:56.463976 kubelet[2699]: I0124 03:02:56.463902 2699 scope.go:117] "RemoveContainer" containerID="b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4"
Jan 24 03:02:56.470812 containerd[1507]: time="2026-01-24T03:02:56.470756130Z" level=info msg="RemoveContainer for \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\""
Jan 24 03:02:56.484268 containerd[1507]: time="2026-01-24T03:02:56.482698822Z" level=info msg="RemoveContainer for \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\" returns successfully"
Jan 24 03:02:56.486629 kubelet[2699]: I0124 03:02:56.485807 2699 scope.go:117] "RemoveContainer" containerID="b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4"
Jan 24 03:02:56.494195 systemd[1]: Removed slice kubepods-burstable-pod0a2fce04_ab43_4673_b17c_904c779364c0.slice - libcontainer container kubepods-burstable-pod0a2fce04_ab43_4673_b17c_904c779364c0.slice.
Jan 24 03:02:56.494347 systemd[1]: kubepods-burstable-pod0a2fce04_ab43_4673_b17c_904c779364c0.slice: Consumed 10.660s CPU time.
Jan 24 03:02:56.514429 containerd[1507]: time="2026-01-24T03:02:56.496693288Z" level=error msg="ContainerStatus for \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\": not found"
Jan 24 03:02:56.515153 kubelet[2699]: E0124 03:02:56.514895 2699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\": not found" containerID="b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4"
Jan 24 03:02:56.516135 kubelet[2699]: I0124 03:02:56.515966 2699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4"} err="failed to get container status \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0be7354969395c01097f160b2b404d256f8a654d39adfd165f6cf08586859b4\": not found"
Jan 24 03:02:56.516251 kubelet[2699]: I0124 03:02:56.516220 2699 scope.go:117] "RemoveContainer" containerID="46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11"
Jan 24 03:02:56.520444 containerd[1507]: time="2026-01-24T03:02:56.520346014Z" level=info msg="RemoveContainer for \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\""
Jan 24 03:02:56.522973 kubelet[2699]: I0124 03:02:56.522670 2699 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-host-proc-sys-kernel\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.522973 kubelet[2699]: I0124 03:02:56.522720 2699 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bfd88f7-0781-4754-a785-60cd3dfc5296-cilium-config-path\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.522973 kubelet[2699]: I0124 03:02:56.522739 2699 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-etc-cni-netd\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.522973 kubelet[2699]: I0124 03:02:56.522756 2699 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-bpf-maps\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.522973 kubelet[2699]: I0124 03:02:56.522771 2699 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-cgroup\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.522973 kubelet[2699]: I0124 03:02:56.522785 2699 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ncs9v\" (UniqueName: \"kubernetes.io/projected/0a2fce04-ab43-4673-b17c-904c779364c0-kube-api-access-ncs9v\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.522973 kubelet[2699]: I0124 03:02:56.522800 2699 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a2fce04-ab43-4673-b17c-904c779364c0-clustermesh-secrets\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.522973 kubelet[2699]: I0124 03:02:56.522814 2699 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a2fce04-ab43-4673-b17c-904c779364c0-hubble-tls\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.524152 kubelet[2699]: I0124 03:02:56.522830 2699 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-config-path\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.524152 kubelet[2699]: I0124 03:02:56.522844 2699 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-xtables-lock\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.524152 kubelet[2699]: I0124 03:02:56.522858 2699 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cilium-run\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.524152 kubelet[2699]: I0124 03:02:56.522871 2699 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-lib-modules\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.524152 kubelet[2699]: I0124 03:02:56.522884 2699 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-hostproc\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.524152 kubelet[2699]: I0124 03:02:56.523902 2699 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-host-proc-sys-net\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.524982 kubelet[2699]: I0124 03:02:56.524784 2699 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mrn6z\" (UniqueName: \"kubernetes.io/projected/6bfd88f7-0781-4754-a785-60cd3dfc5296-kube-api-access-mrn6z\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.524982 kubelet[2699]: I0124 03:02:56.524812 2699 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a2fce04-ab43-4673-b17c-904c779364c0-cni-path\") on node \"srv-fpdmo.gb1.brightbox.com\" DevicePath \"\""
Jan 24 03:02:56.527775 containerd[1507]: time="2026-01-24T03:02:56.527513389Z" level=info msg="RemoveContainer for \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\" returns successfully"
Jan 24 03:02:56.528309 kubelet[2699]: I0124 03:02:56.528204 2699 scope.go:117] "RemoveContainer" containerID="1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641"
Jan 24 03:02:56.532136 containerd[1507]: time="2026-01-24T03:02:56.531273854Z" level=info msg="RemoveContainer for \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\""
Jan 24 03:02:56.539852 containerd[1507]: time="2026-01-24T03:02:56.539697823Z" level=info msg="RemoveContainer for \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\" returns successfully"
Jan 24 03:02:56.540071 kubelet[2699]: I0124 03:02:56.539990 2699 scope.go:117] "RemoveContainer" containerID="b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521"
Jan 24 03:02:56.541740 containerd[1507]: time="2026-01-24T03:02:56.541674685Z" level=info msg="RemoveContainer for \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\""
Jan 24 03:02:56.554284 containerd[1507]: time="2026-01-24T03:02:56.554187717Z" level=info msg="RemoveContainer for \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\" returns successfully"
Jan 24 03:02:56.554698 kubelet[2699]: I0124 03:02:56.554649 2699 scope.go:117] "RemoveContainer" containerID="81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4"
Jan 24 03:02:56.556797 containerd[1507]: time="2026-01-24T03:02:56.556730632Z" level=info msg="RemoveContainer for \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\""
Jan 24 03:02:56.563901 containerd[1507]: time="2026-01-24T03:02:56.563720160Z" level=info msg="RemoveContainer for \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\" returns successfully"
Jan 24 03:02:56.564069 kubelet[2699]: I0124 03:02:56.563998 2699 scope.go:117] "RemoveContainer" containerID="a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc"
Jan 24 03:02:56.565929 containerd[1507]: time="2026-01-24T03:02:56.565533632Z" level=info msg="RemoveContainer for \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\""
Jan 24 03:02:56.569417 containerd[1507]: time="2026-01-24T03:02:56.569364642Z" level=info msg="RemoveContainer for \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\" returns successfully"
Jan 24 03:02:56.569862 kubelet[2699]: I0124 03:02:56.569758 2699 scope.go:117] "RemoveContainer" containerID="46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11"
Jan 24 03:02:56.570228 containerd[1507]: time="2026-01-24T03:02:56.570098394Z" level=error msg="ContainerStatus for \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\": not found"
Jan 24 03:02:56.570415 kubelet[2699]: E0124 03:02:56.570290 2699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\": not found" containerID="46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11"
Jan 24 03:02:56.570415 kubelet[2699]: I0124 03:02:56.570337 2699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11"} err="failed to get container status \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\": rpc error: code = NotFound desc = an error occurred when try to find container \"46c44b135ccb1586f699e6c5b98231aa5b85df1545668a61c447dd2b99ac6e11\": not found"
Jan 24 03:02:56.570415 kubelet[2699]: I0124 03:02:56.570368 2699 scope.go:117] "RemoveContainer" containerID="1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641"
Jan 24 03:02:56.571427 containerd[1507]: time="2026-01-24T03:02:56.570848962Z" level=error msg="ContainerStatus for \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\": not found"
Jan 24 03:02:56.571427 containerd[1507]: time="2026-01-24T03:02:56.571321004Z" level=error msg="ContainerStatus for \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\": not found"
Jan 24 03:02:56.571545 kubelet[2699]: E0124 03:02:56.571043 2699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\": not found" containerID="1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641"
Jan 24 03:02:56.571545 kubelet[2699]: I0124 03:02:56.571077 2699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641"} err="failed to get container status \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bb303194cc65361c11d33d2ad256ccb698717b9b9f7a14d045f0f149f651641\": not found"
Jan 24 03:02:56.571545 kubelet[2699]: I0124 03:02:56.571102 2699 scope.go:117] "RemoveContainer" containerID="b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521"
Jan 24 03:02:56.571545 kubelet[2699]: E0124 03:02:56.571487 2699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\": not found" containerID="b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521"
Jan 24 03:02:56.571545 kubelet[2699]: I0124 03:02:56.571514 2699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521"} err="failed to get container status \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\": rpc error: code = NotFound desc = an error occurred when try to find container \"b28d20ca336b7138d96c2c2bec671800fa1e20cabdbad40414d833e714e2f521\": not found"
Jan 24 03:02:56.571545 kubelet[2699]: I0124 03:02:56.571536 2699 scope.go:117] "RemoveContainer" containerID="81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4"
Jan 24 03:02:56.572187 containerd[1507]: time="2026-01-24T03:02:56.572048893Z" level=error msg="ContainerStatus for \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\": not found"
Jan 24 03:02:56.572317 kubelet[2699]: E0124 03:02:56.572276 2699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\": not found" containerID="81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4"
Jan 24 03:02:56.572457 kubelet[2699]: I0124 03:02:56.572324 2699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4"} err="failed to get container status \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"81e2c52348215136a762fb9f8c30ff8aa150e4d67506f2118a07e0c8a54b7fc4\": not found"
Jan 24 03:02:56.572457 kubelet[2699]: I0124 03:02:56.572367 2699 scope.go:117] "RemoveContainer" containerID="a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc"
Jan 24 03:02:56.572670 containerd[1507]: time="2026-01-24T03:02:56.572587639Z" level=error msg="ContainerStatus for \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\": not found"
Jan 24 03:02:56.572807 kubelet[2699]: E0124 03:02:56.572737 2699 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\": not found" containerID="a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc"
Jan 24 03:02:56.572807 kubelet[2699]: I0124 03:02:56.572764 2699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc"} err="failed to get container status \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4bf2d6e9d43966e85b9daa80758e474b013e8c08b9e0e1c9312e2fac62f10dc\": not found"
Jan 24 03:02:56.748162 kubelet[2699]: I0124 03:02:56.747964 2699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2fce04-ab43-4673-b17c-904c779364c0" path="/var/lib/kubelet/pods/0a2fce04-ab43-4673-b17c-904c779364c0/volumes"
Jan 24 03:02:56.751536 kubelet[2699]: I0124 03:02:56.751488 2699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bfd88f7-0781-4754-a785-60cd3dfc5296" path="/var/lib/kubelet/pods/6bfd88f7-0781-4754-a785-60cd3dfc5296/volumes"
Jan 24 03:02:56.936255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae958553f5a390f7a91c3c381b37d313a9824eadfcf110d37c8738872eb9acad-rootfs.mount: Deactivated successfully.
Jan 24 03:02:56.936439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f-rootfs.mount: Deactivated successfully.
Jan 24 03:02:56.936547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b4283352ccc820131a1d80364c3f41323db0298f50186005a16ca8773ecaf4f-shm.mount: Deactivated successfully.
Jan 24 03:02:56.936719 systemd[1]: var-lib-kubelet-pods-6bfd88f7\x2d0781\x2d4754\x2da785\x2d60cd3dfc5296-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmrn6z.mount: Deactivated successfully.
Jan 24 03:02:56.936840 systemd[1]: var-lib-kubelet-pods-0a2fce04\x2dab43\x2d4673\x2db17c\x2d904c779364c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncs9v.mount: Deactivated successfully.
Jan 24 03:02:56.936965 systemd[1]: var-lib-kubelet-pods-0a2fce04\x2dab43\x2d4673\x2db17c\x2d904c779364c0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 24 03:02:56.937141 systemd[1]: var-lib-kubelet-pods-0a2fce04\x2dab43\x2d4673\x2db17c\x2d904c779364c0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 24 03:02:57.792545 sshd[4355]: pam_unix(sshd:session): session closed for user core
Jan 24 03:02:57.800277 systemd[1]: sshd@37-10.243.72.22:22-20.161.92.111:59606.service: Deactivated successfully.
Jan 24 03:02:57.804000 systemd[1]: session-29.scope: Deactivated successfully.
Jan 24 03:02:57.807870 systemd-logind[1487]: Session 29 logged out. Waiting for processes to exit. Jan 24 03:02:57.809767 systemd-logind[1487]: Removed session 29. Jan 24 03:02:57.903338 systemd[1]: Started sshd@38-10.243.72.22:22-20.161.92.111:59612.service - OpenSSH per-connection server daemon (20.161.92.111:59612). Jan 24 03:02:58.503978 sshd[4516]: Accepted publickey for core from 20.161.92.111 port 59612 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:02:58.506326 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:02:58.515438 systemd-logind[1487]: New session 30 of user core. Jan 24 03:02:58.521750 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 24 03:02:58.984028 kubelet[2699]: E0124 03:02:58.983906 2699 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 24 03:02:59.740248 kubelet[2699]: I0124 03:02:59.738818 2699 memory_manager.go:355] "RemoveStaleState removing state" podUID="0a2fce04-ab43-4673-b17c-904c779364c0" containerName="cilium-agent" Jan 24 03:02:59.740248 kubelet[2699]: I0124 03:02:59.738927 2699 memory_manager.go:355] "RemoveStaleState removing state" podUID="6bfd88f7-0781-4754-a785-60cd3dfc5296" containerName="cilium-operator" Jan 24 03:02:59.781300 sshd[4516]: pam_unix(sshd:session): session closed for user core Jan 24 03:02:59.797297 systemd[1]: sshd@38-10.243.72.22:22-20.161.92.111:59612.service: Deactivated successfully. Jan 24 03:02:59.804173 systemd[1]: session-30.scope: Deactivated successfully. Jan 24 03:02:59.810023 systemd-logind[1487]: Session 30 logged out. Waiting for processes to exit. Jan 24 03:02:59.815597 systemd-logind[1487]: Removed session 30. 
Jan 24 03:02:59.834164 systemd[1]: Created slice kubepods-burstable-pod8ef5967f_63b9_4038_b3ad_afd352d29962.slice - libcontainer container kubepods-burstable-pod8ef5967f_63b9_4038_b3ad_afd352d29962.slice. Jan 24 03:02:59.854604 kubelet[2699]: I0124 03:02:59.854505 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-lib-modules\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.854604 kubelet[2699]: I0124 03:02:59.854575 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-cilium-run\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.854772 kubelet[2699]: I0124 03:02:59.854619 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgk2v\" (UniqueName: \"kubernetes.io/projected/8ef5967f-63b9-4038-b3ad-afd352d29962-kube-api-access-hgk2v\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.854772 kubelet[2699]: I0124 03:02:59.854659 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ef5967f-63b9-4038-b3ad-afd352d29962-cilium-config-path\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.854772 kubelet[2699]: I0124 03:02:59.854696 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-etc-cni-netd\") pod 
\"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.854772 kubelet[2699]: I0124 03:02:59.854752 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-cilium-cgroup\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.855033 kubelet[2699]: I0124 03:02:59.854782 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8ef5967f-63b9-4038-b3ad-afd352d29962-cilium-ipsec-secrets\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.855033 kubelet[2699]: I0124 03:02:59.854811 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-cni-path\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.855033 kubelet[2699]: I0124 03:02:59.854839 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-hostproc\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.855033 kubelet[2699]: I0124 03:02:59.854889 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ef5967f-63b9-4038-b3ad-afd352d29962-clustermesh-secrets\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.855033 
kubelet[2699]: I0124 03:02:59.854923 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ef5967f-63b9-4038-b3ad-afd352d29962-hubble-tls\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.855033 kubelet[2699]: I0124 03:02:59.854971 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-xtables-lock\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.855366 kubelet[2699]: I0124 03:02:59.855006 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-host-proc-sys-net\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.855366 kubelet[2699]: I0124 03:02:59.855032 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-host-proc-sys-kernel\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.855366 kubelet[2699]: I0124 03:02:59.855091 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ef5967f-63b9-4038-b3ad-afd352d29962-bpf-maps\") pod \"cilium-svzr5\" (UID: \"8ef5967f-63b9-4038-b3ad-afd352d29962\") " pod="kube-system/cilium-svzr5" Jan 24 03:02:59.892200 systemd[1]: Started sshd@39-10.243.72.22:22-20.161.92.111:59624.service - OpenSSH per-connection server daemon 
(20.161.92.111:59624). Jan 24 03:03:00.143557 containerd[1507]: time="2026-01-24T03:03:00.143206603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-svzr5,Uid:8ef5967f-63b9-4038-b3ad-afd352d29962,Namespace:kube-system,Attempt:0,}" Jan 24 03:03:00.194860 containerd[1507]: time="2026-01-24T03:03:00.194689510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 03:03:00.195927 containerd[1507]: time="2026-01-24T03:03:00.195615296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 03:03:00.195927 containerd[1507]: time="2026-01-24T03:03:00.195667125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:03:00.196101 containerd[1507]: time="2026-01-24T03:03:00.196004982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 03:03:00.231706 systemd[1]: Started cri-containerd-a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982.scope - libcontainer container a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982. 
Jan 24 03:03:00.272446 containerd[1507]: time="2026-01-24T03:03:00.272305163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-svzr5,Uid:8ef5967f-63b9-4038-b3ad-afd352d29962,Namespace:kube-system,Attempt:0,} returns sandbox id \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\"" Jan 24 03:03:00.280018 containerd[1507]: time="2026-01-24T03:03:00.279793175Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 03:03:00.321440 containerd[1507]: time="2026-01-24T03:03:00.321156481Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83a930b3565856b468e6ee86f96a4a072e82e72487734117dcff7de499894bce\"" Jan 24 03:03:00.324289 containerd[1507]: time="2026-01-24T03:03:00.324234711Z" level=info msg="StartContainer for \"83a930b3565856b468e6ee86f96a4a072e82e72487734117dcff7de499894bce\"" Jan 24 03:03:00.361706 systemd[1]: Started cri-containerd-83a930b3565856b468e6ee86f96a4a072e82e72487734117dcff7de499894bce.scope - libcontainer container 83a930b3565856b468e6ee86f96a4a072e82e72487734117dcff7de499894bce. Jan 24 03:03:00.408975 containerd[1507]: time="2026-01-24T03:03:00.408693742Z" level=info msg="StartContainer for \"83a930b3565856b468e6ee86f96a4a072e82e72487734117dcff7de499894bce\" returns successfully" Jan 24 03:03:00.429894 systemd[1]: cri-containerd-83a930b3565856b468e6ee86f96a4a072e82e72487734117dcff7de499894bce.scope: Deactivated successfully. 
Jan 24 03:03:00.473461 sshd[4528]: Accepted publickey for core from 20.161.92.111 port 59624 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:03:00.475745 sshd[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:03:00.484534 containerd[1507]: time="2026-01-24T03:03:00.481750929Z" level=info msg="shim disconnected" id=83a930b3565856b468e6ee86f96a4a072e82e72487734117dcff7de499894bce namespace=k8s.io Jan 24 03:03:00.484534 containerd[1507]: time="2026-01-24T03:03:00.481819412Z" level=warning msg="cleaning up after shim disconnected" id=83a930b3565856b468e6ee86f96a4a072e82e72487734117dcff7de499894bce namespace=k8s.io Jan 24 03:03:00.484534 containerd[1507]: time="2026-01-24T03:03:00.481834018Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:03:00.485300 systemd-logind[1487]: New session 31 of user core. Jan 24 03:03:00.491628 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 24 03:03:00.874289 sshd[4528]: pam_unix(sshd:session): session closed for user core Jan 24 03:03:00.882495 systemd[1]: sshd@39-10.243.72.22:22-20.161.92.111:59624.service: Deactivated successfully. Jan 24 03:03:00.886321 systemd[1]: session-31.scope: Deactivated successfully. Jan 24 03:03:00.887617 systemd-logind[1487]: Session 31 logged out. Waiting for processes to exit. Jan 24 03:03:00.890115 systemd-logind[1487]: Removed session 31. Jan 24 03:03:00.989739 systemd[1]: Started sshd@40-10.243.72.22:22-20.161.92.111:59628.service - OpenSSH per-connection server daemon (20.161.92.111:59628). Jan 24 03:03:01.513503 containerd[1507]: time="2026-01-24T03:03:01.513364710Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 03:03:01.535054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount715663114.mount: Deactivated successfully. 
Jan 24 03:03:01.538902 containerd[1507]: time="2026-01-24T03:03:01.538845485Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6159935f19727571f207904c7aeb8f77be32b9acc62de512590e7449dc81abfc\"" Jan 24 03:03:01.539905 containerd[1507]: time="2026-01-24T03:03:01.539872484Z" level=info msg="StartContainer for \"6159935f19727571f207904c7aeb8f77be32b9acc62de512590e7449dc81abfc\"" Jan 24 03:03:01.579353 sshd[4643]: Accepted publickey for core from 20.161.92.111 port 59628 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 03:03:01.584355 sshd[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 03:03:01.606144 systemd-logind[1487]: New session 32 of user core. Jan 24 03:03:01.612619 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 24 03:03:01.648783 systemd[1]: Started cri-containerd-6159935f19727571f207904c7aeb8f77be32b9acc62de512590e7449dc81abfc.scope - libcontainer container 6159935f19727571f207904c7aeb8f77be32b9acc62de512590e7449dc81abfc. Jan 24 03:03:01.757527 containerd[1507]: time="2026-01-24T03:03:01.757450105Z" level=info msg="StartContainer for \"6159935f19727571f207904c7aeb8f77be32b9acc62de512590e7449dc81abfc\" returns successfully" Jan 24 03:03:01.776373 systemd[1]: cri-containerd-6159935f19727571f207904c7aeb8f77be32b9acc62de512590e7449dc81abfc.scope: Deactivated successfully. 
Jan 24 03:03:01.811941 containerd[1507]: time="2026-01-24T03:03:01.811586300Z" level=info msg="shim disconnected" id=6159935f19727571f207904c7aeb8f77be32b9acc62de512590e7449dc81abfc namespace=k8s.io Jan 24 03:03:01.811941 containerd[1507]: time="2026-01-24T03:03:01.811675705Z" level=warning msg="cleaning up after shim disconnected" id=6159935f19727571f207904c7aeb8f77be32b9acc62de512590e7449dc81abfc namespace=k8s.io Jan 24 03:03:01.811941 containerd[1507]: time="2026-01-24T03:03:01.811691395Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:03:01.965385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6159935f19727571f207904c7aeb8f77be32b9acc62de512590e7449dc81abfc-rootfs.mount: Deactivated successfully. Jan 24 03:03:02.517406 containerd[1507]: time="2026-01-24T03:03:02.517218789Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 03:03:02.548198 containerd[1507]: time="2026-01-24T03:03:02.548140225Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"276b8670bfba20a1a0b6982a49735b573ea991b93f88bbcacbdfd5a789d7339f\"" Jan 24 03:03:02.549086 containerd[1507]: time="2026-01-24T03:03:02.548926294Z" level=info msg="StartContainer for \"276b8670bfba20a1a0b6982a49735b573ea991b93f88bbcacbdfd5a789d7339f\"" Jan 24 03:03:02.603680 systemd[1]: Started cri-containerd-276b8670bfba20a1a0b6982a49735b573ea991b93f88bbcacbdfd5a789d7339f.scope - libcontainer container 276b8670bfba20a1a0b6982a49735b573ea991b93f88bbcacbdfd5a789d7339f. 
Jan 24 03:03:02.653867 containerd[1507]: time="2026-01-24T03:03:02.653674614Z" level=info msg="StartContainer for \"276b8670bfba20a1a0b6982a49735b573ea991b93f88bbcacbdfd5a789d7339f\" returns successfully" Jan 24 03:03:02.663037 systemd[1]: cri-containerd-276b8670bfba20a1a0b6982a49735b573ea991b93f88bbcacbdfd5a789d7339f.scope: Deactivated successfully. Jan 24 03:03:02.710626 containerd[1507]: time="2026-01-24T03:03:02.709894802Z" level=info msg="shim disconnected" id=276b8670bfba20a1a0b6982a49735b573ea991b93f88bbcacbdfd5a789d7339f namespace=k8s.io Jan 24 03:03:02.711459 containerd[1507]: time="2026-01-24T03:03:02.710372239Z" level=warning msg="cleaning up after shim disconnected" id=276b8670bfba20a1a0b6982a49735b573ea991b93f88bbcacbdfd5a789d7339f namespace=k8s.io Jan 24 03:03:02.711569 containerd[1507]: time="2026-01-24T03:03:02.711456916Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:03:02.965227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-276b8670bfba20a1a0b6982a49735b573ea991b93f88bbcacbdfd5a789d7339f-rootfs.mount: Deactivated successfully. Jan 24 03:03:03.155989 kubelet[2699]: I0124 03:03:03.154533 2699 setters.go:602] "Node became not ready" node="srv-fpdmo.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T03:03:03Z","lastTransitionTime":"2026-01-24T03:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 24 03:03:03.520917 containerd[1507]: time="2026-01-24T03:03:03.520830143Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 03:03:03.545049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139875748.mount: Deactivated successfully. 
Jan 24 03:03:03.552839 containerd[1507]: time="2026-01-24T03:03:03.552745935Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6b4e950848ae1c96b6baa853aed10b09816fb84a00d8356479add38baecc1b0b\"" Jan 24 03:03:03.555429 containerd[1507]: time="2026-01-24T03:03:03.554944072Z" level=info msg="StartContainer for \"6b4e950848ae1c96b6baa853aed10b09816fb84a00d8356479add38baecc1b0b\"" Jan 24 03:03:03.608683 systemd[1]: Started cri-containerd-6b4e950848ae1c96b6baa853aed10b09816fb84a00d8356479add38baecc1b0b.scope - libcontainer container 6b4e950848ae1c96b6baa853aed10b09816fb84a00d8356479add38baecc1b0b. Jan 24 03:03:03.648655 containerd[1507]: time="2026-01-24T03:03:03.648580587Z" level=info msg="StartContainer for \"6b4e950848ae1c96b6baa853aed10b09816fb84a00d8356479add38baecc1b0b\" returns successfully" Jan 24 03:03:03.649816 systemd[1]: cri-containerd-6b4e950848ae1c96b6baa853aed10b09816fb84a00d8356479add38baecc1b0b.scope: Deactivated successfully. 
Jan 24 03:03:03.689211 containerd[1507]: time="2026-01-24T03:03:03.689067345Z" level=info msg="shim disconnected" id=6b4e950848ae1c96b6baa853aed10b09816fb84a00d8356479add38baecc1b0b namespace=k8s.io Jan 24 03:03:03.689211 containerd[1507]: time="2026-01-24T03:03:03.689151975Z" level=warning msg="cleaning up after shim disconnected" id=6b4e950848ae1c96b6baa853aed10b09816fb84a00d8356479add38baecc1b0b namespace=k8s.io Jan 24 03:03:03.689211 containerd[1507]: time="2026-01-24T03:03:03.689168178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 03:03:03.713062 containerd[1507]: time="2026-01-24T03:03:03.712907279Z" level=warning msg="cleanup warnings time=\"2026-01-24T03:03:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 03:03:03.966163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b4e950848ae1c96b6baa853aed10b09816fb84a00d8356479add38baecc1b0b-rootfs.mount: Deactivated successfully. 
Jan 24 03:03:03.985971 kubelet[2699]: E0124 03:03:03.985865 2699 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 24 03:03:04.528903 containerd[1507]: time="2026-01-24T03:03:04.528518829Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 03:03:04.555847 containerd[1507]: time="2026-01-24T03:03:04.555784866Z" level=info msg="CreateContainer within sandbox \"a016cfd4652ab58b042a8f548a1423cabb3515ca7f4e18051594d2d2e9cd1982\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d3c4bd124dd54f326a9719c4e7875ac46ae742f1d8032a1ca6bf2dd213fb154\"" Jan 24 03:03:04.557735 containerd[1507]: time="2026-01-24T03:03:04.557579629Z" level=info msg="StartContainer for \"4d3c4bd124dd54f326a9719c4e7875ac46ae742f1d8032a1ca6bf2dd213fb154\"" Jan 24 03:03:04.621628 systemd[1]: Started cri-containerd-4d3c4bd124dd54f326a9719c4e7875ac46ae742f1d8032a1ca6bf2dd213fb154.scope - libcontainer container 4d3c4bd124dd54f326a9719c4e7875ac46ae742f1d8032a1ca6bf2dd213fb154. 
Jan 24 03:03:04.669463 containerd[1507]: time="2026-01-24T03:03:04.669383834Z" level=info msg="StartContainer for \"4d3c4bd124dd54f326a9719c4e7875ac46ae742f1d8032a1ca6bf2dd213fb154\" returns successfully" Jan 24 03:03:05.439497 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 24 03:03:05.573745 kubelet[2699]: I0124 03:03:05.572998 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-svzr5" podStartSLOduration=6.572932206 podStartE2EDuration="6.572932206s" podCreationTimestamp="2026-01-24 03:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 03:03:05.57253087 +0000 UTC m=+157.209677703" watchObservedRunningTime="2026-01-24 03:03:05.572932206 +0000 UTC m=+157.210079019" Jan 24 03:03:08.177515 systemd[1]: Started sshd@41-10.243.72.22:22-159.223.6.232:41910.service - OpenSSH per-connection server daemon (159.223.6.232:41910). Jan 24 03:03:08.288976 sshd[5186]: Invalid user webmaster from 159.223.6.232 port 41910 Jan 24 03:03:08.308526 sshd[5186]: Connection closed by invalid user webmaster 159.223.6.232 port 41910 [preauth] Jan 24 03:03:08.312657 systemd[1]: sshd@41-10.243.72.22:22-159.223.6.232:41910.service: Deactivated successfully. Jan 24 03:03:09.370817 systemd-networkd[1432]: lxc_health: Link UP Jan 24 03:03:09.379885 systemd-networkd[1432]: lxc_health: Gained carrier Jan 24 03:03:10.526910 systemd-networkd[1432]: lxc_health: Gained IPv6LL Jan 24 03:03:11.071738 systemd[1]: run-containerd-runc-k8s.io-4d3c4bd124dd54f326a9719c4e7875ac46ae742f1d8032a1ca6bf2dd213fb154-runc.objaKQ.mount: Deactivated successfully. Jan 24 03:03:15.810240 sshd[4643]: pam_unix(sshd:session): session closed for user core Jan 24 03:03:15.832211 systemd[1]: sshd@40-10.243.72.22:22-20.161.92.111:59628.service: Deactivated successfully. Jan 24 03:03:15.838790 systemd[1]: session-32.scope: Deactivated successfully. 
Jan 24 03:03:15.840996 systemd-logind[1487]: Session 32 logged out. Waiting for processes to exit. Jan 24 03:03:15.843324 systemd-logind[1487]: Removed session 32.