Jan 24 01:48:46.036949 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 01:48:46.036983 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 01:48:46.036996 kernel: BIOS-provided physical RAM map:
Jan 24 01:48:46.038044 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 01:48:46.038057 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 01:48:46.038067 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 01:48:46.038078 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 24 01:48:46.038088 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 24 01:48:46.038106 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 01:48:46.038116 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 01:48:46.038126 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 01:48:46.038136 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 01:48:46.038166 kernel: NX (Execute Disable) protection: active
Jan 24 01:48:46.038177 kernel: APIC: Static calls initialized
Jan 24 01:48:46.038191 kernel: SMBIOS 2.8 present.
Jan 24 01:48:46.038207 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 24 01:48:46.038219 kernel: Hypervisor detected: KVM
Jan 24 01:48:46.038239 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 01:48:46.038255 kernel: kvm-clock: using sched offset of 5336416019 cycles
Jan 24 01:48:46.038267 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 01:48:46.038278 kernel: tsc: Detected 2799.998 MHz processor
Jan 24 01:48:46.038290 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 01:48:46.038301 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 01:48:46.038312 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 24 01:48:46.038323 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 01:48:46.038334 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 01:48:46.038355 kernel: Using GB pages for direct mapping
Jan 24 01:48:46.038366 kernel: ACPI: Early table checksum verification disabled
Jan 24 01:48:46.038377 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 24 01:48:46.038388 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 01:48:46.038399 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 01:48:46.038410 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 01:48:46.038420 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 24 01:48:46.038431 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 01:48:46.038442 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 01:48:46.038463 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 01:48:46.038474 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 01:48:46.038487 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 24 01:48:46.038498 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 24 01:48:46.038509 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 24 01:48:46.038531 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 24 01:48:46.038543 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 24 01:48:46.038564 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 24 01:48:46.038576 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 24 01:48:46.038587 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 24 01:48:46.038603 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 24 01:48:46.038615 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 24 01:48:46.038626 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 24 01:48:46.038638 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 24 01:48:46.038649 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 24 01:48:46.038670 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 24 01:48:46.038681 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 24 01:48:46.038693 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 24 01:48:46.038704 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 24 01:48:46.038715 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 24 01:48:46.038726 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 24 01:48:46.038737 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 24 01:48:46.038748 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 24 01:48:46.038776 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 24 01:48:46.038800 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 24 01:48:46.038812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 24 01:48:46.038824 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 24 01:48:46.038835 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 24 01:48:46.038847 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 24 01:48:46.038858 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 24 01:48:46.038870 kernel: Zone ranges:
Jan 24 01:48:46.038881 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 01:48:46.038892 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 24 01:48:46.038914 kernel: Normal empty
Jan 24 01:48:46.038926 kernel: Movable zone start for each node
Jan 24 01:48:46.038937 kernel: Early memory node ranges
Jan 24 01:48:46.038988 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 01:48:46.039020 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 24 01:48:46.039033 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 24 01:48:46.039045 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 01:48:46.039056 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 01:48:46.039073 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 24 01:48:46.039085 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 01:48:46.039109 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 01:48:46.039121 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 01:48:46.039132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 01:48:46.039144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 01:48:46.039156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 01:48:46.039167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 01:48:46.039178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 01:48:46.039190 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 01:48:46.039201 kernel: TSC deadline timer available
Jan 24 01:48:46.039223 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 24 01:48:46.039235 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 01:48:46.039247 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 01:48:46.039258 kernel: Booting paravirtualized kernel on KVM
Jan 24 01:48:46.039269 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 01:48:46.039281 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 24 01:48:46.039292 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Jan 24 01:48:46.039304 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Jan 24 01:48:46.039315 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 24 01:48:46.039337 kernel: kvm-guest: PV spinlocks enabled
Jan 24 01:48:46.039349 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 01:48:46.039362 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 01:48:46.039373 kernel: random: crng init done
Jan 24 01:48:46.039385 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 01:48:46.039396 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 24 01:48:46.039408 kernel: Fallback order for Node 0: 0
Jan 24 01:48:46.039419 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 24 01:48:46.039441 kernel: Policy zone: DMA32
Jan 24 01:48:46.039458 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 01:48:46.039470 kernel: software IO TLB: area num 16.
Jan 24 01:48:46.039482 kernel: Memory: 1901596K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194760K reserved, 0K cma-reserved)
Jan 24 01:48:46.039493 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 24 01:48:46.039505 kernel: Kernel/User page tables isolation: enabled
Jan 24 01:48:46.039517 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 01:48:46.039528 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 01:48:46.039540 kernel: Dynamic Preempt: voluntary
Jan 24 01:48:46.039573 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 01:48:46.039585 kernel: rcu: RCU event tracing is enabled.
Jan 24 01:48:46.039597 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 24 01:48:46.039608 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 01:48:46.039620 kernel: Rude variant of Tasks RCU enabled.
Jan 24 01:48:46.039677 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 01:48:46.039698 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 01:48:46.039711 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 24 01:48:46.039723 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 24 01:48:46.039735 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 01:48:46.039747 kernel: Console: colour VGA+ 80x25
Jan 24 01:48:46.039780 kernel: printk: console [tty0] enabled
Jan 24 01:48:46.039793 kernel: printk: console [ttyS0] enabled
Jan 24 01:48:46.039805 kernel: ACPI: Core revision 20230628
Jan 24 01:48:46.039817 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 01:48:46.039829 kernel: x2apic enabled
Jan 24 01:48:46.039841 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 01:48:46.039863 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 24 01:48:46.039880 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 24 01:48:46.039893 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 01:48:46.039905 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 24 01:48:46.039917 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 24 01:48:46.039929 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 01:48:46.039941 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 01:48:46.039953 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 01:48:46.039965 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 24 01:48:46.039987 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 24 01:48:46.040000 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 24 01:48:46.040111 kernel: MDS: Mitigation: Clear CPU buffers
Jan 24 01:48:46.040126 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 24 01:48:46.040137 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 24 01:48:46.040149 kernel: active return thunk: its_return_thunk
Jan 24 01:48:46.040161 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 24 01:48:46.040178 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 01:48:46.040190 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 01:48:46.040202 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 01:48:46.040213 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 01:48:46.040242 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 24 01:48:46.040255 kernel: Freeing SMP alternatives memory: 32K
Jan 24 01:48:46.040271 kernel: pid_max: default: 32768 minimum: 301
Jan 24 01:48:46.040283 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 01:48:46.040303 kernel: landlock: Up and running.
Jan 24 01:48:46.040315 kernel: SELinux: Initializing.
Jan 24 01:48:46.040327 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 01:48:46.040339 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 01:48:46.040351 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 24 01:48:46.040363 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 24 01:48:46.040375 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 24 01:48:46.040398 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 24 01:48:46.040411 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 24 01:48:46.040423 kernel: signal: max sigframe size: 1776
Jan 24 01:48:46.040447 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 01:48:46.040459 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 01:48:46.040471 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 01:48:46.040482 kernel: smp: Bringing up secondary CPUs ...
Jan 24 01:48:46.040494 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 01:48:46.040519 kernel: .... node #0, CPUs: #1
Jan 24 01:48:46.040541 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 24 01:48:46.040554 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 01:48:46.040566 kernel: smpboot: Max logical packages: 16
Jan 24 01:48:46.040578 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 24 01:48:46.040590 kernel: devtmpfs: initialized
Jan 24 01:48:46.040602 kernel: x86/mm: Memory block size: 128MB
Jan 24 01:48:46.040614 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 01:48:46.040626 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 24 01:48:46.040638 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 01:48:46.040660 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 01:48:46.040673 kernel: audit: initializing netlink subsys (disabled)
Jan 24 01:48:46.040697 kernel: audit: type=2000 audit(1769219324.970:1): state=initialized audit_enabled=0 res=1
Jan 24 01:48:46.040708 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 01:48:46.040720 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 01:48:46.040732 kernel: cpuidle: using governor menu
Jan 24 01:48:46.040770 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 01:48:46.040783 kernel: dca service started, version 1.12.1
Jan 24 01:48:46.040795 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 01:48:46.040819 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 01:48:46.040832 kernel: PCI: Using configuration type 1 for base access
Jan 24 01:48:46.040845 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 01:48:46.040857 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 01:48:46.040869 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 01:48:46.040881 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 01:48:46.040893 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 01:48:46.040905 kernel: ACPI: Added _OSI(Module Device)
Jan 24 01:48:46.040917 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 01:48:46.040940 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 01:48:46.040952 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 01:48:46.040964 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 01:48:46.040976 kernel: ACPI: Interpreter enabled
Jan 24 01:48:46.040988 kernel: ACPI: PM: (supports S0 S5)
Jan 24 01:48:46.041000 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 01:48:46.041031 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 01:48:46.041043 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 01:48:46.041055 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 01:48:46.041079 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 01:48:46.041361 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 01:48:46.041543 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 24 01:48:46.041719 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 24 01:48:46.041749 kernel: PCI host bridge to bus 0000:00
Jan 24 01:48:46.041962 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 01:48:46.042163 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 01:48:46.042373 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 01:48:46.042576 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 24 01:48:46.042735 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 01:48:46.042904 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 24 01:48:46.043090 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 01:48:46.043319 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 01:48:46.043548 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 24 01:48:46.043729 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 24 01:48:46.043927 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 24 01:48:46.044116 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 24 01:48:46.044301 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 01:48:46.044495 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 24 01:48:46.044668 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 24 01:48:46.044928 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 24 01:48:46.045122 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 24 01:48:46.045313 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 24 01:48:46.045493 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 24 01:48:46.045720 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 24 01:48:46.045907 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 24 01:48:46.046153 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 24 01:48:46.046328 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 24 01:48:46.046528 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 24 01:48:46.046778 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 24 01:48:46.046977 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 24 01:48:46.047177 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 24 01:48:46.047396 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 24 01:48:46.047604 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 24 01:48:46.047835 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 24 01:48:46.048009 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 24 01:48:46.051232 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 24 01:48:46.051408 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 24 01:48:46.051594 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 24 01:48:46.051861 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 24 01:48:46.052073 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 24 01:48:46.052243 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 24 01:48:46.052409 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 24 01:48:46.052635 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 01:48:46.056166 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 01:48:46.056370 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 01:48:46.056570 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 24 01:48:46.056749 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 24 01:48:46.056947 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 01:48:46.057193 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 01:48:46.057400 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 24 01:48:46.057585 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 24 01:48:46.057814 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 24 01:48:46.057982 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 24 01:48:46.062229 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 01:48:46.062437 kernel: pci_bus 0000:02: extended config space not accessible
Jan 24 01:48:46.062665 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 24 01:48:46.062867 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 24 01:48:46.063124 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 24 01:48:46.063306 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 24 01:48:46.063528 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 24 01:48:46.063700 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 24 01:48:46.063882 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 24 01:48:46.065209 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 24 01:48:46.065402 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 24 01:48:46.065669 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 24 01:48:46.065860 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 24 01:48:46.066057 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 24 01:48:46.066242 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 24 01:48:46.066417 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 24 01:48:46.066594 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 24 01:48:46.066794 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 24 01:48:46.066961 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 24 01:48:46.069224 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 24 01:48:46.069408 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 24 01:48:46.069594 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 24 01:48:46.072095 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 24 01:48:46.072301 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 24 01:48:46.072625 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 24 01:48:46.072818 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 24 01:48:46.072984 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 24 01:48:46.073217 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 24 01:48:46.073407 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 24 01:48:46.073567 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 24 01:48:46.073741 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 24 01:48:46.073785 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 01:48:46.073798 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 01:48:46.073811 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 01:48:46.073823 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 01:48:46.073850 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 01:48:46.073863 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 01:48:46.073875 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 01:48:46.073887 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 01:48:46.073900 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 01:48:46.073912 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 01:48:46.073924 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 01:48:46.073937 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 01:48:46.073949 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 01:48:46.073972 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 01:48:46.073985 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 01:48:46.073997 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 01:48:46.074024 kernel: iommu: Default domain type: Translated
Jan 24 01:48:46.074046 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 01:48:46.074058 kernel: PCI: Using ACPI for IRQ routing
Jan 24 01:48:46.074071 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 01:48:46.074083 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 01:48:46.074095 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 24 01:48:46.074321 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 01:48:46.074494 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 01:48:46.074647 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 01:48:46.074665 kernel: vgaarb: loaded
Jan 24 01:48:46.074677 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 01:48:46.074689 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 01:48:46.074700 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 01:48:46.074712 kernel: pnp: PnP ACPI init
Jan 24 01:48:46.074908 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 01:48:46.074942 kernel: pnp: PnP ACPI: found 5 devices
Jan 24 01:48:46.074955 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 01:48:46.074967 kernel: NET: Registered PF_INET protocol family
Jan 24 01:48:46.074979 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 01:48:46.074992 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 24 01:48:46.077028 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 01:48:46.077053 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 01:48:46.077066 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 24 01:48:46.077098 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 24 01:48:46.077111 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 01:48:46.077123 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 01:48:46.077136 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 01:48:46.077148 kernel: NET: Registered PF_XDP protocol family
Jan 24 01:48:46.077328 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 24 01:48:46.077508 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 24 01:48:46.077686 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 24 01:48:46.077884 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 24 01:48:46.080145 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 24 01:48:46.080413 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 24 01:48:46.080578 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 24 01:48:46.080774 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 24 01:48:46.080942 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 24 01:48:46.081151 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 24 01:48:46.081316 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 24 01:48:46.081491 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 24 01:48:46.081664 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 24 01:48:46.081870 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 24 01:48:46.082062 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 24 01:48:46.082245 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 24 01:48:46.082435 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 24 01:48:46.082696 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 24 01:48:46.082880 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 24 01:48:46.083840 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 24 01:48:46.084050 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 24 01:48:46.084247 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 01:48:46.084438 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 24 01:48:46.084617 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 24 01:48:46.084841 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 24 01:48:46.085088 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 24 01:48:46.085269 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 24 01:48:46.085447 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 24 01:48:46.085624 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 24 01:48:46.085826 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 24 01:48:46.085995 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 24 01:48:46.086230 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 24 01:48:46.086407 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 24 01:48:46.086612 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 24 01:48:46.086820 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 24 01:48:46.086987 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 24 01:48:46.087198 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 24 01:48:46.087370 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 24 01:48:46.087533 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 24 01:48:46.087696 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 24 01:48:46.087919 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 24 01:48:46.088145 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 24 01:48:46.088315 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 24 01:48:46.088479 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 24 01:48:46.088644 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 24 01:48:46.088840 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 24 01:48:46.089120 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 24 01:48:46.089361 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 24 01:48:46.089530 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 24 01:48:46.089695 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 24 01:48:46.089865 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 01:48:46.090032 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 01:48:46.090198 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 01:48:46.090369 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 24 01:48:46.090537 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 01:48:46.090759 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 24 01:48:46.090947 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 24 01:48:46.091153 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 24 01:48:46.091311 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 24 01:48:46.091486 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 24 01:48:46.091683 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 24 01:48:46.091873 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 24 01:48:46.092071 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 24 01:48:46.092250 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 24 01:48:46.092405 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 24 01:48:46.092595 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 24 01:48:46.092777 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 24 01:48:46.092957 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 24 01:48:46.093131 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 24 01:48:46.093306 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 24 01:48:46.093470 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 24 01:48:46.093623 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 24 01:48:46.093833 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 24 01:48:46.093992 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 24 01:48:46.094255 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 24 01:48:46.094434 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 24 01:48:46.094590 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 24 01:48:46.094748 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 24 01:48:46.094964 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 24 01:48:46.095149 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 24 01:48:46.095302 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 24 01:48:46.095336 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 01:48:46.095361 kernel: PCI: CLS 0 bytes, default 64
Jan 24 01:48:46.095374 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 24 01:48:46.095387 kernel: software IO TLB: mapped [mem 
0x0000000079800000-0x000000007d800000] (64MB) Jan 24 01:48:46.095400 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 01:48:46.095413 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 24 01:48:46.095426 kernel: Initialise system trusted keyrings Jan 24 01:48:46.095439 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 24 01:48:46.095470 kernel: Key type asymmetric registered Jan 24 01:48:46.095483 kernel: Asymmetric key parser 'x509' registered Jan 24 01:48:46.095496 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 01:48:46.095508 kernel: io scheduler mq-deadline registered Jan 24 01:48:46.095521 kernel: io scheduler kyber registered Jan 24 01:48:46.095534 kernel: io scheduler bfq registered Jan 24 01:48:46.095726 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 24 01:48:46.095925 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 24 01:48:46.096148 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 01:48:46.096363 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 24 01:48:46.096529 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 24 01:48:46.096693 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 01:48:46.096895 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 24 01:48:46.097090 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 24 01:48:46.097320 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 01:48:46.097527 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 24 01:48:46.097693 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Jan 24 01:48:46.097877 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 01:48:46.098097 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 24 01:48:46.098276 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 24 01:48:46.098464 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 01:48:46.098668 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 24 01:48:46.098893 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 24 01:48:46.099076 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 01:48:46.099242 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 24 01:48:46.099429 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 24 01:48:46.099606 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 01:48:46.099838 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 24 01:48:46.100090 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 24 01:48:46.100260 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 24 01:48:46.100281 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 01:48:46.100295 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 01:48:46.100309 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 01:48:46.100355 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 01:48:46.100369 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 01:48:46.100382 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Jan 24 01:48:46.100395 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 01:48:46.100408 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 01:48:46.100579 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 01:48:46.100601 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 01:48:46.100768 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 01:48:46.100958 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T01:48:45 UTC (1769219325) Jan 24 01:48:46.101145 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 24 01:48:46.101164 kernel: intel_pstate: CPU model not supported Jan 24 01:48:46.101189 kernel: NET: Registered PF_INET6 protocol family Jan 24 01:48:46.101209 kernel: Segment Routing with IPv6 Jan 24 01:48:46.101222 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 01:48:46.101234 kernel: NET: Registered PF_PACKET protocol family Jan 24 01:48:46.101247 kernel: Key type dns_resolver registered Jan 24 01:48:46.101262 kernel: IPI shorthand broadcast: enabled Jan 24 01:48:46.101306 kernel: sched_clock: Marking stable (1734004016, 232868489)->(2101695037, -134822532) Jan 24 01:48:46.101320 kernel: registered taskstats version 1 Jan 24 01:48:46.101333 kernel: Loading compiled-in X.509 certificates Jan 24 01:48:46.101346 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 01:48:46.101359 kernel: Key type .fscrypt registered Jan 24 01:48:46.101372 kernel: Key type fscrypt-provisioning registered Jan 24 01:48:46.101384 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 24 01:48:46.101397 kernel: ima: Allocated hash algorithm: sha1 Jan 24 01:48:46.101441 kernel: ima: No architecture policies found Jan 24 01:48:46.101484 kernel: clk: Disabling unused clocks Jan 24 01:48:46.101498 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 01:48:46.101511 kernel: Write protecting the kernel read-only data: 36864k Jan 24 01:48:46.101524 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 01:48:46.101537 kernel: Run /init as init process Jan 24 01:48:46.101549 kernel: with arguments: Jan 24 01:48:46.101562 kernel: /init Jan 24 01:48:46.101574 kernel: with environment: Jan 24 01:48:46.101587 kernel: HOME=/ Jan 24 01:48:46.101599 kernel: TERM=linux Jan 24 01:48:46.101643 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 01:48:46.101659 systemd[1]: Detected virtualization kvm. Jan 24 01:48:46.101684 systemd[1]: Detected architecture x86-64. Jan 24 01:48:46.101698 systemd[1]: Running in initrd. Jan 24 01:48:46.101711 systemd[1]: No hostname configured, using default hostname. Jan 24 01:48:46.101724 systemd[1]: Hostname set to . Jan 24 01:48:46.101738 systemd[1]: Initializing machine ID from VM UUID. Jan 24 01:48:46.101790 systemd[1]: Queued start job for default target initrd.target. Jan 24 01:48:46.101804 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 01:48:46.101817 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 01:48:46.101831 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 24 01:48:46.101845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 01:48:46.101858 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 01:48:46.101872 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 01:48:46.101914 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 01:48:46.101929 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 01:48:46.101943 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 01:48:46.101956 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 01:48:46.101970 systemd[1]: Reached target paths.target - Path Units. Jan 24 01:48:46.101983 systemd[1]: Reached target slices.target - Slice Units. Jan 24 01:48:46.101997 systemd[1]: Reached target swap.target - Swaps. Jan 24 01:48:46.102066 systemd[1]: Reached target timers.target - Timer Units. Jan 24 01:48:46.102112 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 01:48:46.102126 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 01:48:46.102140 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 01:48:46.102154 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 01:48:46.102167 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 01:48:46.102181 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 01:48:46.102194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 01:48:46.102208 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 24 01:48:46.102248 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 01:48:46.102263 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 01:48:46.102284 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 01:48:46.102297 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 01:48:46.102311 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 01:48:46.102325 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 01:48:46.102338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 01:48:46.102352 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 01:48:46.102365 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 01:48:46.102410 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 01:48:46.102465 systemd-journald[203]: Collecting audit messages is disabled. Jan 24 01:48:46.102525 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 01:48:46.102540 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 01:48:46.102554 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 01:48:46.102567 kernel: Bridge firewalling registered Jan 24 01:48:46.102581 systemd-journald[203]: Journal started Jan 24 01:48:46.102618 systemd-journald[203]: Runtime Journal (/run/log/journal/e65186cb718f4de8b31b0e004d9a3c2d) is 4.7M, max 38.0M, 33.2M free. Jan 24 01:48:46.051094 systemd-modules-load[204]: Inserted module 'overlay' Jan 24 01:48:46.083110 systemd-modules-load[204]: Inserted module 'br_netfilter' Jan 24 01:48:46.143897 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 24 01:48:46.145267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 01:48:46.146288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 01:48:46.157345 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 01:48:46.159626 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 01:48:46.163241 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 01:48:46.170228 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 01:48:46.195445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 01:48:46.198744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 01:48:46.199863 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 01:48:46.202893 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 01:48:46.217250 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 01:48:46.221162 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 01:48:46.236542 dracut-cmdline[239]: dracut-dracut-053 Jan 24 01:48:46.242036 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 01:48:46.272218 systemd-resolved[240]: Positive Trust Anchors: Jan 24 01:48:46.273131 systemd-resolved[240]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 01:48:46.273175 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 01:48:46.280634 systemd-resolved[240]: Defaulting to hostname 'linux'. Jan 24 01:48:46.282786 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 01:48:46.284095 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 01:48:46.348070 kernel: SCSI subsystem initialized Jan 24 01:48:46.360039 kernel: Loading iSCSI transport class v2.0-870. Jan 24 01:48:46.372038 kernel: iscsi: registered transport (tcp) Jan 24 01:48:46.398553 kernel: iscsi: registered transport (qla4xxx) Jan 24 01:48:46.398650 kernel: QLogic iSCSI HBA Driver Jan 24 01:48:46.455516 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 01:48:46.465312 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 01:48:46.505180 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 24 01:48:46.505264 kernel: device-mapper: uevent: version 1.0.3 Jan 24 01:48:46.508550 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 01:48:46.556071 kernel: raid6: sse2x4 gen() 13063 MB/s Jan 24 01:48:46.574064 kernel: raid6: sse2x2 gen() 8956 MB/s Jan 24 01:48:46.592810 kernel: raid6: sse2x1 gen() 9804 MB/s Jan 24 01:48:46.592855 kernel: raid6: using algorithm sse2x4 gen() 13063 MB/s Jan 24 01:48:46.611679 kernel: raid6: .... xor() 7634 MB/s, rmw enabled Jan 24 01:48:46.611762 kernel: raid6: using ssse3x2 recovery algorithm Jan 24 01:48:46.637049 kernel: xor: automatically using best checksumming function avx Jan 24 01:48:46.831059 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 01:48:46.845864 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 01:48:46.854329 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 01:48:46.873842 systemd-udevd[423]: Using default interface naming scheme 'v255'. Jan 24 01:48:46.880980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 01:48:46.890206 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 01:48:46.911682 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Jan 24 01:48:46.952700 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 01:48:46.959223 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 01:48:47.076147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 01:48:47.085209 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 01:48:47.116188 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 01:48:47.120870 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 24 01:48:47.121693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 01:48:47.125429 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 01:48:47.131397 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 01:48:47.158693 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 01:48:47.204042 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 24 01:48:47.215436 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 01:48:47.219079 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 24 01:48:47.237457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 01:48:47.237545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 01:48:47.245055 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 01:48:47.260361 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 01:48:47.260392 kernel: GPT:17805311 != 125829119 Jan 24 01:48:47.260410 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 01:48:47.260427 kernel: GPT:17805311 != 125829119 Jan 24 01:48:47.260443 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 01:48:47.260460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 01:48:47.245838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 01:48:47.245911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 01:48:47.246771 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 01:48:47.268054 kernel: libata version 3.00 loaded. Jan 24 01:48:47.278301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 01:48:47.295438 kernel: AVX version of gcm_enc/dec engaged. 
Jan 24 01:48:47.300031 kernel: AES CTR mode by8 optimization enabled Jan 24 01:48:47.303126 kernel: ACPI: bus type USB registered Jan 24 01:48:47.306869 kernel: usbcore: registered new interface driver usbfs Jan 24 01:48:47.306906 kernel: usbcore: registered new interface driver hub Jan 24 01:48:47.309061 kernel: usbcore: registered new device driver usb Jan 24 01:48:47.342045 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (468) Jan 24 01:48:47.349049 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 24 01:48:47.349434 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 24 01:48:47.352024 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 24 01:48:47.352325 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 24 01:48:47.353095 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 24 01:48:47.353402 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 24 01:48:47.360125 kernel: hub 1-0:1.0: USB hub found Jan 24 01:48:47.360392 kernel: hub 1-0:1.0: 4 ports detected Jan 24 01:48:47.360615 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 24 01:48:47.360880 kernel: hub 2-0:1.0: USB hub found Jan 24 01:48:47.362454 kernel: hub 2-0:1.0: 4 ports detected Jan 24 01:48:47.365225 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Jan 24 01:48:47.379028 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 01:48:47.390273 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 01:48:47.384069 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 24 01:48:47.489592 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 01:48:47.489949 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 01:48:47.490219 kernel: scsi host0: ahci Jan 24 01:48:47.490460 kernel: scsi host1: ahci Jan 24 01:48:47.490714 kernel: scsi host2: ahci Jan 24 01:48:47.490959 kernel: scsi host3: ahci Jan 24 01:48:47.491233 kernel: scsi host4: ahci Jan 24 01:48:47.491439 kernel: scsi host5: ahci Jan 24 01:48:47.491689 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 24 01:48:47.491734 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 24 01:48:47.491774 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 24 01:48:47.491794 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 24 01:48:47.491811 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 24 01:48:47.491828 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 24 01:48:47.496233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 01:48:47.504381 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 01:48:47.510561 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 01:48:47.511461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 01:48:47.519760 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 01:48:47.531288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 01:48:47.535563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 24 01:48:47.547051 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 01:48:47.547142 disk-uuid[567]: Primary Header is updated. Jan 24 01:48:47.547142 disk-uuid[567]: Secondary Entries is updated. Jan 24 01:48:47.547142 disk-uuid[567]: Secondary Header is updated. Jan 24 01:48:47.557037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 01:48:47.567072 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 01:48:47.567950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 01:48:47.605480 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 24 01:48:47.745613 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 01:48:47.749139 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 01:48:47.749182 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 01:48:47.750851 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 01:48:47.754101 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 01:48:47.754141 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 01:48:47.771041 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 01:48:47.791672 kernel: usbcore: registered new interface driver usbhid Jan 24 01:48:47.791747 kernel: usbhid: USB HID core driver Jan 24 01:48:47.806394 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 24 01:48:47.806444 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 24 01:48:48.566076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 01:48:48.567472 disk-uuid[568]: The operation has completed successfully. Jan 24 01:48:48.615434 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 01:48:48.615600 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 24 01:48:48.644274 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 01:48:48.660130 sh[589]: Success Jan 24 01:48:48.678152 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 24 01:48:48.739995 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 01:48:48.749273 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 01:48:48.753688 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 01:48:48.775045 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 01:48:48.775101 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 01:48:48.776669 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 01:48:48.779954 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 01:48:48.780027 kernel: BTRFS info (device dm-0): using free space tree Jan 24 01:48:48.791261 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 01:48:48.792816 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 01:48:48.799231 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 01:48:48.808287 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 01:48:48.829139 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 01:48:48.829207 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 01:48:48.829226 kernel: BTRFS info (device vda6): using free space tree Jan 24 01:48:48.834035 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 01:48:48.848338 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 24 01:48:48.851053 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 01:48:48.859129 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 01:48:48.865488 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 01:48:49.089202 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 01:48:49.108344 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 01:48:49.134084 ignition[691]: Ignition 2.19.0 Jan 24 01:48:49.134113 ignition[691]: Stage: fetch-offline Jan 24 01:48:49.136739 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 01:48:49.134213 ignition[691]: no configs at "/usr/lib/ignition/base.d" Jan 24 01:48:49.134242 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 24 01:48:49.134418 ignition[691]: parsed url from cmdline: "" Jan 24 01:48:49.134425 ignition[691]: no config URL provided Jan 24 01:48:49.134435 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 01:48:49.134450 ignition[691]: no config at "/usr/lib/ignition/user.ign" Jan 24 01:48:49.134460 ignition[691]: failed to fetch config: resource requires networking Jan 24 01:48:49.134780 ignition[691]: Ignition finished successfully Jan 24 01:48:49.145610 systemd-networkd[775]: lo: Link UP Jan 24 01:48:49.145617 systemd-networkd[775]: lo: Gained carrier Jan 24 01:48:49.148138 systemd-networkd[775]: Enumeration completed Jan 24 01:48:49.148293 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 01:48:49.148669 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 01:48:49.148674 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 24 01:48:49.150384 systemd-networkd[775]: eth0: Link UP
Jan 24 01:48:49.150389 systemd-networkd[775]: eth0: Gained carrier
Jan 24 01:48:49.150401 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 01:48:49.150676 systemd[1]: Reached target network.target - Network.
Jan 24 01:48:49.160259 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 24 01:48:49.176777 systemd-networkd[775]: eth0: DHCPv4 address 10.230.77.170/30, gateway 10.230.77.169 acquired from 10.230.77.169
Jan 24 01:48:49.194377 ignition[779]: Ignition 2.19.0
Jan 24 01:48:49.195087 ignition[779]: Stage: fetch
Jan 24 01:48:49.195377 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 24 01:48:49.195402 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 01:48:49.195547 ignition[779]: parsed url from cmdline: ""
Jan 24 01:48:49.195555 ignition[779]: no config URL provided
Jan 24 01:48:49.195578 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 01:48:49.195598 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Jan 24 01:48:49.195808 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 24 01:48:49.197299 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 24 01:48:49.197328 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 24 01:48:49.211938 ignition[779]: GET result: OK
Jan 24 01:48:49.212510 ignition[779]: parsing config with SHA512: 67d18cbed3f8e559b9ecd99fbde5d18767b63594bc2abafdf6b16e59d681e0139cb4bb68d7d139d46106fc8739a8930cb8f49478a8d41ab4443cd96bbf048c65
Jan 24 01:48:49.218038 unknown[779]: fetched base config from "system"
Jan 24 01:48:49.218058 unknown[779]: fetched base config from "system"
Jan 24 01:48:49.218556 ignition[779]: fetch: fetch complete
Jan 24 01:48:49.218077 unknown[779]: fetched user config from "openstack"
Jan 24 01:48:49.218564 ignition[779]: fetch: fetch passed
Jan 24 01:48:49.220598 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 01:48:49.218639 ignition[779]: Ignition finished successfully
Jan 24 01:48:49.229310 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 01:48:49.264748 ignition[786]: Ignition 2.19.0
Jan 24 01:48:49.264772 ignition[786]: Stage: kargs
Jan 24 01:48:49.266841 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 24 01:48:49.266873 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 01:48:49.269676 ignition[786]: kargs: kargs passed
Jan 24 01:48:49.269761 ignition[786]: Ignition finished successfully
Jan 24 01:48:49.271594 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 01:48:49.304096 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 01:48:49.333127 ignition[793]: Ignition 2.19.0
Jan 24 01:48:49.334237 ignition[793]: Stage: disks
Jan 24 01:48:49.334516 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 24 01:48:49.334537 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 01:48:49.338713 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 01:48:49.335625 ignition[793]: disks: disks passed
Jan 24 01:48:49.340189 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 01:48:49.335706 ignition[793]: Ignition finished successfully
Jan 24 01:48:49.341314 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 01:48:49.342626 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 01:48:49.344099 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 01:48:49.345345 systemd[1]: Reached target basic.target - Basic System.
Jan 24 01:48:49.354279 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 01:48:49.374289 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 24 01:48:49.378514 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 01:48:49.385140 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 01:48:49.513033 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 01:48:49.513964 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 01:48:49.515340 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 01:48:49.526166 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 01:48:49.529148 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 01:48:49.532479 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 01:48:49.539120 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Jan 24 01:48:49.542069 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 01:48:49.542104 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 01:48:49.542131 kernel: BTRFS info (device vda6): using free space tree
Jan 24 01:48:49.542872 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 24 01:48:49.551801 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 01:48:49.550838 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 01:48:49.550884 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 01:48:49.553809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 01:48:49.555060 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 01:48:49.570641 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 01:48:49.639328 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 01:48:49.649051 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jan 24 01:48:49.651865 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 01:48:49.661034 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 01:48:49.778260 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 01:48:49.784182 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 01:48:49.788249 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 01:48:49.802587 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 01:48:49.804627 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 01:48:49.826182 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 01:48:49.842036 ignition[928]: INFO : Ignition 2.19.0
Jan 24 01:48:49.842036 ignition[928]: INFO : Stage: mount
Jan 24 01:48:49.842036 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 01:48:49.842036 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 01:48:49.846242 ignition[928]: INFO : mount: mount passed
Jan 24 01:48:49.846242 ignition[928]: INFO : Ignition finished successfully
Jan 24 01:48:49.846971 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 01:48:50.281263 systemd-networkd[775]: eth0: Gained IPv6LL
Jan 24 01:48:51.788351 systemd-networkd[775]: eth0: Ignoring DHCPv6 address 2a02:1348:179:936a:24:19ff:fee6:4daa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:936a:24:19ff:fee6:4daa/64 assigned by NDisc.
Jan 24 01:48:51.788369 systemd-networkd[775]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 24 01:48:56.734589 coreos-metadata[812]: Jan 24 01:48:56.734 WARN failed to locate config-drive, using the metadata service API instead
Jan 24 01:48:56.757317 coreos-metadata[812]: Jan 24 01:48:56.757 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 24 01:48:56.770114 coreos-metadata[812]: Jan 24 01:48:56.770 INFO Fetch successful
Jan 24 01:48:56.770910 coreos-metadata[812]: Jan 24 01:48:56.770 INFO wrote hostname srv-58cs2.gb1.brightbox.com to /sysroot/etc/hostname
Jan 24 01:48:56.772489 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 24 01:48:56.772673 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 24 01:48:56.792170 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 01:48:56.800776 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 01:48:56.827046 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Jan 24 01:48:56.830343 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 01:48:56.830384 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 01:48:56.832081 kernel: BTRFS info (device vda6): using free space tree
Jan 24 01:48:56.838046 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 01:48:56.840808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 01:48:56.881118 ignition[963]: INFO : Ignition 2.19.0
Jan 24 01:48:56.881118 ignition[963]: INFO : Stage: files
Jan 24 01:48:56.882959 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 01:48:56.882959 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 01:48:56.882959 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 01:48:56.885661 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 01:48:56.885661 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 01:48:56.887767 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 01:48:56.889027 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 01:48:56.890562 unknown[963]: wrote ssh authorized keys file for user: core
Jan 24 01:48:56.891652 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 01:48:56.894461 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 01:48:56.894461 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 24 01:48:57.096106 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 01:48:57.450864 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 01:48:57.450864 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 24 01:48:57.456954 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 24 01:48:57.939941 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 24 01:48:58.406175 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 24 01:48:58.406175 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 01:48:58.406175 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 01:48:58.406175 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 01:48:58.411425 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 24 01:48:58.748704 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 24 01:49:02.125327 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 01:49:02.125327 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 24 01:49:02.139648 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 01:49:02.139648 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 01:49:02.139648 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 24 01:49:02.139648 ignition[963]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 01:49:02.139648 ignition[963]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 01:49:02.139648 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 01:49:02.139648 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 01:49:02.139648 ignition[963]: INFO : files: files passed
Jan 24 01:49:02.139648 ignition[963]: INFO : Ignition finished successfully
Jan 24 01:49:02.140850 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 01:49:02.154367 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 01:49:02.161721 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 01:49:02.164223 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 01:49:02.165167 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 01:49:02.190079 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 01:49:02.190079 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 01:49:02.193032 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 01:49:02.194827 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 01:49:02.196270 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 01:49:02.202209 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 01:49:02.265385 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 01:49:02.265586 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 01:49:02.267300 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 01:49:02.268537 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 01:49:02.270044 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 01:49:02.276239 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 01:49:02.294691 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 01:49:02.301241 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 01:49:02.317315 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 01:49:02.319257 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 01:49:02.320258 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 01:49:02.321774 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 01:49:02.321968 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 01:49:02.323819 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 01:49:02.324656 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 01:49:02.326122 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 01:49:02.327563 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 01:49:02.328942 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 01:49:02.330687 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 01:49:02.332288 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 01:49:02.333930 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 01:49:02.335323 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 01:49:02.336774 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 01:49:02.338095 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 01:49:02.338301 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 01:49:02.340007 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 01:49:02.340962 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 01:49:02.342341 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 01:49:02.342557 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 01:49:02.343787 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 01:49:02.343953 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 01:49:02.345993 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 01:49:02.346187 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 01:49:02.347754 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 01:49:02.347904 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 01:49:02.354252 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 01:49:02.358128 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 01:49:02.358800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 01:49:02.359078 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 01:49:02.362605 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 01:49:02.362799 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 01:49:02.383542 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 01:49:02.383688 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 01:49:02.392629 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 01:49:02.398145 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 01:49:02.398366 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 01:49:02.405653 ignition[1015]: INFO : Ignition 2.19.0
Jan 24 01:49:02.407049 ignition[1015]: INFO : Stage: umount
Jan 24 01:49:02.409024 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 01:49:02.409024 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 24 01:49:02.409024 ignition[1015]: INFO : umount: umount passed
Jan 24 01:49:02.411396 ignition[1015]: INFO : Ignition finished successfully
Jan 24 01:49:02.413023 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 01:49:02.413277 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 01:49:02.414830 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 01:49:02.414930 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 01:49:02.415843 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 01:49:02.415929 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 01:49:02.417232 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 24 01:49:02.417317 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 24 01:49:02.418457 systemd[1]: Stopped target network.target - Network.
Jan 24 01:49:02.419622 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 01:49:02.419700 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 01:49:02.420998 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 01:49:02.422259 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 01:49:02.422318 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 01:49:02.423649 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 01:49:02.424906 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 01:49:02.426361 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 01:49:02.426451 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 01:49:02.427791 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 01:49:02.427865 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 01:49:02.429153 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 01:49:02.429237 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 01:49:02.430717 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 01:49:02.430832 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 01:49:02.432322 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 01:49:02.432477 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 01:49:02.433954 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 01:49:02.435708 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 01:49:02.438255 systemd-networkd[775]: eth0: DHCPv6 lease lost
Jan 24 01:49:02.443772 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 01:49:02.443962 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 01:49:02.445898 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 01:49:02.446122 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 01:49:02.450105 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 01:49:02.451206 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 01:49:02.460160 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 01:49:02.462545 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 01:49:02.462625 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 01:49:02.463580 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 01:49:02.463647 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 01:49:02.464430 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 01:49:02.464494 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 01:49:02.465819 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 01:49:02.465883 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 01:49:02.467660 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 01:49:02.478500 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 01:49:02.478752 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 01:49:02.481941 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 01:49:02.482088 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 01:49:02.485210 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 01:49:02.485301 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 01:49:02.486842 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 01:49:02.486898 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 01:49:02.488434 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 01:49:02.488513 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 01:49:02.490633 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 01:49:02.490703 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 01:49:02.491967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 01:49:02.492131 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 01:49:02.499217 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 01:49:02.500019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 01:49:02.500092 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 01:49:02.501588 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 24 01:49:02.501654 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 01:49:02.503923 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 01:49:02.503989 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 01:49:02.505716 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 01:49:02.505796 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 01:49:02.517624 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 01:49:02.517787 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 01:49:02.519604 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 01:49:02.530227 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 01:49:02.542393 systemd[1]: Switching root.
Jan 24 01:49:02.575608 systemd-journald[203]: Journal stopped
Jan 24 01:49:04.058818 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 24 01:49:04.060090 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 01:49:04.060145 kernel: SELinux: policy capability open_perms=1
Jan 24 01:49:04.060180 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 01:49:04.060213 kernel: SELinux: policy capability always_check_network=0
Jan 24 01:49:04.060240 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 01:49:04.060294 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 01:49:04.060327 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 01:49:04.060392 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 01:49:04.060430 kernel: audit: type=1403 audit(1769219342.835:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 01:49:04.060466 systemd[1]: Successfully loaded SELinux policy in 56.923ms.
Jan 24 01:49:04.060508 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.455ms.
Jan 24 01:49:04.060538 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 01:49:04.060560 systemd[1]: Detected virtualization kvm.
Jan 24 01:49:04.060587 systemd[1]: Detected architecture x86-64.
Jan 24 01:49:04.060615 systemd[1]: Detected first boot.
Jan 24 01:49:04.060636 systemd[1]: Hostname set to .
Jan 24 01:49:04.060655 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 01:49:04.060688 zram_generator::config[1057]: No configuration found.
Jan 24 01:49:04.060727 systemd[1]: Populated /etc with preset unit settings.
Jan 24 01:49:04.060749 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 01:49:04.060776 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 01:49:04.060797 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 01:49:04.060824 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 01:49:04.060858 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 01:49:04.060886 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 01:49:04.060923 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 01:49:04.060953 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 01:49:04.060974 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 01:49:04.061001 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 01:49:04.061036 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 01:49:04.061070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 01:49:04.061092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 01:49:04.061120 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 01:49:04.061141 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 01:49:04.061178 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 01:49:04.061200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 01:49:04.061237 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 01:49:04.061257 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 01:49:04.061287 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 01:49:04.061312 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 01:49:04.061346 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 01:49:04.061380 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 01:49:04.061409 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 01:49:04.061430 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 01:49:04.061457 systemd[1]: Reached target slices.target - Slice Units. Jan 24 01:49:04.061478 systemd[1]: Reached target swap.target - Swaps. Jan 24 01:49:04.061511 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 01:49:04.061551 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 01:49:04.061585 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 01:49:04.061607 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 01:49:04.061637 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 01:49:04.061658 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 01:49:04.061685 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 01:49:04.061713 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 01:49:04.061745 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 01:49:04.061777 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 01:49:04.061799 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 24 01:49:04.061818 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 01:49:04.061844 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 01:49:04.061878 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 01:49:04.061898 systemd[1]: Reached target machines.target - Containers. Jan 24 01:49:04.061918 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 01:49:04.061938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 01:49:04.063887 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 01:49:04.063959 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 01:49:04.063988 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 01:49:04.064031 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 01:49:04.064054 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 01:49:04.064074 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 01:49:04.064094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 01:49:04.064116 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 01:49:04.064164 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 01:49:04.064186 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 01:49:04.064215 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 01:49:04.064243 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 24 01:49:04.064264 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 01:49:04.064284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 01:49:04.064304 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 01:49:04.064325 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 01:49:04.064344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 01:49:04.064376 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 01:49:04.064412 systemd[1]: Stopped verity-setup.service. Jan 24 01:49:04.064443 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 01:49:04.064465 kernel: ACPI: bus type drm_connector registered Jan 24 01:49:04.064492 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 01:49:04.064519 kernel: fuse: init (API version 7.39) Jan 24 01:49:04.064540 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 01:49:04.064560 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 01:49:04.064594 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 01:49:04.064623 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 01:49:04.064651 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 01:49:04.064680 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 01:49:04.064701 kernel: loop: module loaded Jan 24 01:49:04.064728 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 01:49:04.064762 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 01:49:04.064784 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 24 01:49:04.064805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 01:49:04.064838 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 01:49:04.064862 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 01:49:04.064895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 01:49:04.064917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 01:49:04.064937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 01:49:04.064987 systemd-journald[1150]: Collecting audit messages is disabled. Jan 24 01:49:04.066110 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 01:49:04.066164 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 01:49:04.066186 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 01:49:04.066219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 01:49:04.066239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 01:49:04.066273 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 01:49:04.066296 systemd-journald[1150]: Journal started Jan 24 01:49:04.066327 systemd-journald[1150]: Runtime Journal (/run/log/journal/e65186cb718f4de8b31b0e004d9a3c2d) is 4.7M, max 38.0M, 33.2M free. Jan 24 01:49:03.630629 systemd[1]: Queued start job for default target multi-user.target. Jan 24 01:49:03.650943 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 01:49:03.651713 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 01:49:04.070082 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 01:49:04.071472 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 24 01:49:04.089848 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 01:49:04.097132 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 01:49:04.106115 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 01:49:04.108122 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 01:49:04.108167 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 01:49:04.111647 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 01:49:04.119648 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 01:49:04.129285 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 01:49:04.130281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 01:49:04.141215 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 01:49:04.143696 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 01:49:04.145041 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 01:49:04.149619 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 01:49:04.151065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 01:49:04.158263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 01:49:04.162625 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 24 01:49:04.176327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 01:49:04.183023 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 01:49:04.184221 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 01:49:04.185808 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 01:49:04.218186 systemd-journald[1150]: Time spent on flushing to /var/log/journal/e65186cb718f4de8b31b0e004d9a3c2d is 238.212ms for 1143 entries. Jan 24 01:49:04.218186 systemd-journald[1150]: System Journal (/var/log/journal/e65186cb718f4de8b31b0e004d9a3c2d) is 8.0M, max 584.8M, 576.8M free. Jan 24 01:49:04.553394 systemd-journald[1150]: Received client request to flush runtime journal. Jan 24 01:49:04.553759 kernel: loop0: detected capacity change from 0 to 140768 Jan 24 01:49:04.553812 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 01:49:04.554024 kernel: loop1: detected capacity change from 0 to 8 Jan 24 01:49:04.240944 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 01:49:04.242544 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 01:49:04.250279 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 01:49:04.427499 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 24 01:49:04.427520 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 24 01:49:04.442022 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 01:49:04.444468 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 01:49:04.458236 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 01:49:04.487102 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 24 01:49:04.500247 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 01:49:04.529834 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 01:49:04.541218 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 01:49:04.557685 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 01:49:04.577486 kernel: loop2: detected capacity change from 0 to 142488 Jan 24 01:49:04.591776 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 24 01:49:04.609798 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 01:49:04.629300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 01:49:04.642148 kernel: loop3: detected capacity change from 0 to 229808 Jan 24 01:49:04.709225 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 24 01:49:04.709254 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 24 01:49:04.717030 kernel: loop4: detected capacity change from 0 to 140768 Jan 24 01:49:04.722286 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 01:49:04.765171 kernel: loop5: detected capacity change from 0 to 8 Jan 24 01:49:04.776038 kernel: loop6: detected capacity change from 0 to 142488 Jan 24 01:49:04.818033 kernel: loop7: detected capacity change from 0 to 229808 Jan 24 01:49:04.851479 (sd-merge)[1218]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 24 01:49:04.853307 (sd-merge)[1218]: Merged extensions into '/usr'. Jan 24 01:49:04.864773 systemd[1]: Reloading requested from client PID 1190 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 01:49:04.866853 systemd[1]: Reloading... 
Jan 24 01:49:05.050132 zram_generator::config[1244]: No configuration found. Jan 24 01:49:05.255094 ldconfig[1185]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 01:49:05.270295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 01:49:05.335543 systemd[1]: Reloading finished in 467 ms. Jan 24 01:49:05.373454 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 01:49:05.379885 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 01:49:05.392284 systemd[1]: Starting ensure-sysext.service... Jan 24 01:49:05.403634 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 01:49:05.420396 systemd[1]: Reloading requested from client PID 1301 ('systemctl') (unit ensure-sysext.service)... Jan 24 01:49:05.420569 systemd[1]: Reloading... Jan 24 01:49:05.521861 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 01:49:05.523546 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 01:49:05.525560 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 01:49:05.526000 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Jan 24 01:49:05.528173 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Jan 24 01:49:05.535304 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 01:49:05.535329 systemd-tmpfiles[1302]: Skipping /boot Jan 24 01:49:05.537058 zram_generator::config[1328]: No configuration found. 
Jan 24 01:49:05.570238 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 01:49:05.570259 systemd-tmpfiles[1302]: Skipping /boot Jan 24 01:49:05.718348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 01:49:05.785508 systemd[1]: Reloading finished in 364 ms. Jan 24 01:49:05.809935 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 01:49:05.815727 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 01:49:05.827232 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 01:49:05.835927 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 01:49:05.840556 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 01:49:05.853233 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 01:49:05.863731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 01:49:05.874218 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 01:49:05.888923 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 01:49:05.892849 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 01:49:05.893492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 01:49:05.896063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 01:49:05.905319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 24 01:49:05.911510 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 01:49:05.912449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 01:49:05.912590 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 01:49:05.918837 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 01:49:05.919533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 01:49:05.919973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 01:49:05.920417 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 01:49:05.923230 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 01:49:05.927608 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 01:49:05.932645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 01:49:05.932953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 01:49:05.939914 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 01:49:05.940837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 01:49:05.949583 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 24 01:49:05.950360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 01:49:05.953522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 01:49:05.953778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 01:49:05.961125 systemd[1]: Finished ensure-sysext.service. Jan 24 01:49:05.965185 systemd-udevd[1392]: Using default interface naming scheme 'v255'. Jan 24 01:49:05.977543 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 01:49:05.993566 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 01:49:05.995098 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 01:49:06.003125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 01:49:06.010715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 01:49:06.011037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 01:49:06.012383 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 01:49:06.012712 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 01:49:06.014533 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 01:49:06.017214 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 01:49:06.031379 augenrules[1422]: No rules Jan 24 01:49:06.034250 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 01:49:06.046631 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 01:49:06.058190 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 24 01:49:06.061091 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 01:49:06.063248 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 01:49:06.067134 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 01:49:06.194235 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 01:49:06.227393 systemd-networkd[1432]: lo: Link UP Jan 24 01:49:06.227834 systemd-networkd[1432]: lo: Gained carrier Jan 24 01:49:06.229159 systemd-networkd[1432]: Enumeration completed Jan 24 01:49:06.229472 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 01:49:06.242269 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 01:49:06.443445 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 01:49:06.445230 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 01:49:06.469414 systemd-resolved[1390]: Positive Trust Anchors: Jan 24 01:49:06.469434 systemd-resolved[1390]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 01:49:06.469477 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 01:49:06.476622 systemd-resolved[1390]: Using system hostname 'srv-58cs2.gb1.brightbox.com'. Jan 24 01:49:06.479360 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 01:49:06.480283 systemd[1]: Reached target network.target - Network. Jan 24 01:49:06.480926 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 01:49:06.512037 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1450) Jan 24 01:49:06.601058 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 01:49:06.622925 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 01:49:06.626567 kernel: ACPI: button: Power Button [PWRF] Jan 24 01:49:06.625120 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 01:49:06.628149 systemd-networkd[1432]: eth0: Link UP Jan 24 01:49:06.628163 systemd-networkd[1432]: eth0: Gained carrier Jan 24 01:49:06.628184 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 24 01:49:06.637038 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 01:49:06.651127 systemd-networkd[1432]: eth0: DHCPv4 address 10.230.77.170/30, gateway 10.230.77.169 acquired from 10.230.77.169 Jan 24 01:49:06.652499 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection. Jan 24 01:49:06.669325 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 01:49:06.678446 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 01:49:06.711035 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 24 01:49:06.712088 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 01:49:06.729052 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 01:49:06.735933 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 01:49:06.736244 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 01:49:06.816405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 01:49:07.006974 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 01:49:07.009131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 01:49:07.019380 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 01:49:07.038027 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 01:49:07.077526 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 01:49:07.078665 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 01:49:07.079379 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 24 01:49:07.080254 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 01:49:07.081080 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 01:49:07.082219 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 01:49:07.083069 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 01:49:07.083835 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 01:49:07.090431 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 01:49:07.090515 systemd[1]: Reached target paths.target - Path Units. Jan 24 01:49:07.091167 systemd[1]: Reached target timers.target - Timer Units. Jan 24 01:49:07.093161 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 01:49:07.095878 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 01:49:07.105503 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 01:49:07.107934 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 01:49:07.109369 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 01:49:07.110217 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 01:49:07.110861 systemd[1]: Reached target basic.target - Basic System. Jan 24 01:49:07.111606 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 01:49:07.111665 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 01:49:07.115164 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 01:49:07.123205 lvm[1478]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Jan 24 01:49:07.128253 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 01:49:07.133830 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 01:49:07.143147 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 01:49:07.146752 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 01:49:07.147506 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 01:49:07.153228 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 01:49:07.157740 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 01:49:07.161218 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 01:49:07.171252 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 01:49:07.270727 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 01:49:07.272358 jq[1483]: false Jan 24 01:49:07.273260 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 01:49:07.275412 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 01:49:07.284261 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 01:49:07.289660 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 01:49:07.294650 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 01:49:07.296115 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 24 01:49:07.309228 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 01:49:07.315481 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 01:49:07.315806 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 01:49:07.334216 extend-filesystems[1485]: Found loop4 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found loop5 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found loop6 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found loop7 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found vda Jan 24 01:49:07.336867 extend-filesystems[1485]: Found vda1 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found vda2 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found vda3 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found usr Jan 24 01:49:07.336867 extend-filesystems[1485]: Found vda4 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found vda6 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found vda7 Jan 24 01:49:07.336867 extend-filesystems[1485]: Found vda9 Jan 24 01:49:07.336867 extend-filesystems[1485]: Checking size of /dev/vda9 Jan 24 01:49:07.373185 extend-filesystems[1485]: Resized partition /dev/vda9 Jan 24 01:49:07.350079 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 01:49:07.349819 dbus-daemon[1481]: [system] SELinux support is enabled Jan 24 01:49:07.375875 extend-filesystems[1514]: resize2fs 1.47.1 (20-May-2024) Jan 24 01:49:07.381234 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 24 01:49:07.363431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 01:49:07.381446 jq[1493]: true Jan 24 01:49:07.363481 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 24 01:49:07.364316 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 01:49:07.364344 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 01:49:07.388563 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1432 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 01:49:07.406456 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1430) Jan 24 01:49:07.410305 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 01:49:07.410847 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 01:49:07.422644 jq[1507]: true Jan 24 01:49:07.438128 update_engine[1491]: I20260124 01:49:07.435151 1491 main.cc:92] Flatcar Update Engine starting Jan 24 01:49:07.438714 tar[1497]: linux-amd64/LICENSE Jan 24 01:49:07.438714 tar[1497]: linux-amd64/helm Jan 24 01:49:07.455796 systemd[1]: Started update-engine.service - Update Engine. Jan 24 01:49:07.478241 update_engine[1491]: I20260124 01:49:07.472362 1491 update_check_scheduler.cc:74] Next update check in 11m59s Jan 24 01:49:07.467307 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 01:49:07.468868 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 01:49:07.470017 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 24 01:49:07.541200 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button) Jan 24 01:49:07.541242 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 01:49:07.547097 systemd-logind[1490]: New seat seat0. Jan 24 01:49:07.552903 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 01:49:08.886694 systemd-resolved[1390]: Clock change detected. Flushing caches. Jan 24 01:49:08.895128 systemd-timesyncd[1416]: Contacted time server 212.71.233.44:123 (0.flatcar.pool.ntp.org). Jan 24 01:49:08.906732 bash[1539]: Updated "/home/core/.ssh/authorized_keys" Jan 24 01:49:08.895465 systemd-timesyncd[1416]: Initial clock synchronization to Sat 2026-01-24 01:49:08.886506 UTC. Jan 24 01:49:08.908263 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 01:49:08.920537 systemd[1]: Starting sshkeys.service... Jan 24 01:49:08.939416 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 01:49:08.939612 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 24 01:49:08.947604 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 24 01:49:08.943739 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1520 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 01:49:08.952615 systemd[1]: Starting polkit.service - Authorization Manager... Jan 24 01:49:08.963940 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Jan 24 01:49:09.026466 extend-filesystems[1514]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 01:49:09.026466 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 24 01:49:09.026466 extend-filesystems[1514]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 24 01:49:09.004756 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 01:49:09.037899 extend-filesystems[1485]: Resized filesystem in /dev/vda9 Jan 24 01:49:09.030572 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 01:49:09.030894 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 01:49:09.079677 polkitd[1542]: Started polkitd version 121 Jan 24 01:49:09.131963 polkitd[1542]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 01:49:09.132078 polkitd[1542]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 01:49:09.135505 polkitd[1542]: Finished loading, compiling and executing 2 rules Jan 24 01:49:09.137642 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 01:49:09.137908 systemd[1]: Started polkit.service - Authorization Manager. Jan 24 01:49:09.142010 polkitd[1542]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 01:49:09.144564 systemd-networkd[1432]: eth0: Gained IPv6LL Jan 24 01:49:09.170915 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 01:49:09.171956 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 01:49:09.178149 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 01:49:09.211743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 01:49:09.221626 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 24 01:49:09.235101 systemd-hostnamed[1520]: Hostname set to (static) Jan 24 01:49:09.308588 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 01:49:09.323077 containerd[1505]: time="2026-01-24T01:49:09.322933655Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 01:49:09.375188 containerd[1505]: time="2026-01-24T01:49:09.372650490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 01:49:09.376306 containerd[1505]: time="2026-01-24T01:49:09.375588092Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 01:49:09.376306 containerd[1505]: time="2026-01-24T01:49:09.375630066Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 01:49:09.376306 containerd[1505]: time="2026-01-24T01:49:09.375653946Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 01:49:09.376306 containerd[1505]: time="2026-01-24T01:49:09.375914206Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 01:49:09.376306 containerd[1505]: time="2026-01-24T01:49:09.375940752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 01:49:09.376306 containerd[1505]: time="2026-01-24T01:49:09.376061321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 01:49:09.376306 containerd[1505]: time="2026-01-24T01:49:09.376084271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 01:49:09.376623 containerd[1505]: time="2026-01-24T01:49:09.376334632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 01:49:09.376623 containerd[1505]: time="2026-01-24T01:49:09.376358763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 01:49:09.376623 containerd[1505]: time="2026-01-24T01:49:09.376378650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 01:49:09.376623 containerd[1505]: time="2026-01-24T01:49:09.376394934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 01:49:09.376623 containerd[1505]: time="2026-01-24T01:49:09.376547195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 01:49:09.377071 containerd[1505]: time="2026-01-24T01:49:09.377032187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 01:49:09.378149 containerd[1505]: time="2026-01-24T01:49:09.377232010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 01:49:09.378149 containerd[1505]: time="2026-01-24T01:49:09.377261925Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 01:49:09.378149 containerd[1505]: time="2026-01-24T01:49:09.377388024Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 01:49:09.378149 containerd[1505]: time="2026-01-24T01:49:09.377482536Z" level=info msg="metadata content store policy set" policy=shared Jan 24 01:49:09.383715 containerd[1505]: time="2026-01-24T01:49:09.383674585Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 01:49:09.383786 containerd[1505]: time="2026-01-24T01:49:09.383761104Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 01:49:09.383861 containerd[1505]: time="2026-01-24T01:49:09.383788658Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 01:49:09.383861 containerd[1505]: time="2026-01-24T01:49:09.383852898Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 01:49:09.383937 containerd[1505]: time="2026-01-24T01:49:09.383908479Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 01:49:09.384184 containerd[1505]: time="2026-01-24T01:49:09.384123879Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.384826977Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385005656Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385031635Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385051675Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385073117Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385092490Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385112183Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385132445Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385152810Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385196563Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385217208Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385234918Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385271539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.385695 containerd[1505]: time="2026-01-24T01:49:09.385295132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385314218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385335056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385354132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385392532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385451661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385502296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385527183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385550907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385569616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385588786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385607103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385648146Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385687541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385710541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386256 containerd[1505]: time="2026-01-24T01:49:09.385728164Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 01:49:09.386758 containerd[1505]: time="2026-01-24T01:49:09.386007343Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 01:49:09.386758 containerd[1505]: time="2026-01-24T01:49:09.386051861Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 01:49:09.386758 containerd[1505]: time="2026-01-24T01:49:09.386156843Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 24 01:49:09.386758 containerd[1505]: time="2026-01-24T01:49:09.386203374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 01:49:09.386758 containerd[1505]: time="2026-01-24T01:49:09.386220958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.386758 containerd[1505]: time="2026-01-24T01:49:09.386245573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 01:49:09.386758 containerd[1505]: time="2026-01-24T01:49:09.386263147Z" level=info msg="NRI interface is disabled by configuration." Jan 24 01:49:09.386758 containerd[1505]: time="2026-01-24T01:49:09.386278689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.386723677Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.386805752Z" level=info msg="Connect containerd service" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.386867387Z" level=info msg="using legacy CRI server" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.386883905Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 01:49:09.387032 containerd[1505]: 
time="2026-01-24T01:49:09.387048009Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.388070356Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.388206818Z" level=info msg="Start subscribing containerd event" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.388278274Z" level=info msg="Start recovering state" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.388378363Z" level=info msg="Start event monitor" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.388404059Z" level=info msg="Start snapshots syncer" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.388456857Z" level=info msg="Start cni network conf syncer for default" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.388475816Z" level=info msg="Start streaming server" Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.389338529Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.389443485Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 01:49:09.387032 containerd[1505]: time="2026-01-24T01:49:09.389969133Z" level=info msg="containerd successfully booted in 0.090255s" Jan 24 01:49:09.390078 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 01:49:09.978131 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 01:49:10.024070 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 24 01:49:10.039782 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 01:49:10.069992 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 01:49:10.070512 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 01:49:10.100786 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 01:49:10.126220 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 01:49:10.138849 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 01:49:10.147793 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 01:49:10.149583 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 01:49:10.265203 tar[1497]: linux-amd64/README.md Jan 24 01:49:10.281744 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 01:49:10.653555 systemd-networkd[1432]: eth0: Ignoring DHCPv6 address 2a02:1348:179:936a:24:19ff:fee6:4daa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:936a:24:19ff:fee6:4daa/64 assigned by NDisc. Jan 24 01:49:10.653568 systemd-networkd[1432]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 24 01:49:11.046743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 01:49:11.063016 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 01:49:11.884857 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 01:49:11.896668 systemd[1]: Started sshd@0-10.230.77.170:22-20.161.92.111:38674.service - OpenSSH per-connection server daemon (20.161.92.111:38674). 
Jan 24 01:49:12.083994 kubelet[1606]: E0124 01:49:12.083777 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 01:49:12.085912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 01:49:12.086216 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 01:49:12.086737 systemd[1]: kubelet.service: Consumed 2.125s CPU time. Jan 24 01:49:12.472764 sshd[1613]: Accepted publickey for core from 20.161.92.111 port 38674 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:49:12.475686 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:49:12.491182 systemd-logind[1490]: New session 1 of user core. Jan 24 01:49:12.493578 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 01:49:12.505783 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 01:49:12.531900 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 01:49:12.540828 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 01:49:12.555317 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 01:49:12.700322 systemd[1620]: Queued start job for default target default.target. Jan 24 01:49:12.709924 systemd[1620]: Created slice app.slice - User Application Slice. Jan 24 01:49:12.709971 systemd[1620]: Reached target paths.target - Paths. Jan 24 01:49:12.709993 systemd[1620]: Reached target timers.target - Timers. Jan 24 01:49:12.712329 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 24 01:49:12.729347 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 01:49:12.729578 systemd[1620]: Reached target sockets.target - Sockets. Jan 24 01:49:12.729603 systemd[1620]: Reached target basic.target - Basic System. Jan 24 01:49:12.729690 systemd[1620]: Reached target default.target - Main User Target. Jan 24 01:49:12.729758 systemd[1620]: Startup finished in 164ms. Jan 24 01:49:12.729909 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 01:49:12.741515 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 01:49:13.163694 systemd[1]: Started sshd@1-10.230.77.170:22-20.161.92.111:51130.service - OpenSSH per-connection server daemon (20.161.92.111:51130). Jan 24 01:49:13.736368 sshd[1632]: Accepted publickey for core from 20.161.92.111 port 51130 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:49:13.738496 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:49:13.746365 systemd-logind[1490]: New session 2 of user core. Jan 24 01:49:13.756468 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 01:49:14.142554 sshd[1632]: pam_unix(sshd:session): session closed for user core Jan 24 01:49:14.147321 systemd[1]: sshd@1-10.230.77.170:22-20.161.92.111:51130.service: Deactivated successfully. Jan 24 01:49:14.149668 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 01:49:14.150689 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit. Jan 24 01:49:14.152065 systemd-logind[1490]: Removed session 2. Jan 24 01:49:14.247892 systemd[1]: Started sshd@2-10.230.77.170:22-20.161.92.111:51146.service - OpenSSH per-connection server daemon (20.161.92.111:51146). 
Jan 24 01:49:14.808695 sshd[1639]: Accepted publickey for core from 20.161.92.111 port 51146 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:49:14.810778 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:49:14.817024 systemd-logind[1490]: New session 3 of user core. Jan 24 01:49:14.826483 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 01:49:15.184476 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 24 01:49:15.192258 systemd-logind[1490]: New session 4 of user core. Jan 24 01:49:15.197601 login[1594]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 24 01:49:15.207504 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 01:49:15.216801 sshd[1639]: pam_unix(sshd:session): session closed for user core Jan 24 01:49:15.217132 systemd-logind[1490]: New session 5 of user core. Jan 24 01:49:15.226489 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 01:49:15.227193 systemd[1]: sshd@2-10.230.77.170:22-20.161.92.111:51146.service: Deactivated successfully. Jan 24 01:49:15.230646 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 01:49:15.232241 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit. Jan 24 01:49:15.236934 systemd-logind[1490]: Removed session 3. 
Jan 24 01:49:15.545050 coreos-metadata[1480]: Jan 24 01:49:15.544 WARN failed to locate config-drive, using the metadata service API instead Jan 24 01:49:15.569985 coreos-metadata[1480]: Jan 24 01:49:15.569 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 24 01:49:15.578084 coreos-metadata[1480]: Jan 24 01:49:15.578 INFO Fetch failed with 404: resource not found Jan 24 01:49:15.578084 coreos-metadata[1480]: Jan 24 01:49:15.578 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 24 01:49:15.578673 coreos-metadata[1480]: Jan 24 01:49:15.578 INFO Fetch successful Jan 24 01:49:15.578904 coreos-metadata[1480]: Jan 24 01:49:15.578 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 24 01:49:15.594152 coreos-metadata[1480]: Jan 24 01:49:15.594 INFO Fetch successful Jan 24 01:49:15.594498 coreos-metadata[1480]: Jan 24 01:49:15.594 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 24 01:49:15.609789 coreos-metadata[1480]: Jan 24 01:49:15.609 INFO Fetch successful Jan 24 01:49:15.610088 coreos-metadata[1480]: Jan 24 01:49:15.610 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 24 01:49:15.624930 coreos-metadata[1480]: Jan 24 01:49:15.624 INFO Fetch successful Jan 24 01:49:15.625183 coreos-metadata[1480]: Jan 24 01:49:15.625 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 24 01:49:15.643027 coreos-metadata[1480]: Jan 24 01:49:15.642 INFO Fetch successful Jan 24 01:49:15.677976 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 01:49:15.678871 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 24 01:49:16.199439 coreos-metadata[1546]: Jan 24 01:49:16.199 WARN failed to locate config-drive, using the metadata service API instead Jan 24 01:49:16.221140 coreos-metadata[1546]: Jan 24 01:49:16.221 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 24 01:49:16.244603 coreos-metadata[1546]: Jan 24 01:49:16.244 INFO Fetch successful Jan 24 01:49:16.244924 coreos-metadata[1546]: Jan 24 01:49:16.244 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 24 01:49:16.269727 coreos-metadata[1546]: Jan 24 01:49:16.269 INFO Fetch successful Jan 24 01:49:16.272233 unknown[1546]: wrote ssh authorized keys file for user: core Jan 24 01:49:16.292027 update-ssh-keys[1679]: Updated "/home/core/.ssh/authorized_keys" Jan 24 01:49:16.292957 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 01:49:16.295468 systemd[1]: Finished sshkeys.service. Jan 24 01:49:16.298600 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 01:49:16.303411 systemd[1]: Startup finished in 1.911s (kernel) + 17.070s (initrd) + 12.390s (userspace) = 31.372s. Jan 24 01:49:22.336683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 01:49:22.345463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 01:49:22.549141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 01:49:22.566743 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 01:49:22.627694 kubelet[1690]: E0124 01:49:22.627237 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 01:49:22.633047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 01:49:22.633460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 01:49:25.332548 systemd[1]: Started sshd@3-10.230.77.170:22-20.161.92.111:52288.service - OpenSSH per-connection server daemon (20.161.92.111:52288). Jan 24 01:49:25.896692 sshd[1697]: Accepted publickey for core from 20.161.92.111 port 52288 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:49:25.899364 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:49:25.906419 systemd-logind[1490]: New session 6 of user core. Jan 24 01:49:25.916390 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 01:49:26.303093 sshd[1697]: pam_unix(sshd:session): session closed for user core Jan 24 01:49:26.307328 systemd[1]: sshd@3-10.230.77.170:22-20.161.92.111:52288.service: Deactivated successfully. Jan 24 01:49:26.309698 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 01:49:26.311632 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit. Jan 24 01:49:26.313107 systemd-logind[1490]: Removed session 6. Jan 24 01:49:26.406525 systemd[1]: Started sshd@4-10.230.77.170:22-20.161.92.111:52300.service - OpenSSH per-connection server daemon (20.161.92.111:52300). 
Jan 24 01:49:26.964663 sshd[1704]: Accepted publickey for core from 20.161.92.111 port 52300 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:49:26.966885 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:49:26.975097 systemd-logind[1490]: New session 7 of user core. Jan 24 01:49:26.980416 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 01:49:27.363586 sshd[1704]: pam_unix(sshd:session): session closed for user core Jan 24 01:49:27.367251 systemd[1]: sshd@4-10.230.77.170:22-20.161.92.111:52300.service: Deactivated successfully. Jan 24 01:49:27.369427 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 01:49:27.371540 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. Jan 24 01:49:27.373089 systemd-logind[1490]: Removed session 7. Jan 24 01:49:27.466450 systemd[1]: Started sshd@5-10.230.77.170:22-20.161.92.111:52304.service - OpenSSH per-connection server daemon (20.161.92.111:52304). Jan 24 01:49:28.032412 sshd[1711]: Accepted publickey for core from 20.161.92.111 port 52304 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:49:28.034456 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:49:28.042379 systemd-logind[1490]: New session 8 of user core. Jan 24 01:49:28.047400 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 01:49:28.435740 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 24 01:49:28.440815 systemd[1]: sshd@5-10.230.77.170:22-20.161.92.111:52304.service: Deactivated successfully. Jan 24 01:49:28.442982 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 01:49:28.443859 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. Jan 24 01:49:28.445696 systemd-logind[1490]: Removed session 8. 
Jan 24 01:49:28.552636 systemd[1]: Started sshd@6-10.230.77.170:22-20.161.92.111:52320.service - OpenSSH per-connection server daemon (20.161.92.111:52320). Jan 24 01:49:29.112412 sshd[1718]: Accepted publickey for core from 20.161.92.111 port 52320 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:49:29.114525 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:49:29.122909 systemd-logind[1490]: New session 9 of user core. Jan 24 01:49:29.129375 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 01:49:29.440088 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 01:49:29.440567 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 01:49:29.463658 sudo[1721]: pam_unix(sudo:session): session closed for user root Jan 24 01:49:29.554014 sshd[1718]: pam_unix(sshd:session): session closed for user core Jan 24 01:49:29.558987 systemd[1]: sshd@6-10.230.77.170:22-20.161.92.111:52320.service: Deactivated successfully. Jan 24 01:49:29.561029 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 01:49:29.561937 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. Jan 24 01:49:29.563328 systemd-logind[1490]: Removed session 9. Jan 24 01:49:29.654739 systemd[1]: Started sshd@7-10.230.77.170:22-20.161.92.111:52328.service - OpenSSH per-connection server daemon (20.161.92.111:52328). Jan 24 01:49:30.230846 sshd[1726]: Accepted publickey for core from 20.161.92.111 port 52328 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:49:30.232939 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:49:30.240359 systemd-logind[1490]: New session 10 of user core. Jan 24 01:49:30.247419 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 24 01:49:30.547813 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 01:49:30.549070 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 01:49:30.554819 sudo[1730]: pam_unix(sudo:session): session closed for user root Jan 24 01:49:30.562997 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 01:49:30.563535 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 01:49:30.581614 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 01:49:30.585839 auditctl[1733]: No rules Jan 24 01:49:30.586364 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 01:49:30.586653 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 01:49:30.594747 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 01:49:30.628405 augenrules[1751]: No rules Jan 24 01:49:30.630087 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 01:49:30.631853 sudo[1729]: pam_unix(sudo:session): session closed for user root Jan 24 01:49:30.721683 sshd[1726]: pam_unix(sshd:session): session closed for user core Jan 24 01:49:30.724951 systemd[1]: sshd@7-10.230.77.170:22-20.161.92.111:52328.service: Deactivated successfully. Jan 24 01:49:30.726929 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 01:49:30.729008 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. Jan 24 01:49:30.730624 systemd-logind[1490]: Removed session 10. Jan 24 01:49:30.825559 systemd[1]: Started sshd@8-10.230.77.170:22-20.161.92.111:52344.service - OpenSSH per-connection server daemon (20.161.92.111:52344). 
Jan 24 01:49:31.405228 sshd[1759]: Accepted publickey for core from 20.161.92.111 port 52344 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:49:31.407770 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:49:31.417506 systemd-logind[1490]: New session 11 of user core. Jan 24 01:49:31.430513 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 01:49:31.722763 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 01:49:31.723295 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 01:49:32.310743 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 01:49:32.310752 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 01:49:32.883642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 01:49:32.895494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 01:49:32.948028 dockerd[1778]: time="2026-01-24T01:49:32.947907656Z" level=info msg="Starting up" Jan 24 01:49:33.193444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 01:49:33.204666 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 01:49:33.311315 kubelet[1796]: E0124 01:49:33.310148 1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 01:49:33.316148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 01:49:33.316440 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 01:49:33.367215 dockerd[1778]: time="2026-01-24T01:49:33.365647964Z" level=info msg="Loading containers: start." Jan 24 01:49:33.523252 kernel: Initializing XFRM netlink socket Jan 24 01:49:33.642346 systemd-networkd[1432]: docker0: Link UP Jan 24 01:49:33.664297 dockerd[1778]: time="2026-01-24T01:49:33.664252200Z" level=info msg="Loading containers: done." Jan 24 01:49:33.690016 dockerd[1778]: time="2026-01-24T01:49:33.689396141Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 01:49:33.690016 dockerd[1778]: time="2026-01-24T01:49:33.689556039Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 01:49:33.690016 dockerd[1778]: time="2026-01-24T01:49:33.689697162Z" level=info msg="Daemon has completed initialization" Jan 24 01:49:33.745741 dockerd[1778]: time="2026-01-24T01:49:33.745648099Z" level=info msg="API listen on /run/docker.sock" Jan 24 01:49:33.746177 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 24 01:49:34.914054 containerd[1505]: time="2026-01-24T01:49:34.912737252Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 24 01:49:35.658691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779744886.mount: Deactivated successfully. Jan 24 01:49:37.772256 containerd[1505]: time="2026-01-24T01:49:37.771201532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:37.773697 containerd[1505]: time="2026-01-24T01:49:37.773608459Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114720" Jan 24 01:49:37.775158 containerd[1505]: time="2026-01-24T01:49:37.775098083Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:37.780039 containerd[1505]: time="2026-01-24T01:49:37.779977604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:37.782152 containerd[1505]: time="2026-01-24T01:49:37.781030490Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.868140378s" Jan 24 01:49:37.782152 containerd[1505]: time="2026-01-24T01:49:37.781112208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 24 01:49:37.782473 containerd[1505]: 
time="2026-01-24T01:49:37.782333668Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 24 01:49:40.681479 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 24 01:49:40.951184 containerd[1505]: time="2026-01-24T01:49:40.951017061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:40.952538 containerd[1505]: time="2026-01-24T01:49:40.952432018Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016789" Jan 24 01:49:40.953406 containerd[1505]: time="2026-01-24T01:49:40.953370826Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:40.957445 containerd[1505]: time="2026-01-24T01:49:40.957384125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:40.960256 containerd[1505]: time="2026-01-24T01:49:40.959082112Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 3.176695151s" Jan 24 01:49:40.960256 containerd[1505]: time="2026-01-24T01:49:40.959137994Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 24 01:49:40.960768 containerd[1505]: time="2026-01-24T01:49:40.960722454Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 24 01:49:42.806700 containerd[1505]: time="2026-01-24T01:49:42.806605517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:42.808376 containerd[1505]: time="2026-01-24T01:49:42.808121990Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158110" Jan 24 01:49:42.809306 containerd[1505]: time="2026-01-24T01:49:42.809260077Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:42.815211 containerd[1505]: time="2026-01-24T01:49:42.813467047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:42.815211 containerd[1505]: time="2026-01-24T01:49:42.815047307Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.854269142s" Jan 24 01:49:42.815211 containerd[1505]: time="2026-01-24T01:49:42.815084585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 24 01:49:42.815948 containerd[1505]: time="2026-01-24T01:49:42.815889404Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 24 01:49:43.321705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 24 01:49:43.329427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 01:49:43.738903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 01:49:43.752613 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 01:49:43.894684 kubelet[2010]: E0124 01:49:43.894578 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 01:49:43.898966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 01:49:43.899283 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 01:49:44.547776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863462308.mount: Deactivated successfully. 
Jan 24 01:49:45.471051 containerd[1505]: time="2026-01-24T01:49:45.469884201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:45.472010 containerd[1505]: time="2026-01-24T01:49:45.471961338Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930104" Jan 24 01:49:45.473033 containerd[1505]: time="2026-01-24T01:49:45.472975607Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:45.475904 containerd[1505]: time="2026-01-24T01:49:45.475828714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:45.477463 containerd[1505]: time="2026-01-24T01:49:45.477276840Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.661190574s" Jan 24 01:49:45.477463 containerd[1505]: time="2026-01-24T01:49:45.477320474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 24 01:49:45.479597 containerd[1505]: time="2026-01-24T01:49:45.479566856Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 24 01:49:46.007270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount374742712.mount: Deactivated successfully. 
Jan 24 01:49:47.557023 containerd[1505]: time="2026-01-24T01:49:47.556940773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:47.558620 containerd[1505]: time="2026-01-24T01:49:47.558565352Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Jan 24 01:49:47.560471 containerd[1505]: time="2026-01-24T01:49:47.559784593Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:47.563910 containerd[1505]: time="2026-01-24T01:49:47.563873919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:47.566236 containerd[1505]: time="2026-01-24T01:49:47.566157289Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.086439926s" Jan 24 01:49:47.566325 containerd[1505]: time="2026-01-24T01:49:47.566240696Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 24 01:49:47.566902 containerd[1505]: time="2026-01-24T01:49:47.566869695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 01:49:48.441822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305531116.mount: Deactivated successfully. 
Jan 24 01:49:48.448891 containerd[1505]: time="2026-01-24T01:49:48.447691413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:48.448891 containerd[1505]: time="2026-01-24T01:49:48.448845781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 24 01:49:48.449262 containerd[1505]: time="2026-01-24T01:49:48.449230436Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:48.452419 containerd[1505]: time="2026-01-24T01:49:48.452386824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:48.453852 containerd[1505]: time="2026-01-24T01:49:48.453795667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 886.876579ms" Jan 24 01:49:48.453980 containerd[1505]: time="2026-01-24T01:49:48.453848667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 01:49:48.454798 containerd[1505]: time="2026-01-24T01:49:48.454765544Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 24 01:49:49.022950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1654406730.mount: Deactivated successfully. 
Jan 24 01:49:52.666346 containerd[1505]: time="2026-01-24T01:49:52.666249287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:52.668889 containerd[1505]: time="2026-01-24T01:49:52.668797172Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926235" Jan 24 01:49:52.670342 containerd[1505]: time="2026-01-24T01:49:52.670264570Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:52.674199 containerd[1505]: time="2026-01-24T01:49:52.673949965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:49:52.676473 containerd[1505]: time="2026-01-24T01:49:52.675817828Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.221003212s" Jan 24 01:49:52.676473 containerd[1505]: time="2026-01-24T01:49:52.675888599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 24 01:49:53.935776 update_engine[1491]: I20260124 01:49:53.934468 1491 update_attempter.cc:509] Updating boot flags... Jan 24 01:49:53.949815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 01:49:53.964281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 24 01:49:54.087297 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2168) Jan 24 01:49:54.304461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 01:49:54.327669 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 01:49:54.356211 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2171) Jan 24 01:49:54.490536 kubelet[2179]: E0124 01:49:54.490460 2179 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 01:49:54.494675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 01:49:54.495123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 01:49:59.265064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 01:49:59.272516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 01:49:59.309043 systemd[1]: Reloading requested from client PID 2194 ('systemctl') (unit session-11.scope)... Jan 24 01:49:59.309122 systemd[1]: Reloading... Jan 24 01:49:59.527659 zram_generator::config[2233]: No configuration found. Jan 24 01:49:59.649692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 01:49:59.759568 systemd[1]: Reloading finished in 449 ms. 
Jan 24 01:49:59.835402 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 01:49:59.835548 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 01:49:59.836081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 01:49:59.851674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 01:50:00.007039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 01:50:00.018646 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 01:50:00.118146 kubelet[2301]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 01:50:00.118146 kubelet[2301]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 01:50:00.118146 kubelet[2301]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 01:50:00.120668 kubelet[2301]: I0124 01:50:00.120352 2301 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 01:50:00.805194 kubelet[2301]: I0124 01:50:00.804520 2301 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 01:50:00.805194 kubelet[2301]: I0124 01:50:00.804580 2301 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 01:50:00.805525 kubelet[2301]: I0124 01:50:00.805504 2301 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 01:50:00.840623 kubelet[2301]: I0124 01:50:00.840575 2301 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 01:50:00.841421 kubelet[2301]: E0124 01:50:00.841350 2301 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.77.170:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 01:50:00.859657 kubelet[2301]: E0124 01:50:00.859594 2301 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 01:50:00.859891 kubelet[2301]: I0124 01:50:00.859868 2301 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 01:50:00.872492 kubelet[2301]: I0124 01:50:00.872461 2301 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 01:50:00.876370 kubelet[2301]: I0124 01:50:00.876311 2301 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 01:50:00.879503 kubelet[2301]: I0124 01:50:00.876485 2301 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-58cs2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 01:50:00.880224 kubelet[2301]: I0124 01:50:00.879895 2301 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 
01:50:00.880224 kubelet[2301]: I0124 01:50:00.879929 2301 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 01:50:00.881577 kubelet[2301]: I0124 01:50:00.881186 2301 state_mem.go:36] "Initialized new in-memory state store" Jan 24 01:50:00.886274 kubelet[2301]: I0124 01:50:00.886248 2301 kubelet.go:480] "Attempting to sync node with API server" Jan 24 01:50:00.886444 kubelet[2301]: I0124 01:50:00.886421 2301 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 01:50:00.886600 kubelet[2301]: I0124 01:50:00.886580 2301 kubelet.go:386] "Adding apiserver pod source" Jan 24 01:50:00.886741 kubelet[2301]: I0124 01:50:00.886722 2301 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 01:50:00.895057 kubelet[2301]: E0124 01:50:00.894882 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.77.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-58cs2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 01:50:00.910796 kubelet[2301]: I0124 01:50:00.910728 2301 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 01:50:00.911904 kubelet[2301]: I0124 01:50:00.911781 2301 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 01:50:00.916171 kubelet[2301]: W0124 01:50:00.915096 2301 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 24 01:50:00.919389 kubelet[2301]: E0124 01:50:00.919341 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.77.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 01:50:00.927323 kubelet[2301]: I0124 01:50:00.927287 2301 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 01:50:00.927565 kubelet[2301]: I0124 01:50:00.927545 2301 server.go:1289] "Started kubelet" Jan 24 01:50:00.930801 kubelet[2301]: I0124 01:50:00.930778 2301 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 01:50:00.938140 kubelet[2301]: E0124 01:50:00.933624 2301 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.77.170:6443/api/v1/namespaces/default/events\": dial tcp 10.230.77.170:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-58cs2.gb1.brightbox.com.188d87abe20315f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-58cs2.gb1.brightbox.com,UID:srv-58cs2.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-58cs2.gb1.brightbox.com,},FirstTimestamp:2026-01-24 01:50:00.927483383 +0000 UTC m=+0.903377571,LastTimestamp:2026-01-24 01:50:00.927483383 +0000 UTC m=+0.903377571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-58cs2.gb1.brightbox.com,}" Jan 24 01:50:00.940322 kubelet[2301]: I0124 01:50:00.938563 2301 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 01:50:00.942461 kubelet[2301]: I0124 01:50:00.942432 2301 server.go:317] "Adding debug handlers to kubelet server" Jan 24 
01:50:00.944837 kubelet[2301]: I0124 01:50:00.943570 2301 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 01:50:00.950717 kubelet[2301]: E0124 01:50:00.945456 2301 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-58cs2.gb1.brightbox.com\" not found" Jan 24 01:50:00.950717 kubelet[2301]: I0124 01:50:00.948357 2301 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 01:50:00.950717 kubelet[2301]: I0124 01:50:00.949159 2301 reconciler.go:26] "Reconciler: start to sync state" Jan 24 01:50:00.950717 kubelet[2301]: E0124 01:50:00.949354 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.77.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-58cs2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.77.170:6443: connect: connection refused" interval="200ms" Jan 24 01:50:00.950717 kubelet[2301]: E0124 01:50:00.949485 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.77.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 01:50:00.950717 kubelet[2301]: I0124 01:50:00.950076 2301 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 01:50:00.951497 kubelet[2301]: I0124 01:50:00.951472 2301 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 01:50:00.952761 kubelet[2301]: I0124 01:50:00.952732 2301 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 01:50:00.953892 kubelet[2301]: I0124 01:50:00.953783 2301 factory.go:223] Registration of the systemd container factory 
successfully Jan 24 01:50:00.953999 kubelet[2301]: I0124 01:50:00.953964 2301 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 01:50:00.956555 kubelet[2301]: E0124 01:50:00.956525 2301 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 01:50:00.957335 kubelet[2301]: I0124 01:50:00.957312 2301 factory.go:223] Registration of the containerd container factory successfully Jan 24 01:50:00.988240 kubelet[2301]: I0124 01:50:00.987567 2301 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 01:50:00.988240 kubelet[2301]: I0124 01:50:00.987592 2301 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 01:50:00.988240 kubelet[2301]: I0124 01:50:00.987632 2301 state_mem.go:36] "Initialized new in-memory state store" Jan 24 01:50:00.989955 kubelet[2301]: I0124 01:50:00.989906 2301 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 01:50:00.991946 kubelet[2301]: I0124 01:50:00.991922 2301 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 01:50:00.992067 kubelet[2301]: I0124 01:50:00.992047 2301 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 01:50:00.992298 kubelet[2301]: I0124 01:50:00.992272 2301 policy_none.go:49] "None policy: Start" Jan 24 01:50:00.992375 kubelet[2301]: I0124 01:50:00.992313 2301 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 01:50:00.992375 kubelet[2301]: I0124 01:50:00.992341 2301 state_mem.go:35] "Initializing new in-memory state store" Jan 24 01:50:00.992751 kubelet[2301]: I0124 01:50:00.992272 2301 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 01:50:00.992806 kubelet[2301]: I0124 01:50:00.992746 2301 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 01:50:00.992875 kubelet[2301]: E0124 01:50:00.992837 2301 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 01:50:00.998709 kubelet[2301]: E0124 01:50:00.997519 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.77.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 01:50:01.005437 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 01:50:01.016298 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 01:50:01.021107 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 01:50:01.033896 kubelet[2301]: E0124 01:50:01.033002 2301 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 01:50:01.033896 kubelet[2301]: I0124 01:50:01.033312 2301 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 01:50:01.033896 kubelet[2301]: I0124 01:50:01.033448 2301 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 01:50:01.034086 kubelet[2301]: I0124 01:50:01.034035 2301 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 01:50:01.037238 kubelet[2301]: E0124 01:50:01.036913 2301 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 01:50:01.037238 kubelet[2301]: E0124 01:50:01.037100 2301 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-58cs2.gb1.brightbox.com\" not found" Jan 24 01:50:01.126905 systemd[1]: Created slice kubepods-burstable-pod11bb0ba02ac01736057fcc4b6fbe93e1.slice - libcontainer container kubepods-burstable-pod11bb0ba02ac01736057fcc4b6fbe93e1.slice. Jan 24 01:50:01.136988 kubelet[2301]: I0124 01:50:01.136940 2301 kubelet_node_status.go:75] "Attempting to register node" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.138436 kubelet[2301]: E0124 01:50:01.137390 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.77.170:6443/api/v1/nodes\": dial tcp 10.230.77.170:6443: connect: connection refused" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.142964 kubelet[2301]: E0124 01:50:01.142937 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-58cs2.gb1.brightbox.com\" not found" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.148681 systemd[1]: Created slice kubepods-burstable-podeff92030516f055d832acad82314f372.slice - libcontainer container kubepods-burstable-podeff92030516f055d832acad82314f372.slice. 
Jan 24 01:50:01.150228 kubelet[2301]: I0124 01:50:01.149689 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11bb0ba02ac01736057fcc4b6fbe93e1-ca-certs\") pod \"kube-apiserver-srv-58cs2.gb1.brightbox.com\" (UID: \"11bb0ba02ac01736057fcc4b6fbe93e1\") " pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.150228 kubelet[2301]: I0124 01:50:01.149731 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11bb0ba02ac01736057fcc4b6fbe93e1-k8s-certs\") pod \"kube-apiserver-srv-58cs2.gb1.brightbox.com\" (UID: \"11bb0ba02ac01736057fcc4b6fbe93e1\") " pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.150228 kubelet[2301]: E0124 01:50:01.149692 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.77.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-58cs2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.77.170:6443: connect: connection refused" interval="400ms" Jan 24 01:50:01.150228 kubelet[2301]: I0124 01:50:01.149775 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11bb0ba02ac01736057fcc4b6fbe93e1-usr-share-ca-certificates\") pod \"kube-apiserver-srv-58cs2.gb1.brightbox.com\" (UID: \"11bb0ba02ac01736057fcc4b6fbe93e1\") " pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.150228 kubelet[2301]: I0124 01:50:01.149814 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-ca-certs\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " 
pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.150530 kubelet[2301]: I0124 01:50:01.149844 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-flexvolume-dir\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.150530 kubelet[2301]: I0124 01:50:01.149918 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-k8s-certs\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.150530 kubelet[2301]: I0124 01:50:01.149948 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-kubeconfig\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.150530 kubelet[2301]: I0124 01:50:01.149976 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/842eaa6f705bf94a4307856581bf9699-kubeconfig\") pod \"kube-scheduler-srv-58cs2.gb1.brightbox.com\" (UID: \"842eaa6f705bf94a4307856581bf9699\") " pod="kube-system/kube-scheduler-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.150530 kubelet[2301]: I0124 01:50:01.150028 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.152999 kubelet[2301]: E0124 01:50:01.152725 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-58cs2.gb1.brightbox.com\" not found" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.155982 systemd[1]: Created slice kubepods-burstable-pod842eaa6f705bf94a4307856581bf9699.slice - libcontainer container kubepods-burstable-pod842eaa6f705bf94a4307856581bf9699.slice. Jan 24 01:50:01.161091 kubelet[2301]: E0124 01:50:01.161048 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-58cs2.gb1.brightbox.com\" not found" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.341211 kubelet[2301]: I0124 01:50:01.341174 2301 kubelet_node_status.go:75] "Attempting to register node" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.341842 kubelet[2301]: E0124 01:50:01.341767 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.77.170:6443/api/v1/nodes\": dial tcp 10.230.77.170:6443: connect: connection refused" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.446620 containerd[1505]: time="2026-01-24T01:50:01.445671261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-58cs2.gb1.brightbox.com,Uid:11bb0ba02ac01736057fcc4b6fbe93e1,Namespace:kube-system,Attempt:0,}" Jan 24 01:50:01.460952 containerd[1505]: time="2026-01-24T01:50:01.460553897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-58cs2.gb1.brightbox.com,Uid:eff92030516f055d832acad82314f372,Namespace:kube-system,Attempt:0,}" Jan 24 01:50:01.463245 containerd[1505]: 
time="2026-01-24T01:50:01.463178576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-58cs2.gb1.brightbox.com,Uid:842eaa6f705bf94a4307856581bf9699,Namespace:kube-system,Attempt:0,}" Jan 24 01:50:01.550718 kubelet[2301]: E0124 01:50:01.550663 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.77.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-58cs2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.77.170:6443: connect: connection refused" interval="800ms" Jan 24 01:50:01.746302 kubelet[2301]: I0124 01:50:01.745478 2301 kubelet_node_status.go:75] "Attempting to register node" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.746302 kubelet[2301]: E0124 01:50:01.745932 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.77.170:6443/api/v1/nodes\": dial tcp 10.230.77.170:6443: connect: connection refused" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:01.783989 kubelet[2301]: E0124 01:50:01.783905 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.77.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-58cs2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 01:50:01.887904 kubelet[2301]: E0124 01:50:01.887793 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.77.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 01:50:02.020053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318570238.mount: Deactivated successfully. 
Jan 24 01:50:02.023040 kubelet[2301]: E0124 01:50:02.022824 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.77.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 01:50:02.027189 containerd[1505]: time="2026-01-24T01:50:02.027006376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 01:50:02.029425 containerd[1505]: time="2026-01-24T01:50:02.029180421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 24 01:50:02.033196 containerd[1505]: time="2026-01-24T01:50:02.031894739Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 01:50:02.035103 containerd[1505]: time="2026-01-24T01:50:02.035071419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 01:50:02.036548 containerd[1505]: time="2026-01-24T01:50:02.036509185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 01:50:02.036721 containerd[1505]: time="2026-01-24T01:50:02.036690280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 01:50:02.038077 containerd[1505]: time="2026-01-24T01:50:02.038040899Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: 
active requests=0, bytes read=0" Jan 24 01:50:02.038304 containerd[1505]: time="2026-01-24T01:50:02.038272828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 01:50:02.041602 containerd[1505]: time="2026-01-24T01:50:02.041568633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.750385ms" Jan 24 01:50:02.045400 containerd[1505]: time="2026-01-24T01:50:02.044868279Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.188184ms" Jan 24 01:50:02.048951 containerd[1505]: time="2026-01-24T01:50:02.048910494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 585.663528ms" Jan 24 01:50:02.195434 kubelet[2301]: E0124 01:50:02.195381 2301 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.77.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 24 01:50:02.271536 containerd[1505]: time="2026-01-24T01:50:02.269820672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:50:02.271536 containerd[1505]: time="2026-01-24T01:50:02.269927971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:50:02.271536 containerd[1505]: time="2026-01-24T01:50:02.269950457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:02.271536 containerd[1505]: time="2026-01-24T01:50:02.270104928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:02.287201 containerd[1505]: time="2026-01-24T01:50:02.286261239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:50:02.287201 containerd[1505]: time="2026-01-24T01:50:02.286484199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:50:02.287201 containerd[1505]: time="2026-01-24T01:50:02.286737193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:02.287485 containerd[1505]: time="2026-01-24T01:50:02.287339941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:02.296052 containerd[1505]: time="2026-01-24T01:50:02.295584557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:50:02.296052 containerd[1505]: time="2026-01-24T01:50:02.295674947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:50:02.296052 containerd[1505]: time="2026-01-24T01:50:02.295693805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:02.296052 containerd[1505]: time="2026-01-24T01:50:02.295804774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:02.313228 systemd[1]: Started cri-containerd-0bb382b5c6e3c513de317252efdc6f3c8dbd523c3eb48d3730ba2f38390edf35.scope - libcontainer container 0bb382b5c6e3c513de317252efdc6f3c8dbd523c3eb48d3730ba2f38390edf35. Jan 24 01:50:02.343376 systemd[1]: Started cri-containerd-16a896a0628ebf1dad4a212663b81d51dac9e5c049f435962865a2be029a7ca6.scope - libcontainer container 16a896a0628ebf1dad4a212663b81d51dac9e5c049f435962865a2be029a7ca6. Jan 24 01:50:02.351308 kubelet[2301]: E0124 01:50:02.351234 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.77.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-58cs2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.77.170:6443: connect: connection refused" interval="1.6s" Jan 24 01:50:02.370360 systemd[1]: Started cri-containerd-07bc08b4b4850dbdbada965bf178af9b8f88c0a003b79c69152eb62970d5352f.scope - libcontainer container 07bc08b4b4850dbdbada965bf178af9b8f88c0a003b79c69152eb62970d5352f. 
Jan 24 01:50:02.463544 containerd[1505]: time="2026-01-24T01:50:02.463327293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-58cs2.gb1.brightbox.com,Uid:11bb0ba02ac01736057fcc4b6fbe93e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"16a896a0628ebf1dad4a212663b81d51dac9e5c049f435962865a2be029a7ca6\"" Jan 24 01:50:02.477719 containerd[1505]: time="2026-01-24T01:50:02.477369791Z" level=info msg="CreateContainer within sandbox \"16a896a0628ebf1dad4a212663b81d51dac9e5c049f435962865a2be029a7ca6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 01:50:02.479557 containerd[1505]: time="2026-01-24T01:50:02.479486884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-58cs2.gb1.brightbox.com,Uid:842eaa6f705bf94a4307856581bf9699,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bb382b5c6e3c513de317252efdc6f3c8dbd523c3eb48d3730ba2f38390edf35\"" Jan 24 01:50:02.484608 containerd[1505]: time="2026-01-24T01:50:02.484406508Z" level=info msg="CreateContainer within sandbox \"0bb382b5c6e3c513de317252efdc6f3c8dbd523c3eb48d3730ba2f38390edf35\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 01:50:02.493272 containerd[1505]: time="2026-01-24T01:50:02.492844061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-58cs2.gb1.brightbox.com,Uid:eff92030516f055d832acad82314f372,Namespace:kube-system,Attempt:0,} returns sandbox id \"07bc08b4b4850dbdbada965bf178af9b8f88c0a003b79c69152eb62970d5352f\"" Jan 24 01:50:02.514950 containerd[1505]: time="2026-01-24T01:50:02.514904620Z" level=info msg="CreateContainer within sandbox \"07bc08b4b4850dbdbada965bf178af9b8f88c0a003b79c69152eb62970d5352f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 01:50:02.518412 containerd[1505]: time="2026-01-24T01:50:02.518366438Z" level=info msg="CreateContainer within sandbox 
\"16a896a0628ebf1dad4a212663b81d51dac9e5c049f435962865a2be029a7ca6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2e3cde639acfe91516476b901aa28e087febca9aa30d96004ecbc38816b9d5d3\"" Jan 24 01:50:02.519236 containerd[1505]: time="2026-01-24T01:50:02.519048630Z" level=info msg="StartContainer for \"2e3cde639acfe91516476b901aa28e087febca9aa30d96004ecbc38816b9d5d3\"" Jan 24 01:50:02.521693 containerd[1505]: time="2026-01-24T01:50:02.521594098Z" level=info msg="CreateContainer within sandbox \"0bb382b5c6e3c513de317252efdc6f3c8dbd523c3eb48d3730ba2f38390edf35\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d184f0bd1de6a342615bb157a74bc8af8c203c74e8f314d19ef8a20aa14a552\"" Jan 24 01:50:02.524130 containerd[1505]: time="2026-01-24T01:50:02.522893537Z" level=info msg="StartContainer for \"6d184f0bd1de6a342615bb157a74bc8af8c203c74e8f314d19ef8a20aa14a552\"" Jan 24 01:50:02.545792 containerd[1505]: time="2026-01-24T01:50:02.545737681Z" level=info msg="CreateContainer within sandbox \"07bc08b4b4850dbdbada965bf178af9b8f88c0a003b79c69152eb62970d5352f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b9d4e5afd63c91dc5c3662c3732f799a1d4c9eb8103da639c125357c3d1fc60\"" Jan 24 01:50:02.547159 containerd[1505]: time="2026-01-24T01:50:02.547129151Z" level=info msg="StartContainer for \"3b9d4e5afd63c91dc5c3662c3732f799a1d4c9eb8103da639c125357c3d1fc60\"" Jan 24 01:50:02.549520 kubelet[2301]: I0124 01:50:02.549494 2301 kubelet_node_status.go:75] "Attempting to register node" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:02.550201 kubelet[2301]: E0124 01:50:02.550132 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.77.170:6443/api/v1/nodes\": dial tcp 10.230.77.170:6443: connect: connection refused" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:02.569900 systemd[1]: Started 
cri-containerd-2e3cde639acfe91516476b901aa28e087febca9aa30d96004ecbc38816b9d5d3.scope - libcontainer container 2e3cde639acfe91516476b901aa28e087febca9aa30d96004ecbc38816b9d5d3. Jan 24 01:50:02.582392 systemd[1]: Started cri-containerd-6d184f0bd1de6a342615bb157a74bc8af8c203c74e8f314d19ef8a20aa14a552.scope - libcontainer container 6d184f0bd1de6a342615bb157a74bc8af8c203c74e8f314d19ef8a20aa14a552. Jan 24 01:50:02.618386 systemd[1]: Started cri-containerd-3b9d4e5afd63c91dc5c3662c3732f799a1d4c9eb8103da639c125357c3d1fc60.scope - libcontainer container 3b9d4e5afd63c91dc5c3662c3732f799a1d4c9eb8103da639c125357c3d1fc60. Jan 24 01:50:02.688234 containerd[1505]: time="2026-01-24T01:50:02.688152047Z" level=info msg="StartContainer for \"2e3cde639acfe91516476b901aa28e087febca9aa30d96004ecbc38816b9d5d3\" returns successfully" Jan 24 01:50:02.693943 containerd[1505]: time="2026-01-24T01:50:02.693906173Z" level=info msg="StartContainer for \"3b9d4e5afd63c91dc5c3662c3732f799a1d4c9eb8103da639c125357c3d1fc60\" returns successfully" Jan 24 01:50:02.728460 containerd[1505]: time="2026-01-24T01:50:02.728407371Z" level=info msg="StartContainer for \"6d184f0bd1de6a342615bb157a74bc8af8c203c74e8f314d19ef8a20aa14a552\" returns successfully" Jan 24 01:50:02.969584 kubelet[2301]: E0124 01:50:02.969520 2301 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.77.170:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.77.170:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 01:50:03.021229 kubelet[2301]: E0124 01:50:03.016551 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-58cs2.gb1.brightbox.com\" not found" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:03.027718 kubelet[2301]: E0124 01:50:03.025155 2301 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-58cs2.gb1.brightbox.com\" not found" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:03.033186 kubelet[2301]: E0124 01:50:03.031448 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-58cs2.gb1.brightbox.com\" not found" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:04.031376 kubelet[2301]: E0124 01:50:04.030269 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-58cs2.gb1.brightbox.com\" not found" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:04.032606 kubelet[2301]: E0124 01:50:04.032439 2301 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-58cs2.gb1.brightbox.com\" not found" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:04.155395 kubelet[2301]: I0124 01:50:04.155196 2301 kubelet_node_status.go:75] "Attempting to register node" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:05.659462 kubelet[2301]: E0124 01:50:05.659396 2301 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-58cs2.gb1.brightbox.com\" not found" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:05.687695 kubelet[2301]: I0124 01:50:05.687297 2301 kubelet_node_status.go:78] "Successfully registered node" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:05.749260 kubelet[2301]: I0124 01:50:05.749195 2301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:05.764467 kubelet[2301]: E0124 01:50:05.764418 2301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 
01:50:05.764467 kubelet[2301]: I0124 01:50:05.764460 2301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:05.767810 kubelet[2301]: E0124 01:50:05.767582 2301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-58cs2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:05.767810 kubelet[2301]: I0124 01:50:05.767612 2301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:05.771413 kubelet[2301]: E0124 01:50:05.771362 2301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-58cs2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:05.914023 kubelet[2301]: I0124 01:50:05.913795 2301 apiserver.go:52] "Watching apiserver" Jan 24 01:50:05.949924 kubelet[2301]: I0124 01:50:05.949837 2301 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 01:50:06.249966 kubelet[2301]: I0124 01:50:06.249798 2301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:06.252675 kubelet[2301]: E0124 01:50:06.252449 2301 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-58cs2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:07.157567 kubelet[2301]: I0124 01:50:07.157520 2301 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:07.168424 kubelet[2301]: I0124 01:50:07.168139 2301 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 01:50:07.579965 systemd[1]: Reloading requested from client PID 2593 ('systemctl') (unit session-11.scope)... Jan 24 01:50:07.580552 systemd[1]: Reloading... Jan 24 01:50:07.699640 zram_generator::config[2638]: No configuration found. Jan 24 01:50:07.887456 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 01:50:08.024504 systemd[1]: Reloading finished in 443 ms. Jan 24 01:50:08.085754 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 01:50:08.103438 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 01:50:08.103886 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 01:50:08.104023 systemd[1]: kubelet.service: Consumed 1.392s CPU time, 129.5M memory peak, 0B memory swap peak. Jan 24 01:50:08.111449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 01:50:08.304655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 01:50:08.317673 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 01:50:08.475398 kubelet[2696]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 01:50:08.475398 kubelet[2696]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 24 01:50:08.475398 kubelet[2696]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 01:50:08.476048 kubelet[2696]: I0124 01:50:08.475510 2696 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 01:50:08.496848 kubelet[2696]: I0124 01:50:08.496799 2696 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 01:50:08.496848 kubelet[2696]: I0124 01:50:08.496838 2696 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 01:50:08.498417 kubelet[2696]: I0124 01:50:08.498381 2696 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 01:50:08.502399 kubelet[2696]: I0124 01:50:08.502366 2696 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 01:50:08.515121 kubelet[2696]: I0124 01:50:08.513672 2696 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 01:50:08.527274 kubelet[2696]: E0124 01:50:08.527135 2696 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 01:50:08.527485 kubelet[2696]: I0124 01:50:08.527464 2696 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 01:50:08.537331 kubelet[2696]: I0124 01:50:08.537006 2696 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 01:50:08.538481 kubelet[2696]: I0124 01:50:08.538428 2696 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 01:50:08.538830 kubelet[2696]: I0124 01:50:08.538586 2696 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-58cs2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 01:50:08.540182 kubelet[2696]: I0124 01:50:08.539109 2696 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 
01:50:08.540289 kubelet[2696]: I0124 01:50:08.540216 2696 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 01:50:08.541567 kubelet[2696]: I0124 01:50:08.541542 2696 state_mem.go:36] "Initialized new in-memory state store" Jan 24 01:50:08.544606 kubelet[2696]: I0124 01:50:08.544569 2696 kubelet.go:480] "Attempting to sync node with API server" Jan 24 01:50:08.544606 kubelet[2696]: I0124 01:50:08.544604 2696 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 01:50:08.544755 kubelet[2696]: I0124 01:50:08.544643 2696 kubelet.go:386] "Adding apiserver pod source" Jan 24 01:50:08.544755 kubelet[2696]: I0124 01:50:08.544675 2696 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 01:50:08.559526 kubelet[2696]: I0124 01:50:08.559360 2696 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 01:50:08.560105 kubelet[2696]: I0124 01:50:08.560062 2696 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 01:50:08.573051 kubelet[2696]: I0124 01:50:08.573017 2696 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 01:50:08.578748 kubelet[2696]: I0124 01:50:08.573093 2696 server.go:1289] "Started kubelet" Jan 24 01:50:08.578748 kubelet[2696]: I0124 01:50:08.573245 2696 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 01:50:08.578748 kubelet[2696]: I0124 01:50:08.573910 2696 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 01:50:08.578748 kubelet[2696]: I0124 01:50:08.574424 2696 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 01:50:08.581565 kubelet[2696]: I0124 01:50:08.579569 2696 server.go:317] "Adding debug handlers to kubelet server" Jan 24 
01:50:08.595197 kubelet[2696]: I0124 01:50:08.595042 2696 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 01:50:08.611119 kubelet[2696]: I0124 01:50:08.610734 2696 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 01:50:08.611407 kubelet[2696]: I0124 01:50:08.611386 2696 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 01:50:08.611981 kubelet[2696]: I0124 01:50:08.611959 2696 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 01:50:08.619332 kubelet[2696]: I0124 01:50:08.619306 2696 reconciler.go:26] "Reconciler: start to sync state" Jan 24 01:50:08.624357 kubelet[2696]: E0124 01:50:08.624238 2696 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 01:50:08.635538 kubelet[2696]: I0124 01:50:08.635500 2696 factory.go:223] Registration of the containerd container factory successfully Jan 24 01:50:08.635538 kubelet[2696]: I0124 01:50:08.635532 2696 factory.go:223] Registration of the systemd container factory successfully Jan 24 01:50:08.636457 kubelet[2696]: I0124 01:50:08.635633 2696 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 01:50:08.638503 sudo[2712]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 24 01:50:08.639073 sudo[2712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 24 01:50:08.699356 kubelet[2696]: I0124 01:50:08.695254 2696 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 01:50:08.707890 kubelet[2696]: I0124 01:50:08.707744 2696 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 24 01:50:08.708901 kubelet[2696]: I0124 01:50:08.708436 2696 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 01:50:08.709060 kubelet[2696]: I0124 01:50:08.709040 2696 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 01:50:08.709314 kubelet[2696]: I0124 01:50:08.709209 2696 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 01:50:08.710890 kubelet[2696]: E0124 01:50:08.709675 2696 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 01:50:08.811755 kubelet[2696]: E0124 01:50:08.811213 2696 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 24 01:50:08.833252 kubelet[2696]: I0124 01:50:08.833218 2696 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 01:50:08.833252 kubelet[2696]: I0124 01:50:08.833244 2696 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 01:50:08.833555 kubelet[2696]: I0124 01:50:08.833270 2696 state_mem.go:36] "Initialized new in-memory state store" Jan 24 01:50:08.833555 kubelet[2696]: I0124 01:50:08.833463 2696 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 01:50:08.833555 kubelet[2696]: I0124 01:50:08.833483 2696 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 01:50:08.833555 kubelet[2696]: I0124 01:50:08.833510 2696 policy_none.go:49] "None policy: Start" Jan 24 01:50:08.833555 kubelet[2696]: I0124 01:50:08.833524 2696 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 01:50:08.833555 kubelet[2696]: I0124 01:50:08.833540 2696 state_mem.go:35] "Initializing new in-memory state store" Jan 24 01:50:08.834043 kubelet[2696]: I0124 01:50:08.833668 2696 state_mem.go:75] "Updated machine memory state" Jan 24 01:50:08.846192 kubelet[2696]: E0124 01:50:08.845889 
2696 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 01:50:08.847831 kubelet[2696]: I0124 01:50:08.847311 2696 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 01:50:08.847831 kubelet[2696]: I0124 01:50:08.847339 2696 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 01:50:08.850983 kubelet[2696]: I0124 01:50:08.850848 2696 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 01:50:08.856353 kubelet[2696]: E0124 01:50:08.856326 2696 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 01:50:08.977456 kubelet[2696]: I0124 01:50:08.977085 2696 kubelet_node_status.go:75] "Attempting to register node" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:08.988881 kubelet[2696]: I0124 01:50:08.988839 2696 kubelet_node_status.go:124] "Node was previously registered" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:08.989647 kubelet[2696]: I0124 01:50:08.989245 2696 kubelet_node_status.go:78] "Successfully registered node" node="srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.014010 kubelet[2696]: I0124 01:50:09.012470 2696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.014010 kubelet[2696]: I0124 01:50:09.012953 2696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.015738 kubelet[2696]: I0124 01:50:09.013519 2696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.025913 kubelet[2696]: I0124 01:50:09.025880 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/11bb0ba02ac01736057fcc4b6fbe93e1-k8s-certs\") pod \"kube-apiserver-srv-58cs2.gb1.brightbox.com\" (UID: \"11bb0ba02ac01736057fcc4b6fbe93e1\") " pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.026563 kubelet[2696]: I0124 01:50:09.026328 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-flexvolume-dir\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.026563 kubelet[2696]: I0124 01:50:09.026376 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/842eaa6f705bf94a4307856581bf9699-kubeconfig\") pod \"kube-scheduler-srv-58cs2.gb1.brightbox.com\" (UID: \"842eaa6f705bf94a4307856581bf9699\") " pod="kube-system/kube-scheduler-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.026563 kubelet[2696]: I0124 01:50:09.026408 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11bb0ba02ac01736057fcc4b6fbe93e1-ca-certs\") pod \"kube-apiserver-srv-58cs2.gb1.brightbox.com\" (UID: \"11bb0ba02ac01736057fcc4b6fbe93e1\") " pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.026563 kubelet[2696]: I0124 01:50:09.026434 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11bb0ba02ac01736057fcc4b6fbe93e1-usr-share-ca-certificates\") pod \"kube-apiserver-srv-58cs2.gb1.brightbox.com\" (UID: \"11bb0ba02ac01736057fcc4b6fbe93e1\") " pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.026563 kubelet[2696]: 
I0124 01:50:09.026462 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-ca-certs\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.026851 kubelet[2696]: I0124 01:50:09.026493 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-k8s-certs\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.026851 kubelet[2696]: I0124 01:50:09.026520 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-kubeconfig\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.026851 kubelet[2696]: I0124 01:50:09.026580 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eff92030516f055d832acad82314f372-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" (UID: \"eff92030516f055d832acad82314f372\") " pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.032767 kubelet[2696]: I0124 01:50:09.032179 2696 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 01:50:09.033290 
kubelet[2696]: I0124 01:50:09.032983 2696 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 01:50:09.033290 kubelet[2696]: I0124 01:50:09.033136 2696 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 01:50:09.033938 kubelet[2696]: E0124 01:50:09.033633 2696 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-58cs2.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.524236 sudo[2712]: pam_unix(sudo:session): session closed for user root Jan 24 01:50:09.555153 kubelet[2696]: I0124 01:50:09.555018 2696 apiserver.go:52] "Watching apiserver" Jan 24 01:50:09.620265 kubelet[2696]: I0124 01:50:09.620153 2696 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 01:50:09.758193 kubelet[2696]: I0124 01:50:09.755660 2696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.768146 kubelet[2696]: I0124 01:50:09.767510 2696 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 01:50:09.768600 kubelet[2696]: E0124 01:50:09.768543 2696 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-58cs2.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" Jan 24 01:50:09.804717 kubelet[2696]: I0124 01:50:09.803706 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-58cs2.gb1.brightbox.com" podStartSLOduration=0.803665452 podStartE2EDuration="803.665452ms" 
podCreationTimestamp="2026-01-24 01:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:50:09.803308219 +0000 UTC m=+1.443489753" watchObservedRunningTime="2026-01-24 01:50:09.803665452 +0000 UTC m=+1.443846979" Jan 24 01:50:09.804717 kubelet[2696]: I0124 01:50:09.803854 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-58cs2.gb1.brightbox.com" podStartSLOduration=0.80384757 podStartE2EDuration="803.84757ms" podCreationTimestamp="2026-01-24 01:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:50:09.792360222 +0000 UTC m=+1.432541760" watchObservedRunningTime="2026-01-24 01:50:09.80384757 +0000 UTC m=+1.444029148" Jan 24 01:50:11.499614 sudo[1762]: pam_unix(sudo:session): session closed for user root Jan 24 01:50:11.592683 sshd[1759]: pam_unix(sshd:session): session closed for user core Jan 24 01:50:11.600185 systemd[1]: sshd@8-10.230.77.170:22-20.161.92.111:52344.service: Deactivated successfully. Jan 24 01:50:11.603632 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 01:50:11.603970 systemd[1]: session-11.scope: Consumed 8.999s CPU time, 145.1M memory peak, 0B memory swap peak. Jan 24 01:50:11.605112 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Jan 24 01:50:11.607086 systemd-logind[1490]: Removed session 11. Jan 24 01:50:13.325176 kubelet[2696]: I0124 01:50:13.325117 2696 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 01:50:13.326140 containerd[1505]: time="2026-01-24T01:50:13.325539552Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 24 01:50:13.327202 kubelet[2696]: I0124 01:50:13.326690 2696 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 01:50:14.325323 kubelet[2696]: I0124 01:50:14.325231 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-58cs2.gb1.brightbox.com" podStartSLOduration=7.32520974 podStartE2EDuration="7.32520974s" podCreationTimestamp="2026-01-24 01:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:50:09.816143286 +0000 UTC m=+1.456324822" watchObservedRunningTime="2026-01-24 01:50:14.32520974 +0000 UTC m=+5.965391286" Jan 24 01:50:14.340089 systemd[1]: Created slice kubepods-besteffort-podcc7230d0_8c2d_4b22_b4d2_fc7de07d51df.slice - libcontainer container kubepods-besteffort-podcc7230d0_8c2d_4b22_b4d2_fc7de07d51df.slice. Jan 24 01:50:14.354858 kubelet[2696]: I0124 01:50:14.354802 2696 status_manager.go:895] "Failed to get status for pod" podUID="cc7230d0-8c2d-4b22-b4d2-fc7de07d51df" pod="kube-system/kube-proxy-lwvrr" err="pods \"kube-proxy-lwvrr\" is forbidden: User \"system:node:srv-58cs2.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-58cs2.gb1.brightbox.com' and this object" Jan 24 01:50:14.355046 kubelet[2696]: E0124 01:50:14.355009 2696 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:srv-58cs2.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-58cs2.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Jan 24 01:50:14.355131 kubelet[2696]: E0124 01:50:14.355099 2696 reflector.go:200] "Failed to watch" 
err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-58cs2.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-58cs2.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Jan 24 01:50:14.365229 kubelet[2696]: I0124 01:50:14.365176 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cc7230d0-8c2d-4b22-b4d2-fc7de07d51df-kube-proxy\") pod \"kube-proxy-lwvrr\" (UID: \"cc7230d0-8c2d-4b22-b4d2-fc7de07d51df\") " pod="kube-system/kube-proxy-lwvrr" Jan 24 01:50:14.365229 kubelet[2696]: I0124 01:50:14.365234 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc7230d0-8c2d-4b22-b4d2-fc7de07d51df-xtables-lock\") pod \"kube-proxy-lwvrr\" (UID: \"cc7230d0-8c2d-4b22-b4d2-fc7de07d51df\") " pod="kube-system/kube-proxy-lwvrr" Jan 24 01:50:14.365457 kubelet[2696]: I0124 01:50:14.365263 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h5jz\" (UniqueName: \"kubernetes.io/projected/cc7230d0-8c2d-4b22-b4d2-fc7de07d51df-kube-api-access-6h5jz\") pod \"kube-proxy-lwvrr\" (UID: \"cc7230d0-8c2d-4b22-b4d2-fc7de07d51df\") " pod="kube-system/kube-proxy-lwvrr" Jan 24 01:50:14.365457 kubelet[2696]: I0124 01:50:14.365297 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc7230d0-8c2d-4b22-b4d2-fc7de07d51df-lib-modules\") pod \"kube-proxy-lwvrr\" (UID: \"cc7230d0-8c2d-4b22-b4d2-fc7de07d51df\") " pod="kube-system/kube-proxy-lwvrr" Jan 24 01:50:14.377369 systemd[1]: Created slice 
kubepods-burstable-podb3af5ffb_b970_4918_a67e_ee602022fa1d.slice - libcontainer container kubepods-burstable-podb3af5ffb_b970_4918_a67e_ee602022fa1d.slice. Jan 24 01:50:14.465858 kubelet[2696]: I0124 01:50:14.465808 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-bpf-maps\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.465858 kubelet[2696]: I0124 01:50:14.465864 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-config-path\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.466069 kubelet[2696]: I0124 01:50:14.465894 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-host-proc-sys-net\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.466069 kubelet[2696]: I0124 01:50:14.465921 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-host-proc-sys-kernel\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.466069 kubelet[2696]: I0124 01:50:14.465990 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cni-path\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") 
" pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.466069 kubelet[2696]: I0124 01:50:14.466021 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-hostproc\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.466069 kubelet[2696]: I0124 01:50:14.466049 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-cgroup\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.467330 kubelet[2696]: I0124 01:50:14.466092 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-lib-modules\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.467330 kubelet[2696]: I0124 01:50:14.466121 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-xtables-lock\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.467330 kubelet[2696]: I0124 01:50:14.466149 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-hubble-tls\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.467472 kubelet[2696]: I0124 01:50:14.467408 2696 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-etc-cni-netd\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.467557 kubelet[2696]: I0124 01:50:14.467468 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3af5ffb-b970-4918-a67e-ee602022fa1d-clustermesh-secrets\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.468209 kubelet[2696]: I0124 01:50:14.467499 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9kt\" (UniqueName: \"kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-kube-api-access-dc9kt\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.468371 kubelet[2696]: I0124 01:50:14.468275 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-run\") pod \"cilium-lsnjs\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " pod="kube-system/cilium-lsnjs" Jan 24 01:50:14.478637 systemd[1]: Created slice kubepods-besteffort-podafac28d7_e69a_40d2_8641_7d5e2f9bc553.slice - libcontainer container kubepods-besteffort-podafac28d7_e69a_40d2_8641_7d5e2f9bc553.slice. 
Jan 24 01:50:14.569557 kubelet[2696]: I0124 01:50:14.569498 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqv4b\" (UniqueName: \"kubernetes.io/projected/afac28d7-e69a-40d2-8641-7d5e2f9bc553-kube-api-access-vqv4b\") pod \"cilium-operator-6c4d7847fc-9s2kr\" (UID: \"afac28d7-e69a-40d2-8641-7d5e2f9bc553\") " pod="kube-system/cilium-operator-6c4d7847fc-9s2kr" Jan 24 01:50:14.569829 kubelet[2696]: I0124 01:50:14.569688 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afac28d7-e69a-40d2-8641-7d5e2f9bc553-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9s2kr\" (UID: \"afac28d7-e69a-40d2-8641-7d5e2f9bc553\") " pod="kube-system/cilium-operator-6c4d7847fc-9s2kr" Jan 24 01:50:15.468292 kubelet[2696]: E0124 01:50:15.467619 2696 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.468292 kubelet[2696]: E0124 01:50:15.467825 2696 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc7230d0-8c2d-4b22-b4d2-fc7de07d51df-kube-proxy podName:cc7230d0-8c2d-4b22-b4d2-fc7de07d51df nodeName:}" failed. No retries permitted until 2026-01-24 01:50:15.967790229 +0000 UTC m=+7.607971762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/cc7230d0-8c2d-4b22-b4d2-fc7de07d51df-kube-proxy") pod "kube-proxy-lwvrr" (UID: "cc7230d0-8c2d-4b22-b4d2-fc7de07d51df") : failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.513264 kubelet[2696]: E0124 01:50:15.513128 2696 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.513264 kubelet[2696]: E0124 01:50:15.513274 2696 projected.go:194] Error preparing data for projected volume kube-api-access-6h5jz for pod kube-system/kube-proxy-lwvrr: failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.513956 kubelet[2696]: E0124 01:50:15.513404 2696 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc7230d0-8c2d-4b22-b4d2-fc7de07d51df-kube-api-access-6h5jz podName:cc7230d0-8c2d-4b22-b4d2-fc7de07d51df nodeName:}" failed. No retries permitted until 2026-01-24 01:50:16.013379401 +0000 UTC m=+7.653560932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6h5jz" (UniqueName: "kubernetes.io/projected/cc7230d0-8c2d-4b22-b4d2-fc7de07d51df-kube-api-access-6h5jz") pod "kube-proxy-lwvrr" (UID: "cc7230d0-8c2d-4b22-b4d2-fc7de07d51df") : failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.591924 kubelet[2696]: E0124 01:50:15.591505 2696 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.591924 kubelet[2696]: E0124 01:50:15.591597 2696 projected.go:194] Error preparing data for projected volume kube-api-access-dc9kt for pod kube-system/cilium-lsnjs: failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.591924 kubelet[2696]: E0124 01:50:15.591759 2696 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-kube-api-access-dc9kt podName:b3af5ffb-b970-4918-a67e-ee602022fa1d nodeName:}" failed. No retries permitted until 2026-01-24 01:50:16.091726742 +0000 UTC m=+7.731908261 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dc9kt" (UniqueName: "kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-kube-api-access-dc9kt") pod "cilium-lsnjs" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d") : failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.679204 kubelet[2696]: E0124 01:50:15.678777 2696 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.679204 kubelet[2696]: E0124 01:50:15.678839 2696 projected.go:194] Error preparing data for projected volume kube-api-access-vqv4b for pod kube-system/cilium-operator-6c4d7847fc-9s2kr: failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:15.679204 kubelet[2696]: E0124 01:50:15.679006 2696 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afac28d7-e69a-40d2-8641-7d5e2f9bc553-kube-api-access-vqv4b podName:afac28d7-e69a-40d2-8641-7d5e2f9bc553 nodeName:}" failed. No retries permitted until 2026-01-24 01:50:16.178906361 +0000 UTC m=+7.819087880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vqv4b" (UniqueName: "kubernetes.io/projected/afac28d7-e69a-40d2-8641-7d5e2f9bc553-kube-api-access-vqv4b") pod "cilium-operator-6c4d7847fc-9s2kr" (UID: "afac28d7-e69a-40d2-8641-7d5e2f9bc553") : failed to sync configmap cache: timed out waiting for the condition Jan 24 01:50:16.150114 containerd[1505]: time="2026-01-24T01:50:16.150038736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwvrr,Uid:cc7230d0-8c2d-4b22-b4d2-fc7de07d51df,Namespace:kube-system,Attempt:0,}" Jan 24 01:50:16.214134 containerd[1505]: time="2026-01-24T01:50:16.213801611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:50:16.214134 containerd[1505]: time="2026-01-24T01:50:16.213901520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:50:16.214134 containerd[1505]: time="2026-01-24T01:50:16.213935903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:16.214134 containerd[1505]: time="2026-01-24T01:50:16.214066246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:16.245364 systemd[1]: Started cri-containerd-5cc3f8f6655c049cb2f556cb6f413e72d9bd3c09248746954bf6128df674f5ce.scope - libcontainer container 5cc3f8f6655c049cb2f556cb6f413e72d9bd3c09248746954bf6128df674f5ce. Jan 24 01:50:16.284950 containerd[1505]: time="2026-01-24T01:50:16.284890625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9s2kr,Uid:afac28d7-e69a-40d2-8641-7d5e2f9bc553,Namespace:kube-system,Attempt:0,}" Jan 24 01:50:16.285737 containerd[1505]: time="2026-01-24T01:50:16.285703442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwvrr,Uid:cc7230d0-8c2d-4b22-b4d2-fc7de07d51df,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cc3f8f6655c049cb2f556cb6f413e72d9bd3c09248746954bf6128df674f5ce\"" Jan 24 01:50:16.295554 containerd[1505]: time="2026-01-24T01:50:16.295496005Z" level=info msg="CreateContainer within sandbox \"5cc3f8f6655c049cb2f556cb6f413e72d9bd3c09248746954bf6128df674f5ce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 01:50:16.322214 containerd[1505]: time="2026-01-24T01:50:16.322130948Z" level=info msg="CreateContainer within sandbox \"5cc3f8f6655c049cb2f556cb6f413e72d9bd3c09248746954bf6128df674f5ce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"7f578b76306b2536e15d7d7143c15eef9d409bb05ccbc7bc375ce57005a7a63f\"" Jan 24 01:50:16.325632 containerd[1505]: time="2026-01-24T01:50:16.325309900Z" level=info msg="StartContainer for \"7f578b76306b2536e15d7d7143c15eef9d409bb05ccbc7bc375ce57005a7a63f\"" Jan 24 01:50:16.337391 containerd[1505]: time="2026-01-24T01:50:16.337254617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:50:16.337391 containerd[1505]: time="2026-01-24T01:50:16.337345004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:50:16.337391 containerd[1505]: time="2026-01-24T01:50:16.337362299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:16.338064 containerd[1505]: time="2026-01-24T01:50:16.337840977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:16.367695 systemd[1]: Started cri-containerd-fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b.scope - libcontainer container fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b. Jan 24 01:50:16.396397 systemd[1]: Started cri-containerd-7f578b76306b2536e15d7d7143c15eef9d409bb05ccbc7bc375ce57005a7a63f.scope - libcontainer container 7f578b76306b2536e15d7d7143c15eef9d409bb05ccbc7bc375ce57005a7a63f. 
Jan 24 01:50:16.460243 containerd[1505]: time="2026-01-24T01:50:16.459874725Z" level=info msg="StartContainer for \"7f578b76306b2536e15d7d7143c15eef9d409bb05ccbc7bc375ce57005a7a63f\" returns successfully" Jan 24 01:50:16.469804 containerd[1505]: time="2026-01-24T01:50:16.469762826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9s2kr,Uid:afac28d7-e69a-40d2-8641-7d5e2f9bc553,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b\"" Jan 24 01:50:16.476181 containerd[1505]: time="2026-01-24T01:50:16.476041436Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 24 01:50:16.491029 containerd[1505]: time="2026-01-24T01:50:16.490589273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsnjs,Uid:b3af5ffb-b970-4918-a67e-ee602022fa1d,Namespace:kube-system,Attempt:0,}" Jan 24 01:50:16.530543 containerd[1505]: time="2026-01-24T01:50:16.530340584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:50:16.530543 containerd[1505]: time="2026-01-24T01:50:16.530431295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:50:16.530543 containerd[1505]: time="2026-01-24T01:50:16.530449192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:16.531094 containerd[1505]: time="2026-01-24T01:50:16.531012427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:50:16.558474 systemd[1]: Started cri-containerd-3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae.scope - libcontainer container 3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae. Jan 24 01:50:16.602564 containerd[1505]: time="2026-01-24T01:50:16.602354200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsnjs,Uid:b3af5ffb-b970-4918-a67e-ee602022fa1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\"" Jan 24 01:50:16.798062 kubelet[2696]: I0124 01:50:16.797494 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lwvrr" podStartSLOduration=2.797473384 podStartE2EDuration="2.797473384s" podCreationTimestamp="2026-01-24 01:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:50:16.79740431 +0000 UTC m=+8.437585849" watchObservedRunningTime="2026-01-24 01:50:16.797473384 +0000 UTC m=+8.437654919" Jan 24 01:50:18.278761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697412483.mount: Deactivated successfully. 
Jan 24 01:50:19.125378 containerd[1505]: time="2026-01-24T01:50:19.125319769Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:50:19.127271 containerd[1505]: time="2026-01-24T01:50:19.126640013Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 24 01:50:19.127271 containerd[1505]: time="2026-01-24T01:50:19.127205437Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:50:19.129732 containerd[1505]: time="2026-01-24T01:50:19.129685490Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.653588687s" Jan 24 01:50:19.129977 containerd[1505]: time="2026-01-24T01:50:19.129847897Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 24 01:50:19.135142 containerd[1505]: time="2026-01-24T01:50:19.134896835Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 24 01:50:19.138459 containerd[1505]: time="2026-01-24T01:50:19.138394848Z" level=info msg="CreateContainer within sandbox 
\"fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 24 01:50:19.157411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059113.mount: Deactivated successfully. Jan 24 01:50:19.170308 containerd[1505]: time="2026-01-24T01:50:19.170264115Z" level=info msg="CreateContainer within sandbox \"fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\"" Jan 24 01:50:19.171907 containerd[1505]: time="2026-01-24T01:50:19.171001415Z" level=info msg="StartContainer for \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\"" Jan 24 01:50:19.217402 systemd[1]: Started cri-containerd-1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16.scope - libcontainer container 1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16. Jan 24 01:50:19.254968 containerd[1505]: time="2026-01-24T01:50:19.254917226Z" level=info msg="StartContainer for \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\" returns successfully" Jan 24 01:50:20.815977 kubelet[2696]: I0124 01:50:20.815854 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9s2kr" podStartSLOduration=4.15522584 podStartE2EDuration="6.815832775s" podCreationTimestamp="2026-01-24 01:50:14 +0000 UTC" firstStartedPulling="2026-01-24 01:50:16.473257402 +0000 UTC m=+8.113438940" lastFinishedPulling="2026-01-24 01:50:19.133864349 +0000 UTC m=+10.774045875" observedRunningTime="2026-01-24 01:50:19.887425316 +0000 UTC m=+11.527606860" watchObservedRunningTime="2026-01-24 01:50:20.815832775 +0000 UTC m=+12.456014302" Jan 24 01:50:26.218263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111046155.mount: Deactivated successfully. 
Jan 24 01:50:29.438530 containerd[1505]: time="2026-01-24T01:50:29.438420211Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:50:29.440258 containerd[1505]: time="2026-01-24T01:50:29.440207605Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 24 01:50:29.441197 containerd[1505]: time="2026-01-24T01:50:29.440680296Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 01:50:29.443655 containerd[1505]: time="2026-01-24T01:50:29.443317199Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.308365391s" Jan 24 01:50:29.443655 containerd[1505]: time="2026-01-24T01:50:29.443377758Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 24 01:50:29.451392 containerd[1505]: time="2026-01-24T01:50:29.451351440Z" level=info msg="CreateContainer within sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 01:50:29.519057 containerd[1505]: time="2026-01-24T01:50:29.518864620Z" level=info msg="CreateContainer within sandbox 
\"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\"" Jan 24 01:50:29.520367 containerd[1505]: time="2026-01-24T01:50:29.519932055Z" level=info msg="StartContainer for \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\"" Jan 24 01:50:29.626646 systemd[1]: Started cri-containerd-df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb.scope - libcontainer container df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb. Jan 24 01:50:29.677796 containerd[1505]: time="2026-01-24T01:50:29.677725164Z" level=info msg="StartContainer for \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\" returns successfully" Jan 24 01:50:29.689273 systemd[1]: cri-containerd-df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb.scope: Deactivated successfully. Jan 24 01:50:29.979335 containerd[1505]: time="2026-01-24T01:50:29.965818937Z" level=info msg="shim disconnected" id=df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb namespace=k8s.io Jan 24 01:50:29.979335 containerd[1505]: time="2026-01-24T01:50:29.979249858Z" level=warning msg="cleaning up after shim disconnected" id=df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb namespace=k8s.io Jan 24 01:50:29.979335 containerd[1505]: time="2026-01-24T01:50:29.979286643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:50:30.503240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb-rootfs.mount: Deactivated successfully. 
Jan 24 01:50:30.898848 containerd[1505]: time="2026-01-24T01:50:30.898606170Z" level=info msg="CreateContainer within sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 01:50:30.914951 containerd[1505]: time="2026-01-24T01:50:30.914899403Z" level=info msg="CreateContainer within sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\"" Jan 24 01:50:30.916140 containerd[1505]: time="2026-01-24T01:50:30.916060466Z" level=info msg="StartContainer for \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\"" Jan 24 01:50:30.963498 systemd[1]: Started cri-containerd-430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b.scope - libcontainer container 430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b. Jan 24 01:50:31.010434 containerd[1505]: time="2026-01-24T01:50:31.010171079Z" level=info msg="StartContainer for \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\" returns successfully" Jan 24 01:50:31.026603 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 01:50:31.026997 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 01:50:31.027129 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 24 01:50:31.033665 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 01:50:31.036616 systemd[1]: cri-containerd-430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b.scope: Deactivated successfully. Jan 24 01:50:31.067979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b-rootfs.mount: Deactivated successfully. 
Jan 24 01:50:31.073866 containerd[1505]: time="2026-01-24T01:50:31.073564851Z" level=info msg="shim disconnected" id=430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b namespace=k8s.io Jan 24 01:50:31.073866 containerd[1505]: time="2026-01-24T01:50:31.073642287Z" level=warning msg="cleaning up after shim disconnected" id=430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b namespace=k8s.io Jan 24 01:50:31.073866 containerd[1505]: time="2026-01-24T01:50:31.073659586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:50:31.140840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 01:50:31.903424 containerd[1505]: time="2026-01-24T01:50:31.903238025Z" level=info msg="CreateContainer within sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 01:50:31.936878 containerd[1505]: time="2026-01-24T01:50:31.936782455Z" level=info msg="CreateContainer within sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\"" Jan 24 01:50:31.937804 containerd[1505]: time="2026-01-24T01:50:31.937575541Z" level=info msg="StartContainer for \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\"" Jan 24 01:50:31.978152 systemd[1]: run-containerd-runc-k8s.io-baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe-runc.9pxLUv.mount: Deactivated successfully. Jan 24 01:50:31.995427 systemd[1]: Started cri-containerd-baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe.scope - libcontainer container baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe. 
Jan 24 01:50:32.040092 containerd[1505]: time="2026-01-24T01:50:32.040041647Z" level=info msg="StartContainer for \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\" returns successfully" Jan 24 01:50:32.049270 systemd[1]: cri-containerd-baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe.scope: Deactivated successfully. Jan 24 01:50:32.087997 containerd[1505]: time="2026-01-24T01:50:32.087871807Z" level=info msg="shim disconnected" id=baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe namespace=k8s.io Jan 24 01:50:32.087997 containerd[1505]: time="2026-01-24T01:50:32.087996602Z" level=warning msg="cleaning up after shim disconnected" id=baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe namespace=k8s.io Jan 24 01:50:32.088688 containerd[1505]: time="2026-01-24T01:50:32.088012065Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:50:32.109678 containerd[1505]: time="2026-01-24T01:50:32.109585731Z" level=warning msg="cleanup warnings time=\"2026-01-24T01:50:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 01:50:32.908617 containerd[1505]: time="2026-01-24T01:50:32.908547314Z" level=info msg="CreateContainer within sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 01:50:32.927141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe-rootfs.mount: Deactivated successfully. 
Jan 24 01:50:32.933615 containerd[1505]: time="2026-01-24T01:50:32.933450943Z" level=info msg="CreateContainer within sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\"" Jan 24 01:50:32.936197 containerd[1505]: time="2026-01-24T01:50:32.935406148Z" level=info msg="StartContainer for \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\"" Jan 24 01:50:32.974883 systemd[1]: run-containerd-runc-k8s.io-125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37-runc.VTufgb.mount: Deactivated successfully. Jan 24 01:50:32.990347 systemd[1]: Started cri-containerd-125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37.scope - libcontainer container 125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37. Jan 24 01:50:33.027887 systemd[1]: cri-containerd-125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37.scope: Deactivated successfully. Jan 24 01:50:33.031302 containerd[1505]: time="2026-01-24T01:50:33.031067618Z" level=info msg="StartContainer for \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\" returns successfully" Jan 24 01:50:33.058020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37-rootfs.mount: Deactivated successfully. 
Jan 24 01:50:33.070231 containerd[1505]: time="2026-01-24T01:50:33.070123230Z" level=info msg="shim disconnected" id=125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37 namespace=k8s.io Jan 24 01:50:33.070542 containerd[1505]: time="2026-01-24T01:50:33.070295226Z" level=warning msg="cleaning up after shim disconnected" id=125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37 namespace=k8s.io Jan 24 01:50:33.070542 containerd[1505]: time="2026-01-24T01:50:33.070316043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:50:33.915875 containerd[1505]: time="2026-01-24T01:50:33.915716174Z" level=info msg="CreateContainer within sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 01:50:33.948398 containerd[1505]: time="2026-01-24T01:50:33.947926584Z" level=info msg="CreateContainer within sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\"" Jan 24 01:50:33.950274 containerd[1505]: time="2026-01-24T01:50:33.948815229Z" level=info msg="StartContainer for \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\"" Jan 24 01:50:33.992382 systemd[1]: Started cri-containerd-331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3.scope - libcontainer container 331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3. 
Jan 24 01:50:34.037938 containerd[1505]: time="2026-01-24T01:50:34.036247037Z" level=info msg="StartContainer for \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\" returns successfully"
Jan 24 01:50:34.287797 kubelet[2696]: I0124 01:50:34.287621 2696 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 24 01:50:34.375531 systemd[1]: Created slice kubepods-burstable-podd9d5c062_231d_4060_9f84_a3c3da8f666e.slice - libcontainer container kubepods-burstable-podd9d5c062_231d_4060_9f84_a3c3da8f666e.slice.
Jan 24 01:50:34.387414 systemd[1]: Created slice kubepods-burstable-pod73a690fe_965f_4cfc_a749_a073ec99ba3d.slice - libcontainer container kubepods-burstable-pod73a690fe_965f_4cfc_a749_a073ec99ba3d.slice.
Jan 24 01:50:34.420438 kubelet[2696]: I0124 01:50:34.420357 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9d5c062-231d-4060-9f84-a3c3da8f666e-config-volume\") pod \"coredns-674b8bbfcf-x6z7c\" (UID: \"d9d5c062-231d-4060-9f84-a3c3da8f666e\") " pod="kube-system/coredns-674b8bbfcf-x6z7c"
Jan 24 01:50:34.420438 kubelet[2696]: I0124 01:50:34.420438 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a690fe-965f-4cfc-a749-a073ec99ba3d-config-volume\") pod \"coredns-674b8bbfcf-rq47j\" (UID: \"73a690fe-965f-4cfc-a749-a073ec99ba3d\") " pod="kube-system/coredns-674b8bbfcf-rq47j"
Jan 24 01:50:34.420713 kubelet[2696]: I0124 01:50:34.420484 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nsf6\" (UniqueName: \"kubernetes.io/projected/73a690fe-965f-4cfc-a749-a073ec99ba3d-kube-api-access-9nsf6\") pod \"coredns-674b8bbfcf-rq47j\" (UID: \"73a690fe-965f-4cfc-a749-a073ec99ba3d\") " pod="kube-system/coredns-674b8bbfcf-rq47j"
Jan 24 01:50:34.420713 kubelet[2696]: I0124 01:50:34.420532 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwdf5\" (UniqueName: \"kubernetes.io/projected/d9d5c062-231d-4060-9f84-a3c3da8f666e-kube-api-access-wwdf5\") pod \"coredns-674b8bbfcf-x6z7c\" (UID: \"d9d5c062-231d-4060-9f84-a3c3da8f666e\") " pod="kube-system/coredns-674b8bbfcf-x6z7c"
Jan 24 01:50:34.694199 containerd[1505]: time="2026-01-24T01:50:34.693793132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6z7c,Uid:d9d5c062-231d-4060-9f84-a3c3da8f666e,Namespace:kube-system,Attempt:0,}"
Jan 24 01:50:34.697003 containerd[1505]: time="2026-01-24T01:50:34.696685679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rq47j,Uid:73a690fe-965f-4cfc-a749-a073ec99ba3d,Namespace:kube-system,Attempt:0,}"
Jan 24 01:50:36.781497 systemd-networkd[1432]: cilium_host: Link UP
Jan 24 01:50:36.781799 systemd-networkd[1432]: cilium_net: Link UP
Jan 24 01:50:36.782148 systemd-networkd[1432]: cilium_net: Gained carrier
Jan 24 01:50:36.782493 systemd-networkd[1432]: cilium_host: Gained carrier
Jan 24 01:50:36.782734 systemd-networkd[1432]: cilium_net: Gained IPv6LL
Jan 24 01:50:36.783024 systemd-networkd[1432]: cilium_host: Gained IPv6LL
Jan 24 01:50:36.958419 systemd-networkd[1432]: cilium_vxlan: Link UP
Jan 24 01:50:36.958431 systemd-networkd[1432]: cilium_vxlan: Gained carrier
Jan 24 01:50:37.490295 kernel: NET: Registered PF_ALG protocol family
Jan 24 01:50:38.167368 systemd-networkd[1432]: cilium_vxlan: Gained IPv6LL
Jan 24 01:50:38.559542 systemd-networkd[1432]: lxc_health: Link UP
Jan 24 01:50:38.564811 systemd-networkd[1432]: lxc_health: Gained carrier
Jan 24 01:50:38.834253 systemd-networkd[1432]: lxc982d4d34c075: Link UP
Jan 24 01:50:38.853628 kernel: eth0: renamed from tmp0df65
Jan 24 01:50:38.865909 systemd-networkd[1432]: lxc55bf80c5438a: Link UP
Jan 24 01:50:38.873138 systemd-networkd[1432]: lxc982d4d34c075: Gained carrier
Jan 24 01:50:38.873303 kernel: eth0: renamed from tmp9f85c
Jan 24 01:50:38.887536 systemd-networkd[1432]: lxc55bf80c5438a: Gained carrier
Jan 24 01:50:39.639414 systemd-networkd[1432]: lxc_health: Gained IPv6LL
Jan 24 01:50:40.280361 systemd-networkd[1432]: lxc55bf80c5438a: Gained IPv6LL
Jan 24 01:50:40.527197 kubelet[2696]: I0124 01:50:40.523527 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lsnjs" podStartSLOduration=13.683543257 podStartE2EDuration="26.523489815s" podCreationTimestamp="2026-01-24 01:50:14 +0000 UTC" firstStartedPulling="2026-01-24 01:50:16.604706528 +0000 UTC m=+8.244888060" lastFinishedPulling="2026-01-24 01:50:29.444653099 +0000 UTC m=+21.084834618" observedRunningTime="2026-01-24 01:50:34.96374892 +0000 UTC m=+26.603930454" watchObservedRunningTime="2026-01-24 01:50:40.523489815 +0000 UTC m=+32.163671348"
Jan 24 01:50:40.791481 systemd-networkd[1432]: lxc982d4d34c075: Gained IPv6LL
Jan 24 01:50:44.625050 containerd[1505]: time="2026-01-24T01:50:44.624829571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 01:50:44.625050 containerd[1505]: time="2026-01-24T01:50:44.625009293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 01:50:44.625851 containerd[1505]: time="2026-01-24T01:50:44.625028677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 01:50:44.636131 containerd[1505]: time="2026-01-24T01:50:44.630214893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 01:50:44.679939 containerd[1505]: time="2026-01-24T01:50:44.678745534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 01:50:44.679939 containerd[1505]: time="2026-01-24T01:50:44.678815972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 01:50:44.679939 containerd[1505]: time="2026-01-24T01:50:44.678832651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 01:50:44.679939 containerd[1505]: time="2026-01-24T01:50:44.678951450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 01:50:44.706417 systemd[1]: Started cri-containerd-9f85c43e5fc00237e880cb60f266c4365ecda5203ad39d5bb0c3b6a078fa3b42.scope - libcontainer container 9f85c43e5fc00237e880cb60f266c4365ecda5203ad39d5bb0c3b6a078fa3b42.
Jan 24 01:50:44.742408 systemd[1]: Started cri-containerd-0df6502af8a91176774cd33999e189ec920b99e8c0f866b213ebc2711b47bdfa.scope - libcontainer container 0df6502af8a91176774cd33999e189ec920b99e8c0f866b213ebc2711b47bdfa.
Jan 24 01:50:44.849048 containerd[1505]: time="2026-01-24T01:50:44.848869813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rq47j,Uid:73a690fe-965f-4cfc-a749-a073ec99ba3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f85c43e5fc00237e880cb60f266c4365ecda5203ad39d5bb0c3b6a078fa3b42\""
Jan 24 01:50:44.867197 containerd[1505]: time="2026-01-24T01:50:44.866936462Z" level=info msg="CreateContainer within sandbox \"9f85c43e5fc00237e880cb60f266c4365ecda5203ad39d5bb0c3b6a078fa3b42\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 24 01:50:44.902754 containerd[1505]: time="2026-01-24T01:50:44.902514675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6z7c,Uid:d9d5c062-231d-4060-9f84-a3c3da8f666e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0df6502af8a91176774cd33999e189ec920b99e8c0f866b213ebc2711b47bdfa\""
Jan 24 01:50:44.917045 containerd[1505]: time="2026-01-24T01:50:44.916938280Z" level=info msg="CreateContainer within sandbox \"0df6502af8a91176774cd33999e189ec920b99e8c0f866b213ebc2711b47bdfa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 24 01:50:44.927481 containerd[1505]: time="2026-01-24T01:50:44.927418814Z" level=info msg="CreateContainer within sandbox \"9f85c43e5fc00237e880cb60f266c4365ecda5203ad39d5bb0c3b6a078fa3b42\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c8bca1154011f5cb25556b09c04ebb683e68e83bfd168b635ecf5c4571150bd\""
Jan 24 01:50:44.930848 containerd[1505]: time="2026-01-24T01:50:44.930800457Z" level=info msg="StartContainer for \"7c8bca1154011f5cb25556b09c04ebb683e68e83bfd168b635ecf5c4571150bd\""
Jan 24 01:50:44.937951 containerd[1505]: time="2026-01-24T01:50:44.937839009Z" level=info msg="CreateContainer within sandbox \"0df6502af8a91176774cd33999e189ec920b99e8c0f866b213ebc2711b47bdfa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50c069ac884445616f4f3c11f7d52a6ada98fbf403680b59656da3c2801c93d1\""
Jan 24 01:50:44.940331 containerd[1505]: time="2026-01-24T01:50:44.939953138Z" level=info msg="StartContainer for \"50c069ac884445616f4f3c11f7d52a6ada98fbf403680b59656da3c2801c93d1\""
Jan 24 01:50:44.998415 systemd[1]: Started cri-containerd-50c069ac884445616f4f3c11f7d52a6ada98fbf403680b59656da3c2801c93d1.scope - libcontainer container 50c069ac884445616f4f3c11f7d52a6ada98fbf403680b59656da3c2801c93d1.
Jan 24 01:50:45.001854 systemd[1]: Started cri-containerd-7c8bca1154011f5cb25556b09c04ebb683e68e83bfd168b635ecf5c4571150bd.scope - libcontainer container 7c8bca1154011f5cb25556b09c04ebb683e68e83bfd168b635ecf5c4571150bd.
Jan 24 01:50:45.058763 containerd[1505]: time="2026-01-24T01:50:45.058709944Z" level=info msg="StartContainer for \"50c069ac884445616f4f3c11f7d52a6ada98fbf403680b59656da3c2801c93d1\" returns successfully"
Jan 24 01:50:45.059052 containerd[1505]: time="2026-01-24T01:50:45.058787131Z" level=info msg="StartContainer for \"7c8bca1154011f5cb25556b09c04ebb683e68e83bfd168b635ecf5c4571150bd\" returns successfully"
Jan 24 01:50:45.641238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709788404.mount: Deactivated successfully.
Jan 24 01:50:45.992604 kubelet[2696]: I0124 01:50:45.992273 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rq47j" podStartSLOduration=31.992225957 podStartE2EDuration="31.992225957s" podCreationTimestamp="2026-01-24 01:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:50:45.990070506 +0000 UTC m=+37.630252050" watchObservedRunningTime="2026-01-24 01:50:45.992225957 +0000 UTC m=+37.632407495"
Jan 24 01:50:46.997202 kubelet[2696]: I0124 01:50:46.996499 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x6z7c" podStartSLOduration=32.996479016 podStartE2EDuration="32.996479016s" podCreationTimestamp="2026-01-24 01:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:50:46.004764619 +0000 UTC m=+37.644946145" watchObservedRunningTime="2026-01-24 01:50:46.996479016 +0000 UTC m=+38.636660548"
Jan 24 01:51:25.858554 systemd[1]: Started sshd@9-10.230.77.170:22-20.161.92.111:46376.service - OpenSSH per-connection server daemon (20.161.92.111:46376).
Jan 24 01:51:26.465606 sshd[4091]: Accepted publickey for core from 20.161.92.111 port 46376 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:51:26.467735 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:51:26.478342 systemd-logind[1490]: New session 12 of user core.
Jan 24 01:51:26.487469 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 24 01:51:27.444508 sshd[4091]: pam_unix(sshd:session): session closed for user core
Jan 24 01:51:27.449802 systemd[1]: sshd@9-10.230.77.170:22-20.161.92.111:46376.service: Deactivated successfully.
Jan 24 01:51:27.452674 systemd[1]: session-12.scope: Deactivated successfully.
Jan 24 01:51:27.453732 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit.
Jan 24 01:51:27.455282 systemd-logind[1490]: Removed session 12.
Jan 24 01:51:32.555604 systemd[1]: Started sshd@10-10.230.77.170:22-20.161.92.111:39832.service - OpenSSH per-connection server daemon (20.161.92.111:39832).
Jan 24 01:51:33.139832 sshd[4105]: Accepted publickey for core from 20.161.92.111 port 39832 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:51:33.142441 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:51:33.151413 systemd-logind[1490]: New session 13 of user core.
Jan 24 01:51:33.157416 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 24 01:51:33.642127 sshd[4105]: pam_unix(sshd:session): session closed for user core
Jan 24 01:51:33.648845 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit.
Jan 24 01:51:33.650403 systemd[1]: sshd@10-10.230.77.170:22-20.161.92.111:39832.service: Deactivated successfully.
Jan 24 01:51:33.653198 systemd[1]: session-13.scope: Deactivated successfully.
Jan 24 01:51:33.654687 systemd-logind[1490]: Removed session 13.
Jan 24 01:51:38.750750 systemd[1]: Started sshd@11-10.230.77.170:22-20.161.92.111:39834.service - OpenSSH per-connection server daemon (20.161.92.111:39834).
Jan 24 01:51:39.352216 sshd[4120]: Accepted publickey for core from 20.161.92.111 port 39834 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:51:39.354126 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:51:39.365240 systemd-logind[1490]: New session 14 of user core.
Jan 24 01:51:39.373392 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 24 01:51:39.869995 sshd[4120]: pam_unix(sshd:session): session closed for user core
Jan 24 01:51:39.878553 systemd[1]: sshd@11-10.230.77.170:22-20.161.92.111:39834.service: Deactivated successfully.
Jan 24 01:51:39.882538 systemd[1]: session-14.scope: Deactivated successfully.
Jan 24 01:51:39.883990 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit.
Jan 24 01:51:39.886749 systemd-logind[1490]: Removed session 14.
Jan 24 01:51:44.974600 systemd[1]: Started sshd@12-10.230.77.170:22-20.161.92.111:46544.service - OpenSSH per-connection server daemon (20.161.92.111:46544).
Jan 24 01:51:45.545199 sshd[4135]: Accepted publickey for core from 20.161.92.111 port 46544 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:51:45.548352 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:51:45.557709 systemd-logind[1490]: New session 15 of user core.
Jan 24 01:51:45.566519 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 24 01:51:46.038367 sshd[4135]: pam_unix(sshd:session): session closed for user core
Jan 24 01:51:46.043729 systemd[1]: sshd@12-10.230.77.170:22-20.161.92.111:46544.service: Deactivated successfully.
Jan 24 01:51:46.046822 systemd[1]: session-15.scope: Deactivated successfully.
Jan 24 01:51:46.048111 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit.
Jan 24 01:51:46.049727 systemd-logind[1490]: Removed session 15.
Jan 24 01:51:46.146602 systemd[1]: Started sshd@13-10.230.77.170:22-20.161.92.111:46552.service - OpenSSH per-connection server daemon (20.161.92.111:46552).
Jan 24 01:51:46.713639 sshd[4149]: Accepted publickey for core from 20.161.92.111 port 46552 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:51:46.715790 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:51:46.721969 systemd-logind[1490]: New session 16 of user core.
Jan 24 01:51:46.733644 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 24 01:51:47.288067 sshd[4149]: pam_unix(sshd:session): session closed for user core
Jan 24 01:51:47.294332 systemd[1]: sshd@13-10.230.77.170:22-20.161.92.111:46552.service: Deactivated successfully.
Jan 24 01:51:47.298471 systemd[1]: session-16.scope: Deactivated successfully.
Jan 24 01:51:47.300108 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit.
Jan 24 01:51:47.301933 systemd-logind[1490]: Removed session 16.
Jan 24 01:51:47.395497 systemd[1]: Started sshd@14-10.230.77.170:22-20.161.92.111:46558.service - OpenSSH per-connection server daemon (20.161.92.111:46558).
Jan 24 01:51:47.980066 sshd[4162]: Accepted publickey for core from 20.161.92.111 port 46558 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:51:47.982150 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:51:47.988965 systemd-logind[1490]: New session 17 of user core.
Jan 24 01:51:47.991415 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 24 01:51:48.482836 sshd[4162]: pam_unix(sshd:session): session closed for user core
Jan 24 01:51:48.487345 systemd[1]: sshd@14-10.230.77.170:22-20.161.92.111:46558.service: Deactivated successfully.
Jan 24 01:51:48.490699 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 01:51:48.493451 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit.
Jan 24 01:51:48.495232 systemd-logind[1490]: Removed session 17.
Jan 24 01:51:53.590566 systemd[1]: Started sshd@15-10.230.77.170:22-20.161.92.111:33022.service - OpenSSH per-connection server daemon (20.161.92.111:33022).
Jan 24 01:51:54.155970 sshd[4174]: Accepted publickey for core from 20.161.92.111 port 33022 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:51:54.158394 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:51:54.167136 systemd-logind[1490]: New session 18 of user core.
Jan 24 01:51:54.177394 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 01:51:54.646602 sshd[4174]: pam_unix(sshd:session): session closed for user core
Jan 24 01:51:54.651459 systemd[1]: sshd@15-10.230.77.170:22-20.161.92.111:33022.service: Deactivated successfully.
Jan 24 01:51:54.654755 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 01:51:54.656257 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit.
Jan 24 01:51:54.657961 systemd-logind[1490]: Removed session 18.
Jan 24 01:51:59.755617 systemd[1]: Started sshd@16-10.230.77.170:22-20.161.92.111:33036.service - OpenSSH per-connection server daemon (20.161.92.111:33036).
Jan 24 01:52:00.318044 sshd[4187]: Accepted publickey for core from 20.161.92.111 port 33036 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:00.320155 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:00.325883 systemd-logind[1490]: New session 19 of user core.
Jan 24 01:52:00.335444 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 01:52:00.806271 sshd[4187]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:00.811411 systemd[1]: sshd@16-10.230.77.170:22-20.161.92.111:33036.service: Deactivated successfully.
Jan 24 01:52:00.813684 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 01:52:00.814827 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit.
Jan 24 01:52:00.816437 systemd-logind[1490]: Removed session 19.
Jan 24 01:52:00.906448 systemd[1]: Started sshd@17-10.230.77.170:22-20.161.92.111:33046.service - OpenSSH per-connection server daemon (20.161.92.111:33046).
Jan 24 01:52:01.481465 sshd[4200]: Accepted publickey for core from 20.161.92.111 port 33046 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:01.483738 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:01.491679 systemd-logind[1490]: New session 20 of user core.
Jan 24 01:52:01.499389 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 24 01:52:02.355368 sshd[4200]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:02.364100 systemd[1]: sshd@17-10.230.77.170:22-20.161.92.111:33046.service: Deactivated successfully.
Jan 24 01:52:02.367757 systemd[1]: session-20.scope: Deactivated successfully.
Jan 24 01:52:02.369256 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit.
Jan 24 01:52:02.370534 systemd-logind[1490]: Removed session 20.
Jan 24 01:52:02.455426 systemd[1]: Started sshd@18-10.230.77.170:22-20.161.92.111:60290.service - OpenSSH per-connection server daemon (20.161.92.111:60290).
Jan 24 01:52:03.034997 sshd[4211]: Accepted publickey for core from 20.161.92.111 port 60290 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:03.037124 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:03.043716 systemd-logind[1490]: New session 21 of user core.
Jan 24 01:52:03.051420 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 24 01:52:04.772345 sshd[4211]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:04.776941 systemd[1]: sshd@18-10.230.77.170:22-20.161.92.111:60290.service: Deactivated successfully.
Jan 24 01:52:04.779118 systemd[1]: session-21.scope: Deactivated successfully.
Jan 24 01:52:04.781017 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit.
Jan 24 01:52:04.782806 systemd-logind[1490]: Removed session 21.
Jan 24 01:52:04.875539 systemd[1]: Started sshd@19-10.230.77.170:22-20.161.92.111:60294.service - OpenSSH per-connection server daemon (20.161.92.111:60294).
Jan 24 01:52:05.450985 sshd[4230]: Accepted publickey for core from 20.161.92.111 port 60294 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:05.453454 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:05.460981 systemd-logind[1490]: New session 22 of user core.
Jan 24 01:52:05.468449 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 24 01:52:06.140361 sshd[4230]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:06.146557 systemd[1]: sshd@19-10.230.77.170:22-20.161.92.111:60294.service: Deactivated successfully.
Jan 24 01:52:06.148922 systemd[1]: session-22.scope: Deactivated successfully.
Jan 24 01:52:06.150026 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit.
Jan 24 01:52:06.152442 systemd-logind[1490]: Removed session 22.
Jan 24 01:52:06.251542 systemd[1]: Started sshd@20-10.230.77.170:22-20.161.92.111:60296.service - OpenSSH per-connection server daemon (20.161.92.111:60296).
Jan 24 01:52:06.834136 sshd[4241]: Accepted publickey for core from 20.161.92.111 port 60296 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:06.836131 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:06.842336 systemd-logind[1490]: New session 23 of user core.
Jan 24 01:52:06.850424 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 24 01:52:07.338970 sshd[4241]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:07.346799 systemd[1]: sshd@20-10.230.77.170:22-20.161.92.111:60296.service: Deactivated successfully.
Jan 24 01:52:07.349753 systemd[1]: session-23.scope: Deactivated successfully.
Jan 24 01:52:07.350979 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit.
Jan 24 01:52:07.353071 systemd-logind[1490]: Removed session 23.
Jan 24 01:52:12.440506 systemd[1]: Started sshd@21-10.230.77.170:22-20.161.92.111:51370.service - OpenSSH per-connection server daemon (20.161.92.111:51370).
Jan 24 01:52:13.018591 sshd[4256]: Accepted publickey for core from 20.161.92.111 port 51370 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:13.020733 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:13.026879 systemd-logind[1490]: New session 24 of user core.
Jan 24 01:52:13.035364 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 01:52:13.495185 sshd[4256]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:13.501190 systemd[1]: sshd@21-10.230.77.170:22-20.161.92.111:51370.service: Deactivated successfully.
Jan 24 01:52:13.503747 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 01:52:13.505304 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit.
Jan 24 01:52:13.507118 systemd-logind[1490]: Removed session 24.
Jan 24 01:52:18.602804 systemd[1]: Started sshd@22-10.230.77.170:22-20.161.92.111:51382.service - OpenSSH per-connection server daemon (20.161.92.111:51382).
Jan 24 01:52:19.176384 sshd[4272]: Accepted publickey for core from 20.161.92.111 port 51382 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:19.178706 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:19.186442 systemd-logind[1490]: New session 25 of user core.
Jan 24 01:52:19.196477 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 24 01:52:19.687697 sshd[4272]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:19.695233 systemd[1]: sshd@22-10.230.77.170:22-20.161.92.111:51382.service: Deactivated successfully.
Jan 24 01:52:19.701473 systemd[1]: session-25.scope: Deactivated successfully.
Jan 24 01:52:19.705105 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit.
Jan 24 01:52:19.707944 systemd-logind[1490]: Removed session 25.
Jan 24 01:52:24.794523 systemd[1]: Started sshd@23-10.230.77.170:22-20.161.92.111:37628.service - OpenSSH per-connection server daemon (20.161.92.111:37628).
Jan 24 01:52:25.367538 sshd[4285]: Accepted publickey for core from 20.161.92.111 port 37628 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:25.369809 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:25.377200 systemd-logind[1490]: New session 26 of user core.
Jan 24 01:52:25.389470 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 24 01:52:25.848914 sshd[4285]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:25.854143 systemd[1]: sshd@23-10.230.77.170:22-20.161.92.111:37628.service: Deactivated successfully.
Jan 24 01:52:25.857216 systemd[1]: session-26.scope: Deactivated successfully.
Jan 24 01:52:25.858311 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit.
Jan 24 01:52:25.860025 systemd-logind[1490]: Removed session 26.
Jan 24 01:52:25.956555 systemd[1]: Started sshd@24-10.230.77.170:22-20.161.92.111:37642.service - OpenSSH per-connection server daemon (20.161.92.111:37642).
Jan 24 01:52:26.532670 sshd[4298]: Accepted publickey for core from 20.161.92.111 port 37642 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:26.534829 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:26.541059 systemd-logind[1490]: New session 27 of user core.
Jan 24 01:52:26.548683 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 24 01:52:28.700447 containerd[1505]: time="2026-01-24T01:52:28.700353073Z" level=info msg="StopContainer for \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\" with timeout 30 (s)"
Jan 24 01:52:28.704750 containerd[1505]: time="2026-01-24T01:52:28.704035666Z" level=info msg="Stop container \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\" with signal terminated"
Jan 24 01:52:28.747104 systemd[1]: cri-containerd-1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16.scope: Deactivated successfully.
Jan 24 01:52:28.766431 containerd[1505]: time="2026-01-24T01:52:28.765227571Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 01:52:28.777866 containerd[1505]: time="2026-01-24T01:52:28.777791803Z" level=info msg="StopContainer for \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\" with timeout 2 (s)"
Jan 24 01:52:28.778691 containerd[1505]: time="2026-01-24T01:52:28.778631934Z" level=info msg="Stop container \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\" with signal terminated"
Jan 24 01:52:28.796501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16-rootfs.mount: Deactivated successfully.
Jan 24 01:52:28.799643 systemd-networkd[1432]: lxc_health: Link DOWN
Jan 24 01:52:28.799655 systemd-networkd[1432]: lxc_health: Lost carrier
Jan 24 01:52:28.808213 containerd[1505]: time="2026-01-24T01:52:28.807459575Z" level=info msg="shim disconnected" id=1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16 namespace=k8s.io
Jan 24 01:52:28.808213 containerd[1505]: time="2026-01-24T01:52:28.807655907Z" level=warning msg="cleaning up after shim disconnected" id=1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16 namespace=k8s.io
Jan 24 01:52:28.808213 containerd[1505]: time="2026-01-24T01:52:28.807679124Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 01:52:28.834535 systemd[1]: cri-containerd-331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3.scope: Deactivated successfully.
Jan 24 01:52:28.836678 systemd[1]: cri-containerd-331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3.scope: Consumed 10.164s CPU time.
Jan 24 01:52:28.840188 containerd[1505]: time="2026-01-24T01:52:28.840019271Z" level=info msg="StopContainer for \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\" returns successfully"
Jan 24 01:52:28.841721 containerd[1505]: time="2026-01-24T01:52:28.841566136Z" level=info msg="StopPodSandbox for \"fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b\""
Jan 24 01:52:28.841721 containerd[1505]: time="2026-01-24T01:52:28.841651896Z" level=info msg="Container to stop \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 01:52:28.845792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b-shm.mount: Deactivated successfully.
Jan 24 01:52:28.860121 systemd[1]: cri-containerd-fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b.scope: Deactivated successfully.
Jan 24 01:52:28.880085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3-rootfs.mount: Deactivated successfully.
Jan 24 01:52:28.896733 containerd[1505]: time="2026-01-24T01:52:28.896290469Z" level=info msg="shim disconnected" id=331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3 namespace=k8s.io
Jan 24 01:52:28.896733 containerd[1505]: time="2026-01-24T01:52:28.896361136Z" level=warning msg="cleaning up after shim disconnected" id=331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3 namespace=k8s.io
Jan 24 01:52:28.896733 containerd[1505]: time="2026-01-24T01:52:28.896377322Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 01:52:28.924471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b-rootfs.mount: Deactivated successfully.
Jan 24 01:52:28.931422 containerd[1505]: time="2026-01-24T01:52:28.931097868Z" level=warning msg="cleanup warnings time=\"2026-01-24T01:52:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 24 01:52:28.931890 containerd[1505]: time="2026-01-24T01:52:28.931403297Z" level=info msg="shim disconnected" id=fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b namespace=k8s.io
Jan 24 01:52:28.931890 containerd[1505]: time="2026-01-24T01:52:28.931858435Z" level=warning msg="cleaning up after shim disconnected" id=fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b namespace=k8s.io
Jan 24 01:52:28.931890 containerd[1505]: time="2026-01-24T01:52:28.931877514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 01:52:28.937026 containerd[1505]: time="2026-01-24T01:52:28.936973909Z" level=info msg="StopContainer for \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\" returns successfully"
Jan 24 01:52:28.938136 containerd[1505]: time="2026-01-24T01:52:28.937983080Z" level=info msg="StopPodSandbox for \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\""
Jan 24 01:52:28.938136 containerd[1505]: time="2026-01-24T01:52:28.938027123Z" level=info msg="Container to stop \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 01:52:28.938136 containerd[1505]: time="2026-01-24T01:52:28.938047809Z" level=info msg="Container to stop \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 01:52:28.938136 containerd[1505]: time="2026-01-24T01:52:28.938065449Z" level=info msg="Container to stop \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 01:52:28.938136 containerd[1505]: time="2026-01-24T01:52:28.938081207Z" level=info msg="Container to stop \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 01:52:28.938136 containerd[1505]: time="2026-01-24T01:52:28.938096709Z" level=info msg="Container to stop \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 24 01:52:28.950094 kubelet[2696]: E0124 01:52:28.950010 2696 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 24 01:52:28.958724 systemd[1]: cri-containerd-3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae.scope: Deactivated successfully.
Jan 24 01:52:28.984518 containerd[1505]: time="2026-01-24T01:52:28.984453456Z" level=info msg="TearDown network for sandbox \"fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b\" successfully"
Jan 24 01:52:28.984518 containerd[1505]: time="2026-01-24T01:52:28.984504703Z" level=info msg="StopPodSandbox for \"fd9705f7591fcf28af86eea921047759f89b12a0de3ae038274da700ce91b97b\" returns successfully"
Jan 24 01:52:29.013937 containerd[1505]: time="2026-01-24T01:52:29.012663544Z" level=info msg="shim disconnected" id=3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae namespace=k8s.io
Jan 24 01:52:29.015276 containerd[1505]: time="2026-01-24T01:52:29.014920250Z" level=warning msg="cleaning up after shim disconnected" id=3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae namespace=k8s.io
Jan 24 01:52:29.015276 containerd[1505]: time="2026-01-24T01:52:29.015062442Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 01:52:29.039249 containerd[1505]: time="2026-01-24T01:52:29.039154320Z" level=info msg="TearDown network for sandbox \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" successfully"
Jan 24 01:52:29.039249 containerd[1505]: time="2026-01-24T01:52:29.039215585Z" level=info msg="StopPodSandbox for \"3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae\" returns successfully"
Jan 24 01:52:29.060376 kubelet[2696]: I0124 01:52:29.058985 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqv4b\" (UniqueName: \"kubernetes.io/projected/afac28d7-e69a-40d2-8641-7d5e2f9bc553-kube-api-access-vqv4b\") pod \"afac28d7-e69a-40d2-8641-7d5e2f9bc553\" (UID: \"afac28d7-e69a-40d2-8641-7d5e2f9bc553\") "
Jan 24 01:52:29.060376 kubelet[2696]: I0124 01:52:29.060304 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afac28d7-e69a-40d2-8641-7d5e2f9bc553-cilium-config-path\") pod \"afac28d7-e69a-40d2-8641-7d5e2f9bc553\" (UID: \"afac28d7-e69a-40d2-8641-7d5e2f9bc553\") "
Jan 24 01:52:29.089694 kubelet[2696]: I0124 01:52:29.088933 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afac28d7-e69a-40d2-8641-7d5e2f9bc553-kube-api-access-vqv4b" (OuterVolumeSpecName: "kube-api-access-vqv4b") pod "afac28d7-e69a-40d2-8641-7d5e2f9bc553" (UID: "afac28d7-e69a-40d2-8641-7d5e2f9bc553"). InnerVolumeSpecName "kube-api-access-vqv4b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 24 01:52:29.090796 kubelet[2696]: I0124 01:52:29.088517 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afac28d7-e69a-40d2-8641-7d5e2f9bc553-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "afac28d7-e69a-40d2-8641-7d5e2f9bc553" (UID: "afac28d7-e69a-40d2-8641-7d5e2f9bc553"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 24 01:52:29.161470 kubelet[2696]: I0124 01:52:29.161380 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-etc-cni-netd\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") "
Jan 24 01:52:29.161470 kubelet[2696]: I0124 01:52:29.161453 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-bpf-maps\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") "
Jan 24 01:52:29.161470 kubelet[2696]: I0124 01:52:29.161482 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-run\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") "
Jan 24 01:52:29.161834 kubelet[2696]: I0124 01:52:29.161523 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3af5ffb-b970-4918-a67e-ee602022fa1d-clustermesh-secrets\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") "
Jan 24 01:52:29.161834 kubelet[2696]: I0124 01:52:29.161566 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc9kt\" (UniqueName: \"kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-kube-api-access-dc9kt\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") "
Jan 24 01:52:29.161834 kubelet[2696]: I0124 01:52:29.161609 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName:
\"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-host-proc-sys-net\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " Jan 24 01:52:29.161834 kubelet[2696]: I0124 01:52:29.161636 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-host-proc-sys-kernel\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " Jan 24 01:52:29.161834 kubelet[2696]: I0124 01:52:29.161661 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-lib-modules\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " Jan 24 01:52:29.161834 kubelet[2696]: I0124 01:52:29.161686 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cni-path\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " Jan 24 01:52:29.163740 kubelet[2696]: I0124 01:52:29.161714 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-hubble-tls\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " Jan 24 01:52:29.163740 kubelet[2696]: I0124 01:52:29.161787 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-config-path\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " Jan 24 01:52:29.163740 kubelet[2696]: I0124 01:52:29.161814 2696 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-hostproc\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " Jan 24 01:52:29.163740 kubelet[2696]: I0124 01:52:29.161836 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-cgroup\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " Jan 24 01:52:29.163740 kubelet[2696]: I0124 01:52:29.161872 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-xtables-lock\") pod \"b3af5ffb-b970-4918-a67e-ee602022fa1d\" (UID: \"b3af5ffb-b970-4918-a67e-ee602022fa1d\") " Jan 24 01:52:29.163740 kubelet[2696]: I0124 01:52:29.161943 2696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vqv4b\" (UniqueName: \"kubernetes.io/projected/afac28d7-e69a-40d2-8641-7d5e2f9bc553-kube-api-access-vqv4b\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.164024 kubelet[2696]: I0124 01:52:29.161966 2696 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afac28d7-e69a-40d2-8641-7d5e2f9bc553-cilium-config-path\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.164024 kubelet[2696]: I0124 01:52:29.162060 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.164024 kubelet[2696]: I0124 01:52:29.162119 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.164024 kubelet[2696]: I0124 01:52:29.162150 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.164024 kubelet[2696]: I0124 01:52:29.162219 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.164273 kubelet[2696]: I0124 01:52:29.162354 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cni-path" (OuterVolumeSpecName: "cni-path") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.165263 kubelet[2696]: I0124 01:52:29.165216 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.165350 kubelet[2696]: I0124 01:52:29.165301 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.165350 kubelet[2696]: I0124 01:52:29.165332 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.165448 kubelet[2696]: I0124 01:52:29.165376 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-hostproc" (OuterVolumeSpecName: "hostproc") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.165904 kubelet[2696]: I0124 01:52:29.165833 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:52:29.169498 kubelet[2696]: I0124 01:52:29.169391 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-kube-api-access-dc9kt" (OuterVolumeSpecName: "kube-api-access-dc9kt") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "kube-api-access-dc9kt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 01:52:29.171690 kubelet[2696]: I0124 01:52:29.170140 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3af5ffb-b970-4918-a67e-ee602022fa1d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 01:52:29.171939 kubelet[2696]: I0124 01:52:29.171911 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 01:52:29.172878 kubelet[2696]: I0124 01:52:29.172837 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b3af5ffb-b970-4918-a67e-ee602022fa1d" (UID: "b3af5ffb-b970-4918-a67e-ee602022fa1d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 01:52:29.262289 kubelet[2696]: I0124 01:52:29.262111 2696 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cni-path\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262289 kubelet[2696]: I0124 01:52:29.262194 2696 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-hubble-tls\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262289 kubelet[2696]: I0124 01:52:29.262217 2696 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-config-path\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262289 kubelet[2696]: I0124 01:52:29.262234 2696 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-hostproc\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262289 kubelet[2696]: I0124 01:52:29.262253 2696 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-cgroup\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262289 kubelet[2696]: I0124 01:52:29.262267 2696 
reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-xtables-lock\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262289 kubelet[2696]: I0124 01:52:29.262281 2696 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-etc-cni-netd\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262289 kubelet[2696]: I0124 01:52:29.262297 2696 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-bpf-maps\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262833 kubelet[2696]: I0124 01:52:29.262313 2696 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-cilium-run\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262833 kubelet[2696]: I0124 01:52:29.262327 2696 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3af5ffb-b970-4918-a67e-ee602022fa1d-clustermesh-secrets\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262833 kubelet[2696]: I0124 01:52:29.262341 2696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dc9kt\" (UniqueName: \"kubernetes.io/projected/b3af5ffb-b970-4918-a67e-ee602022fa1d-kube-api-access-dc9kt\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262833 kubelet[2696]: I0124 01:52:29.262367 2696 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-host-proc-sys-net\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262833 kubelet[2696]: 
I0124 01:52:29.262384 2696 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-host-proc-sys-kernel\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.262833 kubelet[2696]: I0124 01:52:29.262399 2696 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3af5ffb-b970-4918-a67e-ee602022fa1d-lib-modules\") on node \"srv-58cs2.gb1.brightbox.com\" DevicePath \"\"" Jan 24 01:52:29.275060 kubelet[2696]: I0124 01:52:29.273543 2696 scope.go:117] "RemoveContainer" containerID="331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3" Jan 24 01:52:29.289069 containerd[1505]: time="2026-01-24T01:52:29.288946525Z" level=info msg="RemoveContainer for \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\"" Jan 24 01:52:29.294040 systemd[1]: Removed slice kubepods-burstable-podb3af5ffb_b970_4918_a67e_ee602022fa1d.slice - libcontainer container kubepods-burstable-podb3af5ffb_b970_4918_a67e_ee602022fa1d.slice. Jan 24 01:52:29.294241 systemd[1]: kubepods-burstable-podb3af5ffb_b970_4918_a67e_ee602022fa1d.slice: Consumed 10.279s CPU time. Jan 24 01:52:29.313934 systemd[1]: Removed slice kubepods-besteffort-podafac28d7_e69a_40d2_8641_7d5e2f9bc553.slice - libcontainer container kubepods-besteffort-podafac28d7_e69a_40d2_8641_7d5e2f9bc553.slice. 
Jan 24 01:52:29.314633 containerd[1505]: time="2026-01-24T01:52:29.314438054Z" level=info msg="RemoveContainer for \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\" returns successfully" Jan 24 01:52:29.317870 kubelet[2696]: I0124 01:52:29.317595 2696 scope.go:117] "RemoveContainer" containerID="125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37" Jan 24 01:52:29.321183 containerd[1505]: time="2026-01-24T01:52:29.320756586Z" level=info msg="RemoveContainer for \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\"" Jan 24 01:52:29.324859 containerd[1505]: time="2026-01-24T01:52:29.324801465Z" level=info msg="RemoveContainer for \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\" returns successfully" Jan 24 01:52:29.325261 kubelet[2696]: I0124 01:52:29.325205 2696 scope.go:117] "RemoveContainer" containerID="baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe" Jan 24 01:52:29.333010 containerd[1505]: time="2026-01-24T01:52:29.332331185Z" level=info msg="RemoveContainer for \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\"" Jan 24 01:52:29.343334 containerd[1505]: time="2026-01-24T01:52:29.343181935Z" level=info msg="RemoveContainer for \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\" returns successfully" Jan 24 01:52:29.343722 kubelet[2696]: I0124 01:52:29.343687 2696 scope.go:117] "RemoveContainer" containerID="430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b" Jan 24 01:52:29.346040 containerd[1505]: time="2026-01-24T01:52:29.346011397Z" level=info msg="RemoveContainer for \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\"" Jan 24 01:52:29.350634 containerd[1505]: time="2026-01-24T01:52:29.350598732Z" level=info msg="RemoveContainer for \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\" returns successfully" Jan 24 01:52:29.351728 kubelet[2696]: I0124 01:52:29.351700 2696 scope.go:117] 
"RemoveContainer" containerID="df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb" Jan 24 01:52:29.354241 containerd[1505]: time="2026-01-24T01:52:29.354112583Z" level=info msg="RemoveContainer for \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\"" Jan 24 01:52:29.359103 containerd[1505]: time="2026-01-24T01:52:29.359048970Z" level=info msg="RemoveContainer for \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\" returns successfully" Jan 24 01:52:29.360224 kubelet[2696]: I0124 01:52:29.360111 2696 scope.go:117] "RemoveContainer" containerID="331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3" Jan 24 01:52:29.371325 containerd[1505]: time="2026-01-24T01:52:29.362778332Z" level=error msg="ContainerStatus for \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\": not found" Jan 24 01:52:29.379023 kubelet[2696]: E0124 01:52:29.378966 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\": not found" containerID="331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3" Jan 24 01:52:29.384982 kubelet[2696]: I0124 01:52:29.379047 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3"} err="failed to get container status \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"331741e4f426b787bd1fcc4e305fef73630f7b618ddf7c6a6b9d6d479b0a27a3\": not found" Jan 24 01:52:29.385092 kubelet[2696]: I0124 01:52:29.384970 2696 scope.go:117] "RemoveContainer" 
containerID="125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37" Jan 24 01:52:29.385640 containerd[1505]: time="2026-01-24T01:52:29.385496210Z" level=error msg="ContainerStatus for \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\": not found" Jan 24 01:52:29.385729 kubelet[2696]: E0124 01:52:29.385683 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\": not found" containerID="125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37" Jan 24 01:52:29.385729 kubelet[2696]: I0124 01:52:29.385712 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37"} err="failed to get container status \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\": rpc error: code = NotFound desc = an error occurred when try to find container \"125550d0b2dfd29205184367bed8ed9928461cb9740cb4e2043ea799bba3ca37\": not found" Jan 24 01:52:29.385995 kubelet[2696]: I0124 01:52:29.385737 2696 scope.go:117] "RemoveContainer" containerID="baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe" Jan 24 01:52:29.386642 containerd[1505]: time="2026-01-24T01:52:29.386456700Z" level=error msg="ContainerStatus for \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\": not found" Jan 24 01:52:29.387153 kubelet[2696]: E0124 01:52:29.386664 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\": not found" containerID="baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe" Jan 24 01:52:29.387153 kubelet[2696]: I0124 01:52:29.386776 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe"} err="failed to get container status \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"baae98dba7fcb18d2280012f596edf21d1a02565af0b7094a6e2797892fed5fe\": not found" Jan 24 01:52:29.387153 kubelet[2696]: I0124 01:52:29.386800 2696 scope.go:117] "RemoveContainer" containerID="430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b" Jan 24 01:52:29.387443 kubelet[2696]: E0124 01:52:29.387211 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\": not found" containerID="430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b" Jan 24 01:52:29.387443 kubelet[2696]: I0124 01:52:29.387241 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b"} err="failed to get container status \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\": rpc error: code = NotFound desc = an error occurred when try to find container \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\": not found" Jan 24 01:52:29.387443 kubelet[2696]: I0124 01:52:29.387263 2696 scope.go:117] "RemoveContainer" containerID="df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb" Jan 24 01:52:29.387609 containerd[1505]: 
time="2026-01-24T01:52:29.387041996Z" level=error msg="ContainerStatus for \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"430762406843172110c628f41635d16f0a20e3a09ee0864271591dce8685176b\": not found" Jan 24 01:52:29.388018 containerd[1505]: time="2026-01-24T01:52:29.387910082Z" level=error msg="ContainerStatus for \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\": not found" Jan 24 01:52:29.388196 kubelet[2696]: E0124 01:52:29.388111 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\": not found" containerID="df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb" Jan 24 01:52:29.388593 kubelet[2696]: I0124 01:52:29.388186 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb"} err="failed to get container status \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"df97e2ced744f1c41df0bd51e8b2151619b7118bd0f3488633bcf5248f9844bb\": not found" Jan 24 01:52:29.388593 kubelet[2696]: I0124 01:52:29.388215 2696 scope.go:117] "RemoveContainer" containerID="1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16" Jan 24 01:52:29.390632 containerd[1505]: time="2026-01-24T01:52:29.390398299Z" level=info msg="RemoveContainer for \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\"" Jan 24 01:52:29.396492 containerd[1505]: time="2026-01-24T01:52:29.396464717Z" level=info 
msg="RemoveContainer for \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\" returns successfully" Jan 24 01:52:29.397243 kubelet[2696]: I0124 01:52:29.396882 2696 scope.go:117] "RemoveContainer" containerID="1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16" Jan 24 01:52:29.397361 containerd[1505]: time="2026-01-24T01:52:29.397134790Z" level=error msg="ContainerStatus for \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\": not found" Jan 24 01:52:29.397466 kubelet[2696]: E0124 01:52:29.397286 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\": not found" containerID="1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16" Jan 24 01:52:29.397466 kubelet[2696]: I0124 01:52:29.397315 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16"} err="failed to get container status \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c148fb2447835cd0d82a8d7a940872165f8f2be7c8b104f2381385d29ccbd16\": not found" Jan 24 01:52:29.730895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae-rootfs.mount: Deactivated successfully. Jan 24 01:52:29.731031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d9e9a969fd589771323fa342af9fa790f4a5150bdc151942bf517e9b16f94ae-shm.mount: Deactivated successfully. 
Jan 24 01:52:29.731147 systemd[1]: var-lib-kubelet-pods-b3af5ffb\x2db970\x2d4918\x2da67e\x2dee602022fa1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddc9kt.mount: Deactivated successfully. Jan 24 01:52:29.731286 systemd[1]: var-lib-kubelet-pods-afac28d7\x2de69a\x2d40d2\x2d8641\x2d7d5e2f9bc553-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvqv4b.mount: Deactivated successfully. Jan 24 01:52:29.731405 systemd[1]: var-lib-kubelet-pods-b3af5ffb\x2db970\x2d4918\x2da67e\x2dee602022fa1d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 24 01:52:29.731561 systemd[1]: var-lib-kubelet-pods-b3af5ffb\x2db970\x2d4918\x2da67e\x2dee602022fa1d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 24 01:52:30.713195 sshd[4298]: pam_unix(sshd:session): session closed for user core Jan 24 01:52:30.716193 kubelet[2696]: I0124 01:52:30.715516 2696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afac28d7-e69a-40d2-8641-7d5e2f9bc553" path="/var/lib/kubelet/pods/afac28d7-e69a-40d2-8641-7d5e2f9bc553/volumes" Jan 24 01:52:30.717220 kubelet[2696]: I0124 01:52:30.716807 2696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3af5ffb-b970-4918-a67e-ee602022fa1d" path="/var/lib/kubelet/pods/b3af5ffb-b970-4918-a67e-ee602022fa1d/volumes" Jan 24 01:52:30.718408 systemd[1]: sshd@24-10.230.77.170:22-20.161.92.111:37642.service: Deactivated successfully. Jan 24 01:52:30.721245 systemd[1]: session-27.scope: Deactivated successfully. Jan 24 01:52:30.721665 systemd[1]: session-27.scope: Consumed 1.167s CPU time. Jan 24 01:52:30.723554 systemd-logind[1490]: Session 27 logged out. Waiting for processes to exit. Jan 24 01:52:30.725000 systemd-logind[1490]: Removed session 27. Jan 24 01:52:30.821565 systemd[1]: Started sshd@25-10.230.77.170:22-20.161.92.111:37648.service - OpenSSH per-connection server daemon (20.161.92.111:37648). 
Jan 24 01:52:31.397598 sshd[4460]: Accepted publickey for core from 20.161.92.111 port 37648 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms Jan 24 01:52:31.400455 sshd[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:52:31.407911 systemd-logind[1490]: New session 28 of user core. Jan 24 01:52:31.421442 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 24 01:52:31.546622 kubelet[2696]: I0124 01:52:31.545975 2696 setters.go:618] "Node became not ready" node="srv-58cs2.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T01:52:31Z","lastTransitionTime":"2026-01-24T01:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 24 01:52:32.923233 systemd[1]: Created slice kubepods-burstable-podbc0a0bd0_2593_4dcb_a505_21cbed0a61f4.slice - libcontainer container kubepods-burstable-podbc0a0bd0_2593_4dcb_a505_21cbed0a61f4.slice. Jan 24 01:52:32.968996 sshd[4460]: pam_unix(sshd:session): session closed for user core Jan 24 01:52:32.979059 systemd[1]: sshd@25-10.230.77.170:22-20.161.92.111:37648.service: Deactivated successfully. Jan 24 01:52:32.986469 systemd[1]: session-28.scope: Deactivated successfully. Jan 24 01:52:32.986711 systemd[1]: session-28.scope: Consumed 1.103s CPU time. 
Jan 24 01:52:32.990371 kubelet[2696]: I0124 01:52:32.989599 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-host-proc-sys-net\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.990371 kubelet[2696]: I0124 01:52:32.989666 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czhvn\" (UniqueName: \"kubernetes.io/projected/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-kube-api-access-czhvn\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.990371 kubelet[2696]: I0124 01:52:32.989700 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-cilium-ipsec-secrets\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.990371 kubelet[2696]: I0124 01:52:32.989736 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-cilium-run\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.990371 kubelet[2696]: I0124 01:52:32.989772 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-bpf-maps\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.990371 kubelet[2696]: I0124 01:52:32.989834 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-cilium-cgroup\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.991041 kubelet[2696]: I0124 01:52:32.989864 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-hostproc\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.991041 kubelet[2696]: I0124 01:52:32.989889 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-cni-path\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.991041 kubelet[2696]: I0124 01:52:32.989929 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-etc-cni-netd\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.991041 kubelet[2696]: I0124 01:52:32.989970 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-lib-modules\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.991041 kubelet[2696]: I0124 01:52:32.990000 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-xtables-lock\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.991041 kubelet[2696]: I0124 01:52:32.990033 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-cilium-config-path\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.992901 kubelet[2696]: I0124 01:52:32.990062 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-host-proc-sys-kernel\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.992901 kubelet[2696]: I0124 01:52:32.990086 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-hubble-tls\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.992901 kubelet[2696]: I0124 01:52:32.990116 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc0a0bd0-2593-4dcb-a505-21cbed0a61f4-clustermesh-secrets\") pod \"cilium-wghgf\" (UID: \"bc0a0bd0-2593-4dcb-a505-21cbed0a61f4\") " pod="kube-system/cilium-wghgf"
Jan 24 01:52:32.992377 systemd-logind[1490]: Session 28 logged out. Waiting for processes to exit.
Jan 24 01:52:32.995853 systemd-logind[1490]: Removed session 28.
Jan 24 01:52:33.072708 systemd[1]: Started sshd@26-10.230.77.170:22-20.161.92.111:45392.service - OpenSSH per-connection server daemon (20.161.92.111:45392).
Jan 24 01:52:33.230807 containerd[1505]: time="2026-01-24T01:52:33.230628163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wghgf,Uid:bc0a0bd0-2593-4dcb-a505-21cbed0a61f4,Namespace:kube-system,Attempt:0,}"
Jan 24 01:52:33.269270 containerd[1505]: time="2026-01-24T01:52:33.265614092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 01:52:33.269270 containerd[1505]: time="2026-01-24T01:52:33.265706329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 01:52:33.269270 containerd[1505]: time="2026-01-24T01:52:33.265729654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 01:52:33.269270 containerd[1505]: time="2026-01-24T01:52:33.265866674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 01:52:33.300594 systemd[1]: Started cri-containerd-632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83.scope - libcontainer container 632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83.
Jan 24 01:52:33.344189 containerd[1505]: time="2026-01-24T01:52:33.344122446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wghgf,Uid:bc0a0bd0-2593-4dcb-a505-21cbed0a61f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\""
Jan 24 01:52:33.351840 containerd[1505]: time="2026-01-24T01:52:33.351770533Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 24 01:52:33.367596 containerd[1505]: time="2026-01-24T01:52:33.367409967Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ddec6d4ee7019b6f07d2c6a30c24299de4b930fab485b19a31ed0a1ff80ba63d\""
Jan 24 01:52:33.368957 containerd[1505]: time="2026-01-24T01:52:33.368593927Z" level=info msg="StartContainer for \"ddec6d4ee7019b6f07d2c6a30c24299de4b930fab485b19a31ed0a1ff80ba63d\""
Jan 24 01:52:33.401394 systemd[1]: Started cri-containerd-ddec6d4ee7019b6f07d2c6a30c24299de4b930fab485b19a31ed0a1ff80ba63d.scope - libcontainer container ddec6d4ee7019b6f07d2c6a30c24299de4b930fab485b19a31ed0a1ff80ba63d.
Jan 24 01:52:33.438021 containerd[1505]: time="2026-01-24T01:52:33.437810426Z" level=info msg="StartContainer for \"ddec6d4ee7019b6f07d2c6a30c24299de4b930fab485b19a31ed0a1ff80ba63d\" returns successfully"
Jan 24 01:52:33.453497 systemd[1]: cri-containerd-ddec6d4ee7019b6f07d2c6a30c24299de4b930fab485b19a31ed0a1ff80ba63d.scope: Deactivated successfully.
Jan 24 01:52:33.495506 containerd[1505]: time="2026-01-24T01:52:33.495078951Z" level=info msg="shim disconnected" id=ddec6d4ee7019b6f07d2c6a30c24299de4b930fab485b19a31ed0a1ff80ba63d namespace=k8s.io
Jan 24 01:52:33.495506 containerd[1505]: time="2026-01-24T01:52:33.495149429Z" level=warning msg="cleaning up after shim disconnected" id=ddec6d4ee7019b6f07d2c6a30c24299de4b930fab485b19a31ed0a1ff80ba63d namespace=k8s.io
Jan 24 01:52:33.495506 containerd[1505]: time="2026-01-24T01:52:33.495197500Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 01:52:33.642044 sshd[4471]: Accepted publickey for core from 20.161.92.111 port 45392 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:33.644258 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:33.651925 systemd-logind[1490]: New session 29 of user core.
Jan 24 01:52:33.665468 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 24 01:52:33.953037 kubelet[2696]: E0124 01:52:33.952930 2696 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 24 01:52:34.042263 sshd[4471]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:34.046487 systemd-logind[1490]: Session 29 logged out. Waiting for processes to exit.
Jan 24 01:52:34.047704 systemd[1]: sshd@26-10.230.77.170:22-20.161.92.111:45392.service: Deactivated successfully.
Jan 24 01:52:34.050272 systemd[1]: session-29.scope: Deactivated successfully.
Jan 24 01:52:34.052914 systemd-logind[1490]: Removed session 29.
Jan 24 01:52:34.146593 systemd[1]: Started sshd@27-10.230.77.170:22-20.161.92.111:45400.service - OpenSSH per-connection server daemon (20.161.92.111:45400).
Jan 24 01:52:34.326318 containerd[1505]: time="2026-01-24T01:52:34.326096244Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 24 01:52:34.359358 containerd[1505]: time="2026-01-24T01:52:34.358651220Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77\""
Jan 24 01:52:34.361189 containerd[1505]: time="2026-01-24T01:52:34.359668831Z" level=info msg="StartContainer for \"3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77\""
Jan 24 01:52:34.436647 systemd[1]: Started cri-containerd-3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77.scope - libcontainer container 3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77.
Jan 24 01:52:34.496728 containerd[1505]: time="2026-01-24T01:52:34.496666812Z" level=info msg="StartContainer for \"3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77\" returns successfully"
Jan 24 01:52:34.509679 systemd[1]: cri-containerd-3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77.scope: Deactivated successfully.
Jan 24 01:52:34.542042 containerd[1505]: time="2026-01-24T01:52:34.541897450Z" level=info msg="shim disconnected" id=3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77 namespace=k8s.io
Jan 24 01:52:34.542042 containerd[1505]: time="2026-01-24T01:52:34.541999354Z" level=warning msg="cleaning up after shim disconnected" id=3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77 namespace=k8s.io
Jan 24 01:52:34.542917 containerd[1505]: time="2026-01-24T01:52:34.542124629Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 01:52:34.728130 sshd[4582]: Accepted publickey for core from 20.161.92.111 port 45400 ssh2: RSA SHA256:FPFzCV4zLNLFW0li5+2YVHAOtmV9qYriDYvKoYmY/ms
Jan 24 01:52:34.730785 sshd[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 01:52:34.737679 systemd-logind[1490]: New session 30 of user core.
Jan 24 01:52:34.745580 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 24 01:52:35.102693 systemd[1]: run-containerd-runc-k8s.io-3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77-runc.LHa0hh.mount: Deactivated successfully.
Jan 24 01:52:35.102884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3137d92ba9cfda17c19a426969cf220f229bcfef3cdb675ff4aa109308c5ca77-rootfs.mount: Deactivated successfully.
Jan 24 01:52:35.328215 containerd[1505]: time="2026-01-24T01:52:35.327473328Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 24 01:52:35.347128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187788293.mount: Deactivated successfully.
Jan 24 01:52:35.349670 containerd[1505]: time="2026-01-24T01:52:35.349602104Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"39f807876da88fa1ad49c3231b86fca5f940fa6c4e0e62c1841bfe0f9d7fa12b\""
Jan 24 01:52:35.352063 containerd[1505]: time="2026-01-24T01:52:35.352024163Z" level=info msg="StartContainer for \"39f807876da88fa1ad49c3231b86fca5f940fa6c4e0e62c1841bfe0f9d7fa12b\""
Jan 24 01:52:35.410380 systemd[1]: Started cri-containerd-39f807876da88fa1ad49c3231b86fca5f940fa6c4e0e62c1841bfe0f9d7fa12b.scope - libcontainer container 39f807876da88fa1ad49c3231b86fca5f940fa6c4e0e62c1841bfe0f9d7fa12b.
Jan 24 01:52:35.460693 containerd[1505]: time="2026-01-24T01:52:35.460643492Z" level=info msg="StartContainer for \"39f807876da88fa1ad49c3231b86fca5f940fa6c4e0e62c1841bfe0f9d7fa12b\" returns successfully"
Jan 24 01:52:35.470832 systemd[1]: cri-containerd-39f807876da88fa1ad49c3231b86fca5f940fa6c4e0e62c1841bfe0f9d7fa12b.scope: Deactivated successfully.
Jan 24 01:52:35.502203 containerd[1505]: time="2026-01-24T01:52:35.502070458Z" level=info msg="shim disconnected" id=39f807876da88fa1ad49c3231b86fca5f940fa6c4e0e62c1841bfe0f9d7fa12b namespace=k8s.io
Jan 24 01:52:35.502203 containerd[1505]: time="2026-01-24T01:52:35.502143913Z" level=warning msg="cleaning up after shim disconnected" id=39f807876da88fa1ad49c3231b86fca5f940fa6c4e0e62c1841bfe0f9d7fa12b namespace=k8s.io
Jan 24 01:52:35.502203 containerd[1505]: time="2026-01-24T01:52:35.502158418Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 01:52:36.101848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39f807876da88fa1ad49c3231b86fca5f940fa6c4e0e62c1841bfe0f9d7fa12b-rootfs.mount: Deactivated successfully.
Jan 24 01:52:36.335409 containerd[1505]: time="2026-01-24T01:52:36.334285079Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 24 01:52:36.369024 containerd[1505]: time="2026-01-24T01:52:36.367188363Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03\""
Jan 24 01:52:36.371070 containerd[1505]: time="2026-01-24T01:52:36.370899204Z" level=info msg="StartContainer for \"13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03\""
Jan 24 01:52:36.432387 systemd[1]: Started cri-containerd-13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03.scope - libcontainer container 13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03.
Jan 24 01:52:36.474908 systemd[1]: cri-containerd-13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03.scope: Deactivated successfully.
Jan 24 01:52:36.477749 containerd[1505]: time="2026-01-24T01:52:36.477693366Z" level=info msg="StartContainer for \"13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03\" returns successfully"
Jan 24 01:52:36.512713 containerd[1505]: time="2026-01-24T01:52:36.512618337Z" level=info msg="shim disconnected" id=13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03 namespace=k8s.io
Jan 24 01:52:36.512713 containerd[1505]: time="2026-01-24T01:52:36.512689429Z" level=warning msg="cleaning up after shim disconnected" id=13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03 namespace=k8s.io
Jan 24 01:52:36.512713 containerd[1505]: time="2026-01-24T01:52:36.512704547Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 01:52:37.102325 systemd[1]: run-containerd-runc-k8s.io-13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03-runc.NuLAMh.mount: Deactivated successfully.
Jan 24 01:52:37.102497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13f5fbc6a90bc5fb37bbbf54b9015793767124d716e169a87cf887405ef19d03-rootfs.mount: Deactivated successfully.
Jan 24 01:52:37.341584 containerd[1505]: time="2026-01-24T01:52:37.341521783Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 24 01:52:37.373679 containerd[1505]: time="2026-01-24T01:52:37.373056019Z" level=info msg="CreateContainer within sandbox \"632b14aa9621dedce8509dc415b1ded2a460be7a528cf1d81bb8f01cc0717d83\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"df739f664077230fb212b3e2282d14843e691598a3acfd3eddfd0294d4592a5c\""
Jan 24 01:52:37.375579 containerd[1505]: time="2026-01-24T01:52:37.375295321Z" level=info msg="StartContainer for \"df739f664077230fb212b3e2282d14843e691598a3acfd3eddfd0294d4592a5c\""
Jan 24 01:52:37.428402 systemd[1]: Started cri-containerd-df739f664077230fb212b3e2282d14843e691598a3acfd3eddfd0294d4592a5c.scope - libcontainer container df739f664077230fb212b3e2282d14843e691598a3acfd3eddfd0294d4592a5c.
Jan 24 01:52:37.473009 containerd[1505]: time="2026-01-24T01:52:37.472934352Z" level=info msg="StartContainer for \"df739f664077230fb212b3e2282d14843e691598a3acfd3eddfd0294d4592a5c\" returns successfully"
Jan 24 01:52:38.198462 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 24 01:52:38.370369 kubelet[2696]: I0124 01:52:38.370242 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wghgf" podStartSLOduration=6.370149187 podStartE2EDuration="6.370149187s" podCreationTimestamp="2026-01-24 01:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:52:38.368447093 +0000 UTC m=+150.008628633" watchObservedRunningTime="2026-01-24 01:52:38.370149187 +0000 UTC m=+150.010330716"
Jan 24 01:52:42.016426 systemd-networkd[1432]: lxc_health: Link UP
Jan 24 01:52:42.029438 systemd-networkd[1432]: lxc_health: Gained carrier
Jan 24 01:52:43.981827 systemd[1]: run-containerd-runc-k8s.io-df739f664077230fb212b3e2282d14843e691598a3acfd3eddfd0294d4592a5c-runc.UqtH08.mount: Deactivated successfully.
Jan 24 01:52:43.991556 systemd-networkd[1432]: lxc_health: Gained IPv6LL
Jan 24 01:52:46.210858 systemd[1]: run-containerd-runc-k8s.io-df739f664077230fb212b3e2282d14843e691598a3acfd3eddfd0294d4592a5c-runc.xyrbkB.mount: Deactivated successfully.
Jan 24 01:52:48.576909 sshd[4582]: pam_unix(sshd:session): session closed for user core
Jan 24 01:52:48.582142 systemd[1]: sshd@27-10.230.77.170:22-20.161.92.111:45400.service: Deactivated successfully.
Jan 24 01:52:48.586843 systemd[1]: session-30.scope: Deactivated successfully.
Jan 24 01:52:48.589242 systemd-logind[1490]: Session 30 logged out. Waiting for processes to exit.
Jan 24 01:52:48.593671 systemd-logind[1490]: Removed session 30.