
Running LLMs Locally on AMD GPUs with Ollama

AMD_AI
Staff

Running large language models (LLMs) locally on AMD systems has become more accessible, thanks to Ollama. This guide focuses on the latest Llama 3.2 model, published by Meta on September 25th, 2024. With Llama 3.2, Meta goes small and multimodal, offering 1B, 3B, 11B, and 90B models. Here's how you can run these models on various AMD hardware configurations, along with a step-by-step installation guide for Ollama on both Linux and Windows operating systems with Radeon GPUs.


Supported AMD GPUs 

Ollama supports a range of AMD GPUs, covering both newer and older models. You can find the list of GPUs supported by Ollama here: https://github.com/ollama/ollama/blob/main/docs/gpu.md#amd-radeon
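If you want to confirm that your Radeon GPU is visible to the ROCm stack on Linux before installing Ollama, the standard ROCm utilities can serve as an optional sanity check (this assumes the AMD ROCm drivers are already installed):

    rocminfo | grep -i gfx    # lists the GPU architecture, e.g. gfx1100 for a Radeon RX 7900 XTX
    rocm-smi                  # reports GPU utilization, temperature, and VRAM usage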

Installation and Setup Guide for Ollama 

Linux 

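  • Download and install Ollama. The Linux install is a single shell script published on ollama.com (shown here as of this writing; check https://ollama.com/download for the current command):

    curl -fsSL https://ollama.com/install.sh | sh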

  • Download and run the Llama 3.2 model:
    • ollama run llama3.2
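Once the model is loaded, you can verify that it is actually running on the GPU. One quick check (assuming a recent Ollama release) is the ollama ps command, whose PROCESSOR column reports how much of the model is offloaded to the GPU:

    ollama ps    # PROCESSOR should read "100% GPU" when the model fits entirely in VRAM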

Windows

  • System Requirements:
    • Windows 10 or Higher
    • Supported AMD GPUs with driver installed
  • For Windows, you can simply download and install Ollama from here:
    https://ollama.com/download
    Once installed, open PowerShell and run:
    • ollama run llama3.2


It's as simple as that: you are ready to chat with your local LLM.
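Beyond the interactive prompt, Ollama also serves a local REST API (http://localhost:11434 by default), so your own scripts and applications can query the model. A minimal example against the documented /api/generate endpoint, shown here from a Linux shell:

    curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Why is the sky blue?"}'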


 

You can find the list of all available models in the Ollama library here: https://ollama.com/library
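Many models in the library are published under several size tags. For example, if you are VRAM-constrained you can pull the smaller Llama 3.2 variants explicitly (the 1B and 3B tags are listed on the llama3.2 library page):

    ollama run llama3.2:1b    # 1B-parameter variant, smallest memory footprint
    ollama run llama3.2:3b    # 3B-parameter variant (the default llama3.2 tag)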

Conclusion

The extensive support for AMD GPUs in Ollama demonstrates the growing accessibility of running LLMs locally. From consumer-grade AMD Radeon RX graphics cards to high-end AMD Instinct accelerators, users have a wide range of options for running models like Llama 3.2 on their own hardware. This flexible approach to enabling innovative LLMs across AMD's broad AI portfolio allows for greater experimentation, privacy, and customization in AI applications across various sectors.