Basas

ROCm Windows 11 Adrenalin Edition 25.1.1 LM studio and Ollama 7900xtx - gpu not found

Hello,

I need help getting models to start with ROCm; I've tried everything I could find on Google, including reinstalling. Any ideas or help would be appreciated.
Details:
Windows 11
GPU: AMD Radeon RX 7900 XTX - Adrenalin Edition 25.1.1
CPU: AMD Ryzen 7700X - iGPU disabled in BIOS

Driver installations:

  1. Uninstalled the drivers and reinstalled: GPU not found / GPU survey unsuccessful.
  2. Uninstalled and reinstalled with the Factory Reset checkbox ticked: GPU not found / GPU survey unsuccessful (a quick sanity check is sketched below).
  3. Previously I tried the older driver version 24.10.1: Ollama did not work, and LM Studio worked with the ROCm llama.cpp 1.8 runtime but would not load DeepSeek models (tested Meta Llama 3.1 8B).
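
(A quick sanity check between reinstalls - PowerShell can show whether Windows itself sees the card and which driver version is bound, independent of LM Studio or Ollama:)

# List GPUs and drivers as Windows sees them (built-in CIM cmdlet)
Get-CimInstance Win32_VideoController |
    Select-Object Name, DriverVersion, Status

If the 7900 XTX is missing or shows an error Status here, the problem sits below the application layer.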

With the iGPU enabled, both apps find the iGPU, but neither can load models with ROCm.

With LM Studio I can use Vulkan, and it works.

In LM Studio (0.3.9 build 3), on any ROCm runtime, I get "GPU survey unsuccessful" under Compatibility and cannot select the GPU from the dropdown.

[Screenshot: LM Studio compatibility check - GPU survey unsuccessful]


Ollama (version 0.5.7) does not detect the AMD GPU either. Server log:

2025/01/30 16:23:33 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES:1 HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Basas\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-01-30T16:23:33.749+02:00 level=INFO source=images.go:432 msg="total blobs: 7"
time=2025-01-30T16:23:33.750+02:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-01-30T16:23:33.751+02:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-01-30T16:23:33.752+02:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-01-30T16:23:33.752+02:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-30T16:23:33.752+02:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-01-30T16:23:33.752+02:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-01-30T16:23:34.106+02:00 level=INFO source=amd_hip_windows.go:103 msg="AMD ROCm reports no devices found"
time=2025-01-30T16:23:34.106+02:00 level=INFO source=amd_windows.go:50 msg="no compatible amdgpu devices detected"
time=2025-01-30T16:23:34.107+02:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-01-30T16:23:34.107+02:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.7 GiB" available="24.0 GiB"
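
(The config dump above shows OLLAMA_DEBUG:false; one thing worth trying is re-running the server with debug logging for more detail on why GPU discovery fails. In PowerShell:)

$env:OLLAMA_DEBUG = "1"   # session-only; affects just this shell
ollama serve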

LM Studio System Resource info:

[Screenshot: LM Studio System Resources panel]

[
  {
    "modelCompatibilityType": "gguf",
    "runtime": {
      "hardwareSurveyResult": {
        "compatibility": {
          "status": "Compatible"
        },
        "cpuSurveyResult": {
          "result": {
            "code": "Success",
            "message": ""
          },
          "cpuInfo": {
            "architecture": "x86_64",
            "supportedInstructionSetExtensions": [
              "AVX",
              "AVX2"
            ]
          }
        },
        "memoryInfo": {
          "ramCapacity": 34080616448,
          "vramCapacity": 25753026560,
          "totalMemory": 59833643008
        },
        "gpuSurveyResult": {
          "result": {
            "code": "Success",
            "message": ""
          },
          "gpuInfo": [
            {
              "name": "AMD Radeon RX 7900 XTX",
              "deviceId": 0,
              "totalMemoryCapacityBytes": 42523951104,
              "dedicatedMemoryCapacityBytes": 25753026560,
              "integrationType": "Discrete",
              "detectionPlatform": "Vulkan",
              "detectionPlatformVersion": "1.3.283",
              "otherInfo": {
                "deviceLUIDValid": "true",
                "deviceLUID": "4f3c5c0000000000",
                "deviceUUID": "00000000030000000000000000000000",
                "driverID": "1",
                "driverName": "AMD proprietary driver"
              }
            }
          ]
        }
      }
    }
  }
]
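
(For reference, the byte counts in the Vulkan survey line up with the card; in PowerShell, where 1GB means 2^30 bytes:)

25753026560 / 1GB   # dedicated VRAM: ~23.98 GiB = the 7900 XTX's 24 GB
42523951104 / 1GB   # dedicated + shared: ~39.6 GiB

So Vulkan sees the full card; only the ROCm path fails to detect it.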

Basas

It took a while, but it finally works with the new 0.9 release.

SOLUTION: Ollama 0.9 + Adrenalin 25.6.1 + the latest HIP SDK, then delete both environment variables HIP_VISIBLE_DEVICES and HSA_OVERRIDE_GFX_VERSION.
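
If you want to remove them from PowerShell rather than the System Properties dialog (either works; clear whichever scope the variables were set at - the Machine scope needs an elevated shell), then restart Ollama so it picks up the change:

# Passing $null as the value deletes the variable at that scope
[Environment]::SetEnvironmentVariable('HIP_VISIBLE_DEVICES', $null, 'User')
[Environment]::SetEnvironmentVariable('HIP_VISIBLE_DEVICES', $null, 'Machine')
[Environment]::SetEnvironmentVariable('HSA_OVERRIDE_GFX_VERSION', $null, 'User')
[Environment]::SetEnvironmentVariable('HSA_OVERRIDE_GFX_VERSION', $null, 'Machine')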


With Ollama 0.9 it started reporting two GPUs on my PC:
0: AMD 7700X integrated GPU
1: AMD Radeon RX 7900 XTX

Once Ollama, the AMD drivers, and the HIP SDK were installed, one environment variable was set by default:
HIP_VISIBLE_DEVICES=1
This by itself somehow caused Ollama to go crazy.
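
(You can check whether the installer set it on your machine too - look at both scopes:)

[Environment]::GetEnvironmentVariable('HIP_VISIBLE_DEVICES', 'User')      # prints 1 if set at User scope
[Environment]::GetEnvironmentVariable('HIP_VISIBLE_DEVICES', 'Machine')   # ...or at Machine scope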

1. ollama serve - says all is good, 7900 XTX detected:

time=2025-06-12T00:10:25.547+03:00 level=INFO source=routes.go:1287 msg="Listening on 127.0.0.1:11434 (version 0.9.0)"
time=2025-06-12T00:10:25.547+03:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-06-12T00:10:25.547+03:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-06-12T00:10:25.547+03:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-06-12T00:10:25.913+03:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=rocm variant="" compute=gfx1100 driver=6.4 name="AMD Radeon RX 7900 XTX" total="24.0 GiB" available="23.8 GiB"

2. ollama run llama3 - somehow Ollama + ROCm bugs started happening: for whatever reason it still tried to load on the integrated GPU:

llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon(TM) Graphics) - 12012 MiB free

3. While troubleshooting I tried all values for HIP_VISIBLE_DEVICES (1, and also 0 and 2) and added a new variable suggested in other issue posts, HSA_OVERRIDE_GFX_VERSION=1100 (tried 1100, gfx1100, etc.) - see the sketch after this list.

4. Ollama would load models on either the CPU alone or the AMD integrated GPU + system RAM.

5. On the CPU / AMD integrated GPU the loaded models worked, but they were SLOW.

6. SOLUTION: deleted both HIP_VISIBLE_DEVICES and HSA_OVERRIDE_GFX_VERSION.

After deleting them I finally saw models load on the discrete GPU, and tokens/sec skyrocketed compared to the CPU.
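
If anyone wants to retrace steps 3-6 without touching the persistent variables, a PowerShell sketch ($env: assignments affect only the current shell and its children; ollama ps and the --verbose flag are standard Ollama CLI features):

# Try a value for one session only:
$env:HIP_VISIBLE_DEVICES = "0"            # values I tried: 0, 1, 2
$env:HSA_OVERRIDE_GFX_VERSION = "1100"    # values I tried: 1100, gfx1100
ollama serve

# In another shell, confirm which device the model actually landed on:
ollama ps                     # PROCESSOR column should read 100% GPU
ollama run llama3 --verbose   # prints eval rate (tokens/s) after the response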
