Hello,
I need help getting models to run with ROCm. I've tried everything I could find by googling, including reinstalling. Any ideas or help would be appreciated.
Details:
Windows 11
GPU: AMD Radeon RX 7900 XTX, Adrenalin Edition 25.1.1
CPU: AMD Ryzen 7 7700X (iGPU disabled in BIOS)
Driver installation attempts:
- Uninstalled drivers + reinstalled: GPU not found / GPU survey unsuccessful.
- Uninstalled + reinstalled with the Factory Reset checkbox: GPU not found / GPU survey unsuccessful.
- Previously, on older driver version 24.10.1, Ollama did not work; LM Studio worked with the ROCm llama.cpp 1.8 runtime (tested with Meta Llama 3.1 8B) but would not load DeepSeek models.
With the iGPU enabled, both apps find the iGPU, but neither can load models with ROCm.
With LM Studio I can use Vulkan and it works.
In LM Studio (0.3.9 build 3), on any ROCm runtime I get "Compatibility: GPU survey unsuccessful" and cannot select the GPU from the dropdown.
Ollama does not detect the AMD GPU at all. Ollama version is 0.5.7. Server log:
2025/01/30 16:23:33 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES:1 HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Basas\\.ollama\\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-01-30T16:23:33.749+02:00 level=INFO source=images.go:432 msg="total blobs: 7"
time=2025-01-30T16:23:33.750+02:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-01-30T16:23:33.751+02:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-01-30T16:23:33.752+02:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-01-30T16:23:33.752+02:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-30T16:23:33.752+02:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-01-30T16:23:33.752+02:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=8 efficiency=0 threads=16
time=2025-01-30T16:23:34.106+02:00 level=INFO source=amd_hip_windows.go:103 msg="AMD ROCm reports no devices found"
time=2025-01-30T16:23:34.106+02:00 level=INFO source=amd_windows.go:50 msg="no compatible amdgpu devices detected"
time=2025-01-30T16:23:34.107+02:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-01-30T16:23:34.107+02:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.7 GiB" available="24.0 GiB"
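One thing I'm unsure about: the server config line above shows HIP_VISIBLE_DEVICES:1, and since ROCm device indices start at 0, on a single-GPU system an index of 1 would leave no visible device. In case it's relevant, this is roughly how I check and clear it before restarting the server (bash syntax shown here; on Windows PowerShell the equivalents would be $env:HIP_VISIBLE_DEVICES and Remove-Item Env:HIP_VISIBLE_DEVICES):

```shell
# Show whether HIP_VISIBLE_DEVICES is set. ROCm/HIP uses this variable to
# filter which GPUs are visible; with a single GPU, only index 0 is valid,
# so a value of 1 would hide the 7900 XTX entirely.
echo "HIP_VISIBLE_DEVICES=${HIP_VISIBLE_DEVICES:-<unset>}"

# Clear it so ROCm enumerates every device, then restart the Ollama server.
unset HIP_VISIBLE_DEVICES
echo "after unset: ${HIP_VISIBLE_DEVICES:-<unset>}"
```

I haven't confirmed this is the cause, but it's the only GPU-related variable in the log that isn't empty.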
LM Studio System Resources info (note that its Vulkan survey does see the 7900 XTX):
[
  {
    "modelCompatibilityType": "gguf",
    "runtime": {
      "hardwareSurveyResult": {
        "compatibility": {
          "status": "Compatible"
        },
        "cpuSurveyResult": {
          "result": {
            "code": "Success",
            "message": ""
          },
          "cpuInfo": {
            "architecture": "x86_64",
            "supportedInstructionSetExtensions": [
              "AVX",
              "AVX2"
            ]
          }
        },
        "memoryInfo": {
          "ramCapacity": 34080616448,
          "vramCapacity": 25753026560,
          "totalMemory": 59833643008
        },
        "gpuSurveyResult": {
          "result": {
            "code": "Success",
            "message": ""
          },
          "gpuInfo": [
            {
              "name": "AMD Radeon RX 7900 XTX",
              "deviceId": 0,
              "totalMemoryCapacityBytes": 42523951104,
              "dedicatedMemoryCapacityBytes": 25753026560,
              "integrationType": "Discrete",
              "detectionPlatform": "Vulkan",
              "detectionPlatformVersion": "1.3.283",
              "otherInfo": {
                "deviceLUIDValid": "true",
                "deviceLUID": "4f3c5c0000000000",
                "deviceUUID": "00000000030000000000000000000000",
                "driverID": "1",
                "driverName": "AMD proprietary driver"
              }
            }
          ]
        }
      }
    }
  }
]