Latest AMD blog on DeepSeek-R1 - missing models
The latest blog post from AMD AI staff recommends specific models per GPU. However, the recommended model DeepSeek-R1-Distill-Llama-14B is nowhere to be found in the suggested LM Studio app.
Are there alternative download sources, anybody?
Shouldn't there be far more recommendations of Llama-based models for older AMD GPUs?
Will AMD publish optimized models to download for typical combinations of GPU architecture (supported data types) and VRAM size?
- Labels:
  - AI Announcements
  - Direct ML
  - Tips and Tricks
I would recommend using DeepSeek-R1-Distill-Qwen-14B via Ollama instead of LM Studio, if that's not an issue for you.
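Assuming Ollama is installed, pulling and running the 14B distill looks roughly like this. (The `deepseek-r1:14b` tag is Ollama's library name that, at the time of writing, maps to the Qwen-14B distill; check the Ollama library page for the current tags before pulling.)

```shell
# Download the model from the Ollama library
# (deepseek-r1:14b is assumed to map to DeepSeek-R1-Distill-Qwen-14B)
ollama pull deepseek-r1:14b

# Chat with it interactively in the terminal
ollama run deepseek-r1:14b

# Or query it through Ollama's local REST API (default port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:14b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The REST endpoint is handy if you want to drive the model from a script instead of the LM Studio UI.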
Certain architectures won't support Qwen-type models; as far as I know, these need data types that older GPU architectures don't support. When AMD recommends specific models for a GPU, one would assume they have already been tried and tested for best performance.
Hence also the request for further recommendations across model types and GPU variants, since older GPUs still have drivers for DirectML or Vulkan.
The 8B variant, DeepSeek-R1-Distill-Llama-8B, is available via LM Studio and works, but it is obviously not as good.
bump
