Jan 29, 2025
Ibiam Wayas
Unlike ChatGPT, which was not released for local use, the now-trending DeepSeek AI can run locally. But there are some prerequisites.
GPU: NVIDIA GPU with at least 12 GB of VRAM for lighter models, or at least 24 GB for heavier models.
RAM: At least 16 GB of system memory (32 GB recommended).
Disk space: Around 500 GB (may vary across models).
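Before installing anything, it can help to check your machine against these numbers. A minimal pre-flight sketch for a Linux shell (these are standard system tools, not part of Ollama; `nvidia-smi` is only available if the NVIDIA driver is installed):

```shell
# Report GPU VRAM, system RAM, and free disk space on the current volume.
if command -v nvidia-smi >/dev/null 2>&1; then
  # Lists each NVIDIA GPU with its total VRAM.
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "nvidia-smi not found: NVIDIA driver missing or no NVIDIA GPU"
fi
free -h | awk '/^Mem:/ {print "System RAM:", $2}'
df -h . | awk 'NR==2 {print "Free disk here:", $4}'
```

On macOS, `free` is not available; `sysctl hw.memsize` reports total memory instead.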
Ollama is simply a lightweight tool to manage and run AI models locally on your machine.
So, download a compatible version of Ollama from the official website and install it on your machine, following the given instructions.
Confirm the installation by opening a new terminal and running the command “ollama --version”. This should return the Ollama version if it installed correctly.
The next thing is to download the DeepSeek R1 model in your preferred size, such as 8b or 70b.
You can easily do that with Ollama by opening your terminal and typing the command: “ollama run deepseek-r1:<MODEL_CODE>” (replace <MODEL_CODE> with your preferred model size).
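If you would rather fetch the weights first and chat later, Ollama also has a `pull` subcommand, and `ollama list` shows what is installed. A short sketch using the 8b tag as an example (tag names come from Ollama's model library):

```shell
# Download the 8b variant only, without starting a chat session.
ollama pull deepseek-r1:8b
# List installed models; deepseek-r1:8b should appear in the output.
ollama list
```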
The command starts downloading the R1 model. Once the download is complete, Ollama automatically opens a console for you to type and send prompts to the model. That is where you can chat with R1 locally.
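Beyond the interactive console, Ollama also serves a local HTTP API (port 11434 by default) while it is running, so you can script prompts from the terminal. A minimal sketch with curl, assuming the 8b variant was the one downloaded:

```shell
# Send a single, non-streaming prompt to the local Ollama server.
curl -s http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```

The response is a JSON object whose `response` field contains the model's reply.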