4 Steps to Install and Run DeepSeek R1 Locally

Jan 29, 2025

Ibiam Wayas

Unlike ChatGPT, whose models have never been open for local use, the now-trending DeepSeek R1 can run locally on your own machine. But there are a few prerequisites.

GPU: NVIDIA GPU with at least 12GB of VRAM for lighter models, or at least 24GB for heavier ones.
RAM: At least 16GB of system memory (32GB recommended).
Disk Space: Around 500GB of free storage (may vary across models).
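Not sure whether your machine qualifies? You can check from a terminal. A quick sketch, assuming an NVIDIA GPU with current drivers and a Linux shell:

nvidia-smi --query-gpu=name,memory.total --format=csv   # GPU model and VRAM
free -h                                                 # system RAM
df -h .                                                 # free disk space on the current drive

On Windows, nvidia-smi works the same from PowerShell; RAM and disk figures are in Task Manager and File Explorer.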

Ollama is a lightweight tool that makes it easy to download, manage, and run AI models locally on your machine.

Step 1: Install Ollama

Download a compatible version of Ollama for your operating system from the official website (ollama.com) and install it, following the on-screen instructions.
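For example, on Linux the official website provides a one-line install script; on macOS and Windows you download a regular installer instead:

curl -fsSL https://ollama.com/install.sh | sh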

Step 2: Verify Ollama Installation

Confirm the installation by opening a new terminal and running the command "ollama --version". If Ollama is installed correctly, this should print the installed version.
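For example (the version number below is only illustrative; yours will differ):

ollama --version
# ollama version is 0.5.7

If the command is not found, open a fresh terminal or log out and back in so the installer's PATH changes take effect.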

Step 3: Deploy DeepSeek R1 Model

The next step is to download the DeepSeek R1 model in your preferred size, such as 8b or 70b.

You can do that with Ollama by opening your terminal and typing the command "ollama run deepseek-r1:<MODEL_CODE>", replacing <MODEL_CODE> with your preferred model size.
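For example, to fetch and run the 8B variant, which fits comfortably on a 12GB GPU (Ollama's model page lists sizes such as 1.5b, 7b, 8b, 14b, 32b, and 70b):

ollama run deepseek-r1:8b

If you prefer to download the model without starting a chat session yet, "ollama pull deepseek-r1:8b" fetches it on its own.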

Step 4: Start Running DeepSeek R1

The run command first shows a progress prompt while it downloads the R1 model. Once the download is complete, Ollama automatically opens an interactive console where you can type and send prompts to the model. That is where you chat with R1 locally.
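Beyond the interactive console (type a prompt and press Enter; "/bye" ends the session), Ollama also serves a local REST API, by default on port 11434. A minimal sketch, assuming the 8b variant from the previous step:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

The response is a JSON object whose "response" field holds the model's answer.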

Next Up

Is DeepSeek Better Than ChatGPT?