The correct way to deploy DeepSeek locally

Many tutorials explain how to deploy DeepSeek locally, but they often gloss over the details, which causes all kinds of problems later. This tutorial walks through the deployment details carefully. Deploy it once and benefit for a long time.

Hardware requirements

Large models need a lot of hardware resources to run. If your computer's configuration is on the low end, you have no network restrictions, and you have no strong privacy or security requirements, local deployment is not recommended: plenty of hosted services offer free quotas, so take advantage of those instead. Before deploying, think carefully about whether you actually need a local deployment.

  1. The CPU should be no weaker than a 7th-generation Intel Core (or an equivalent).
  2. At least 16 GB of RAM.
  3. A discrete graphics card is a must, ideally no weaker than a GTX 1060 and at minimum a GTX 1050. A 1050 handles the 1.5b model fine but stutters on the 7b model.
  4. A solid-state drive is best. The disk space needed depends on the model: the smallest deepseek-r1 model is 1.1 GB and the largest is 404 GB.
  5. This tutorial covers Windows 10 only; Windows 11 should be similar.
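If you are unsure what hardware you have, you can check the installed memory and graphics card from a Command Prompt using standard Windows commands (they only read system information; on a non-English Windows the “Total Physical Memory” label will be localized):

    systeminfo | findstr /C:"Total Physical Memory"
    wmic path win32_VideoController get name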

Deployment steps

  1. Download Ollama, a model-management tool that can download and run DeepSeek. Get the installer from the official site (https://ollama.com), then double-click “OllamaSetup.exe” and install it.
  2. Configure Ollama. Many tutorials skip this step, but if you do, models are installed to the C drive by default; if your C drive is small, its usage bar may turn red in Explorer after installation. It is recommended to set the model installation path to another drive as follows (a command-line alternative is given in the notes after these steps).
    • Right-click “This PC” and select “Properties” to open the settings window.
    • Find “Advanced system settings” on the right and click it to open the “System Properties” window.
    • Click the “Advanced” tab, then click “Environment Variables…” to open the Environment Variables window.
    • Under “System variables”, click “New…” to open the New System Variable window.
    • Enter “OLLAMA_MODELS” as the variable name and the path where you want the models stored as the variable value, for example “D:\ai\ollama\models”.
    • Then click “OK” several times until the “System Properties” window is closed.
  3. If Ollama is already running at this point, restart it so that it picks up the new environment variable.
  4. Press the Win key, type cmd, and press Enter to open a Command Prompt window.
  5. Type echo %OLLAMA_MODELS% and press Enter; if the path you just set is printed, the environment variable was configured successfully.
  6. Visit the deepseek-r1 page in the Ollama model library (https://ollama.com/library/deepseek-r1), look over the available sizes, and pick one that suits your hardware.
  7. If you chose 7b, run ollama run deepseek-r1:7b in the Command Prompt window you just opened; if you chose 1.5b, run ollama run deepseek-r1:1.5b instead. On the first run there is no model locally, so Ollama automatically downloads it to the path configured in the OLLAMA_MODELS environment variable and then starts it, which may take a while. Once it is running, type a question and the model will answer. Enter /? for help and /bye to exit the model. Run ollama by itself to see the commands Ollama supports, and ollama list to see the installed models. Besides deepseek-r1 you can install other models; see the Ollama homepage for what is available.
  8. If you find the Command Prompt window unattractive, you can install Chatbox as a graphical front end. Its configuration is fairly simple, so I won’t go into details; the API note after these steps covers how such clients connect to Ollama.
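A note on step 2: if you prefer the command line, the same system-wide variable can be set with setx instead of clicking through the dialogs. The /M switch writes a system variable and therefore needs an administrator Command Prompt; the path below is only an example and should point wherever you want the models stored:

    setx OLLAMA_MODELS "D:\ai\ollama\models" /M

setx does not update windows that are already open, so open a new Command Prompt before verifying with echo %OLLAMA_MODELS% as in step 5.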
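A note on steps 7 and 8: Ollama also exposes a local HTTP API on port 11434, and this is what graphical clients such as Chatbox talk to. You can confirm the API is up with curl, which ships with Windows 10 (the model tag and prompt below are only examples; use whichever model you installed):

    curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:7b\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}"

If the model is installed and Ollama is running, this returns a JSON object whose response field contains the answer. In Chatbox, selecting Ollama as the model provider and pointing it at http://localhost:11434 should be all the configuration required.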
