How to Run DeepSeek R1 for Free in Visual Studio Code Using Cline or Roo Code
Looking for a free, powerful AI that excels in reasoning? DeepSeek R1 is an excellent option. This open-source model matches, and on some reasoning benchmarks surpasses, well-known names like GPT-4, o1-mini, and Claude 3.5 Sonnet, especially when it comes to logic, mathematics, and code generation. In this article, we’ll show you how to run DeepSeek R1 for free in Visual Studio Code, integrating it as a code assistant using tools like LM Studio, Ollama, and Jan.
Why is DeepSeek R1 Getting So Much Attention?
- Completely Free and Open Source: Unlike many AI models that come with hefty subscription fees, DeepSeek R1 is open-source and available to everyone at no cost. You can even chat with it at DeepSeek Chat.
- Top-Notch Performance: DeepSeek R1 excels in logic, mathematical tasks, and generating code. It offers performance comparable to the latest AI models, making it a valuable tool for developers.
- Multiple Versions for Local Running: DeepSeek R1 is distributed in several sizes, from distilled 1.5B and 7B variants up to a 70B version, so you can pick the model that matches your PC’s hardware.
- Easy Integration with VSCode: With extensions like Cline or Roo Code, you can integrate DeepSeek R1 directly into Visual Studio Code, enabling a seamless coding experience just like using GitHub Copilot.
- No Hidden Costs: If you run it locally, you won’t need to worry about paying for tokens or API calls. Just make sure your system has a decent GPU for better performance.
If you are working with DeepSeek APIs, Apidog is here to make your life easier. It’s an all-in-one API development tool that streamlines the entire process — from design and documentation to testing and debugging.
Key Considerations Before You Start
- Optimize for Your PC’s Power: If you’re working with limited resources, opt for smaller models (1.5B or 7B parameters) or quantized versions to conserve memory.
- RAM Requirements: Use tools like LLM Calc to estimate the minimum RAM you’ll need for the model you choose (a rough rule of thumb is worked through after this list).
- Privacy: Running DeepSeek R1 locally ensures your data stays on your machine and isn’t sent to external servers.
- Cost-Free: Running DeepSeek R1 locally is completely free, but if you prefer to use their API, you’ll need to purchase tokens. These are priced much lower than most competitors.
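If you just want a back-of-envelope estimate rather than a calculator, a common rule of thumb (an approximation, not an exact figure) is that a quantized model’s weights take roughly parameters × bytes-per-weight of memory. A 7B model quantized to 4 bits needs about 7B × 0.5 bytes ≈ 3.5 GB for the weights alone, plus a few extra gigabytes for the context window and runtime overhead, which lines up with the ~8–10 GB recommendation below.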
Choosing the Right Model for Your PC
DeepSeek R1 comes in various model sizes, and which one you should choose depends on your hardware:
1.5B Parameters:
- RAM: ~4 GB
- GPU: Entry-level dedicated (e.g., NVIDIA GTX 1050), integrated graphics, or a modern CPU
- Ideal for: Basic tasks and modest PCs
7B Parameters:
- RAM: ~8–10 GB
- GPU: Dedicated (e.g., NVIDIA GTX 1660 or better)
- Ideal for: Intermediate tasks and moderately powerful PCs
70B Parameters:
- RAM: ~40 GB
- GPU: High-end (e.g., NVIDIA RTX 3090 or higher)
- Ideal for: Complex tasks and high-performance setups
How to Run DeepSeek R1 Locally
1. Using LM Studio:
- Download and install LM Studio from their official site.
- Open LM Studio, go to the Discover tab, and search for “DeepSeek R1”. Choose the version that fits your system.
- For MacBook users with Apple Silicon, use the MLX option for optimized models.
- For Windows or Linux, choose the GGUF option.
- After downloading, go to Local Models, select DeepSeek R1, and hit Load.
- Start the local server in the Developer tab by enabling Start Server, and access it at http://localhost:1234.
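To verify the server is up, you can send a test request from the terminal. LM Studio serves an OpenAI-compatible chat endpoint; the model name below is a placeholder that depends on the exact build you downloaded, so substitute the ID LM Studio displays for your model:

# test the LM Studio server (replace the model ID with the one shown in LM Studio)
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1-distill-qwen-7b", "messages": [{"role": "user", "content": "Write a one-line Python function that reverses a string."}]}'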
2. Using Ollama:
- Download Ollama from the official site.
- Run this command in the terminal:
ollama pull deepseek-r1
- If you want a smaller version, pull a specific tag instead (for example, ollama pull deepseek-r1:1.5b for the 1.5B distill); the full list of tags is in the Ollama library.
- Start the server by running:
ollama serve
- The model will be served at http://localhost:11434.
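You can confirm Ollama is responding with a quick request to its generate endpoint (adjust the model tag if you pulled a smaller version):

# send a one-off prompt to the locally served model
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Explain big-O notation in one sentence.",
  "stream": false
}'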
3. Using Jan:
- Install Jan from the Jan website.
- DeepSeek R1 isn’t available in Jan’s built-in catalog, but you can find a GGUF build on Hugging Face and download it manually.
- On the model’s Hugging Face page, choose Jan as the deployment option, and the model will load automatically in the Jan interface.
- Start the server and access it at http://localhost:1337.
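Jan’s local server also speaks the OpenAI-compatible chat API, so a test request looks much like the LM Studio one. The model ID below is a placeholder; use whatever ID Jan assigns to the model you imported:

# test the Jan server (replace the model ID with the one Jan shows for your import)
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1-distill-qwen-7b", "messages": [{"role": "user", "content": "Hello!"}]}'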
Integrating DeepSeek R1 with Visual Studio Code
Now that DeepSeek R1 is up and running locally, let’s integrate it with Visual Studio Code using either the Cline or Roo Code extension:
1. Install the Extension:
- Open the Extensions tab in VSCode and search for Cline or Roo Code.
- Install the extension you prefer.
2. Configure the Extension:
For LM Studio or Jan:
- Click on the extension in VSCode and open Settings.
- Under API Provider, select LM Studio or Jan.
- In the Base URL field, input the URL of your local server (e.g., http://localhost:1234 for LM Studio).
- The Model ID will auto-populate if only one model is available. Otherwise, manually select the DeepSeek model you downloaded.
For Ollama:
- Click on the extension, go to Settings.
- Select Ollama as the API provider.
- Enter http://localhost:11434 in the Base URL field.
- Select the correct Model ID.
3. Finish Setup:
- Click Done, and you’re all set! Enjoy the full functionality of DeepSeek R1 within your coding environment.
Conclusion
DeepSeek R1 is a powerful and free AI that delivers outstanding performance for developers, especially those who need strong logical reasoning and code generation. By running it locally with LM Studio, Ollama, or Jan and integrating it into Visual Studio Code through extensions like Cline or Roo Code, you can leverage this open-source model without spending a dime. Simply choose the right model for your PC’s capabilities, and start using DeepSeek R1 today to enhance your development experience!