How to Install DeepSeek Locally with Ollama LLM in Ubuntu 24.04

Running large language models like DeepSeek locally on your machine is a powerful way to explore AI capabilities without relying on cloud services.

In this guide, we’ll walk you through installing DeepSeek using Ollama on Ubuntu 24.04 and setting up a Web UI for an interactive and user-friendly experience.

What Are DeepSeek and Ollama?

  • DeepSeek: An advanced AI model designed for natural language processing tasks like answering questions, generating text, and more.
  • Ollama: A platform that simplifies running large language models locally by providing tools to manage and interact with models like DeepSeek.
  • Web UI: A graphical interface that allows you to interact with DeepSeek through your browser, making it more accessible and user-friendly.

Prerequisites

Before we begin, make sure you have the following:

  • Ubuntu 24.04 installed on your machine.
  • A stable internet connection.
  • At least 8GB of RAM (16GB or more is recommended for smoother performance).
  • Basic familiarity with the terminal.

Step 1: Install Python and Git

Before installing anything, it’s a good idea to update your system to ensure all existing packages are up to date.

sudo apt update && sudo apt upgrade -y

Ubuntu 24.04 ships with Python 3 pre-installed, but it’s worth confirming that you have the required version (Python 3.8 or higher).

sudo apt install python3
python3 --version

pip is the package manager for Python, and it’s required later in this guide to install Open WebUI and its dependencies.

sudo apt install python3-pip
pip3 --version

Git is essential for cloning repositories from GitHub.

sudo apt install git
git --version

Step 2: Install Ollama for DeepSeek

Now that Python and Git are installed, you’re ready to install Ollama to manage DeepSeek.

curl -fsSL https://ollama.com/install.sh | sh
ollama --version

Next, start the Ollama service and enable it to launch automatically when your system boots.

sudo systemctl start ollama
sudo systemctl enable ollama
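
To confirm that the service came up correctly, check its status and, if you like, probe the local API (Ollama listens on 127.0.0.1:11434 by default, and the curl call should print a short “Ollama is running” message when the API is reachable):

sudo systemctl status ollama
curl http://127.0.0.1:11434/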

Now that Ollama is installed, we can proceed with installing DeepSeek.

Step 3: Download and Run DeepSeek Model

With Ollama set up, use the following command to download and start the DeepSeek-R1 7B model.

ollama run deepseek-r1:7b

This may take a few minutes depending on your internet speed, as the model is several gigabytes in size.

Install DeepSeek Model Locally
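
The ollama run command downloads the model on first use and then drops you into an interactive chat prompt. If you prefer, you can fetch the model without starting a session, or pass a one-off prompt directly on the command line (the question below is just an example):

ollama pull deepseek-r1:7b
ollama run deepseek-r1:7b "Explain what a systemd service is in two sentences."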

Once the download is complete, you can verify that the model is available by running:

ollama list

You should see deepseek-r1:7b listed as one of the available models.

List DeepSeek Model Locally
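
Ollama also exposes a local HTTP API on port 11434, which is what the Web UI in the next step talks to. You can call it directly from the terminal as well; here is a minimal curl sketch (the prompt text is only illustrative):

curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:7b", "prompt": "Why is the sky blue?", "stream": false}'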

Step 4: Run DeepSeek in a Web UI

While Ollama allows you to interact with DeepSeek via the command line, you might prefer a more user-friendly web interface. For this, we’ll use Open WebUI, a simple web-based interface for interacting with Ollama models.

First, create a virtual environment that isolates your Python dependencies from the system-wide Python installation.

sudo apt install python3-venv
python3 -m venv ~/open-webui-venv
source ~/open-webui-venv/bin/activate

Now that your virtual environment is active, you can install Open WebUI using pip.

pip install open-webui
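
The install pulls in a fairly large set of Python dependencies, so it may take a few minutes. Note that the open-webui command is only available while this virtual environment is active, so in a new terminal you’ll need to activate it again before upgrading or running the server:

source ~/open-webui-venv/bin/activate
pip install --upgrade open-webui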

Once installed, start the server with:

open-webui serve

Open your web browser and navigate to http://localhost:8080 – you should see the Open WebUI interface, which will prompt you to create an admin account on first launch.
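
If port 8080 is already in use, or you want to reach the interface from another machine on your network, the serve command accepts host and port options (example values shown below; run open-webui serve --help to confirm the flags available in your version):

open-webui serve --host 0.0.0.0 --port 3000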

Open WebUI Admin Account

In the Web UI, select the deepseek-r1:7b model from the dropdown menu and start interacting with it. You can ask questions, generate text, or perform other tasks supported by DeepSeek.

Running DeepSeek on Ubuntu

You should now see a chat interface where you can interact with DeepSeek just like ChatGPT.

Step 5: Enable Open-WebUI on System Boot

To make Open-WebUI start on boot, you can create a systemd service that automatically starts the Open-WebUI server when your system boots.

sudo nano /etc/systemd/system/open-webui.service

Add the following content to the file:

[Unit]
Description=Open WebUI Service
After=network.target

[Service]
User=your_username
WorkingDirectory=/home/your_username/open-webui-venv
ExecStart=/home/your_username/open-webui-venv/bin/open-webui serve
Restart=always
Environment="PATH=/home/your_username/open-webui-venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

[Install]
WantedBy=multi-user.target

Replace your_username with your actual username.
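
If you’re unsure of your exact username, whoami prints it, and you can substitute it into the unit file in one step (a convenience sketch, assuming the file was saved exactly as shown above):

sudo sed -i "s/your_username/$(whoami)/g" /etc/systemd/system/open-webui.service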

Now reload the systemd daemon to recognize the new service:

sudo systemctl daemon-reload

Finally, enable the service so it starts on boot, and start it now:

sudo systemctl enable open-webui.service
sudo systemctl start open-webui.service
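
Alternatively, both steps can be combined into a single command:

sudo systemctl enable --now open-webui.service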

Check the status of the service to ensure it’s running correctly:

sudo systemctl status open-webui.service
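
If the service fails to start, the systemd journal usually shows why (for example, a wrong username or path in the ExecStart line):

sudo journalctl -u open-webui.service -f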

Running DeepSeek on Cloud Platforms

If you prefer to run DeepSeek on the cloud for better scalability, performance, or ease of use, here are some excellent cloud solutions:

  • Linode – It provides affordable and high-performance cloud hosting, where you can deploy an Ubuntu instance and install DeepSeek using Ollama for a seamless experience.
  • Google Cloud Platform (GCP) – It offers powerful virtual machines (VMs) with GPU support, making it ideal for running large language models like DeepSeek.

Conclusion

You’ve successfully installed Ollama and DeepSeek on Ubuntu 24.04. You can now run DeepSeek in the terminal or use a Web UI for a better experience.

Comments

    • @Wonder,

      You can switch to the larger DeepSeek models using the following commands:

      For DeepSeek-R1-Distill-Qwen-32B:

      ollama run deepseek-r1:32b
      

      For DeepSeek-R1-Distill-Llama-70B:

      ollama run deepseek-r1:70b
      

      Let me know if you need further assistance!

  1. Thanks for this, was super easy to get it running.
    Curious, is there a way to have the Open-WebUI start on boot as well? Ollama is running but would be nice if it all auto-started.

    • @White,

      Glad you found it easy to set up!

      I’ve now added instructions on how to enable Open-WebUI to start on boot. Check the updated article, and let me know if you need any help!

    • @Matias,

      Installing DeepSeek locally gives you full control over the model without relying on an internet connection. While OpenWebUI allows access to DeepSeek online, a local installation ensures better privacy, faster responses, and no dependency on external servers.

      It’s especially useful for offline use or handling sensitive data securely. Plus, running it locally allows for custom configurations and optimizations that aren’t possible with cloud-based solutions.

      Hope this helps!

      • What I meant was that it didn’t work locally. If I disconnected from the internet deepseek didn’t work. I checked the whole installation.

        • @Matias,

          Thanks for your feedback!

          DeepSeek should work fully offline if all dependencies and model weights are downloaded beforehand. If it stops working after disconnecting from the internet, it’s possible that it’s trying to fetch missing resources or verify something online.

          Could you check the logs to see if it’s making network requests? Also, try running it in an isolated environment to confirm if any external dependencies are causing the issue.

          Let me know what you find, and I’ll update the guide with a fix!

  2. How can I run WebUI in the background? When I run the command open-webui serve, it runs in the console and I can’t do anything else.

    • @Nasir,

      I’ve added instructions on how to enable OpenWebUI on boot so it can run in the background without keeping the terminal open.

  3. I was installing on Ubuntu 24.04.

    It went well until the line:

    ollama run deepseek-r1:7b
    

    After downloading it gave the error:

    pulling manifest 
    pulling 96c415656d37... 100% ▕▏ 4.7 GB                         
    Error: Post "http://127.0.0.1:11434/api/show": read tcp 127.0.0.1:48218->127.0.0.1:11434: read: connection reset by peer
    
    • @Inbert,

      Thank you for sharing your experience! It sounds like you encountered an issue with the Ollama service while trying to run the DeepSeek model.

      The error you’re seeing (connection reset by peer) typically indicates that the Ollama service might not be running or encountered an issue during the process.

      First, check whether Ollama is running, and if it isn’t, start it with:

      sudo systemctl status ollama
      sudo systemctl start ollama
      

      Sometimes, restarting the service can resolve the issue:

      sudo systemctl restart ollama
      

      After restarting Ollama, try running the model again:

      ollama run deepseek-r1:7b
      

      Also ensure that no other service is using port 11434, which Ollama relies on; you can check this with:

      sudo lsof -i :11434
      

      If the issue persists, it might be worth reinstalling Ollama or checking the logs for more details:

      journalctl -u ollama.service
      

      Let me know if this helps or if you need further assistance!

      • ollama run deepseek-r1:7b
        
        pulling manifest 
        pulling 96c415656d37... 100% ▕████████████████▏ 4.7 GB                         
        pulling 369ca498f347... 100% ▕████████████████▏  387 B                         
        Error: Post "http://127.0.0.1:11434/api/show": dial tcp 127.0.0.1:11434: connect: connection refused
        

        I have tried the steps you have told for @Inbert but it won’t work for me.

        What to do??

        • @Kabelang,

          Make sure that your firewall or security software is not blocking connections to 127.0.0.1:11434. If possible, temporarily disable the firewall or add an exception for that port to test.
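
          For example, if you’re using UFW (Ubuntu’s default firewall front-end), you can check its status and add an exception for the port like this:

          sudo ufw status
          sudo ufw allow 11434/tcp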

    • @Brvnbld,

      The disk space required depends on the specific Ollama LLM model you choose to install.

      Here’s a quick breakdown:

      Smaller models (e.g., 7B parameters): Around 5-10 GB.

      Larger models (e.g., 13B or 70B parameters): Can take 20-50 GB or more.

      DeepSeek software and dependencies: About 1-2 GB.

      Ubuntu 24.04 and system tools: At least 10-20 GB.

      So, in total, you’d need 20-30 GB for smaller setups and 50-100 GB if you’re working with larger models. Always make sure to have some extra space for smooth operation!
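
      If you want to check how much free space you currently have before pulling a model, a quick way is:

      df -h ~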

      • DeepSeek with Ollama on Ubuntu 24 takes 4.7 GB. Open WebUI takes 7.8 GB (of Python packages); I installed Open WebUI in a virtual environment on an external SSD drive.

        • @Matias,

          Thanks for sharing your experience!

          Yes, DeepSeek with Ollama has a relatively small footprint at 4.7 GB, but Open WebUI does take up more space due to its dependencies.

          Installing it in a virtual environment on an external SSD is a smart move to save internal disk space. If you notice any performance differences running it from an external drive, feel free to share your insights!

    • @Todd,

      Yes, it’s possible to install Open WebUI without the NVIDIA/CUDA dependencies if you’re not using a GPU.

      Here’s how you can do it:

      When installing Open WebUI via pip, you can skip dependency resolution with the --no-deps flag (note that this skips all declared dependencies, not just the nvidia-* and cuda packages, so you may need to install other required packages manually).

      pip install --no-deps open-webui
      
  4. Do NOT trust this article, DeepSeek or openweb-ui needs some sort of internet access. I had mine set up in proxmox and set up firewall rules to only allow local network access. Openweb-ui failed to properly load, either failed to interact with the DS or DS failed to load some external resource.

    I disabled the firewall and immediately the page loaded. I will be filing a cybersec bug report in the relevant repositories. Use at your own personal risk!!!

    • @Watson,

      We appreciate your feedback!

      However, we clearly mentioned in the prerequisites that a stable internet connection is required. Did you get a chance to read that? If you believe there’s a security concern beyond that, filing a report is a good step.

      Let us know if you need any clarification!

    • Dear Mr. D. Watson,

      I am not part of the team that wrote the article but simply a visitor looking for a way to install DeepSeek locally in a container on Proxmox. As the author’s comment points out, it seems that you did not read the article. I believe you are only commenting to criticize it negatively.

      I am extremely surprised to read that you do not trust DeepSeek or Open WebUI and that you attempted to block the requests with your firewall without understanding how a network or a system works. It’s Ollama that needs internet access to install DeepSeek.

      If you had read the article and understood what you were doing, you would know that Ollama is used to install the model, while Open WebUI provides local access to it. The service running in the background is Ollama, and yes, you will need internet access to update it.

      • @theheister,

        Thank you for your detailed comment and for clarifying the process for others. You’re absolutely correct – Ollama requires internet access to download and install the DeepSeek model, while Open WebUI provides a local interface to interact with it.

