
Playing locally with DeepSeek

By Michele Zonca

24 March 2025

1 minute to read


Ollama is a great tool to run LLMs locally. I have been using it for a while now, and I love it. It is easy to use, and it works well with most models.

curl -fsSL https://ollama.com/install.sh | sh
ollama pull deepseek-r1
ollama run deepseek-r1

The first line downloads and installs Ollama, the second pulls the DeepSeek model, and the third runs it.
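
To check that the pull worked, Ollama can list every model it has stored locally:

ollama list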

Now I have an interactive shell and an API running on localhost:11434, which I can use to run queries against the model.

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "<insert prompt here>"
}'
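
Since it is just JSON over HTTP, the same call works from any language. Here is a minimal Python sketch, assuming the requests package is installed (the prompt is only an example); setting "stream" to false makes Ollama return the whole answer as a single JSON object instead of streaming it line by line:

# Minimal sketch: query the Ollama API from Python.
# Assumes the requests package is installed; the prompt is just an example.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1",
        "prompt": "Why is the sky blue?",
        "stream": False,  # one JSON object instead of streamed chunks
    },
)
response.raise_for_status()
print(response.json()["response"])  # the model's answer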

This solution is great to get started, but it lacks usability, so let's switch to the llm command-line tool.

First of all, I use venv to create a virtual environment called deepseek for the required packages:

python3 -m venv deepseek

Then, I activate the virtual environment:

source deepseek/bin/activate

And install the required packages:

pip install llm
llm install llm-ollama

Now I can access it from the command line:

llm ollama list-models
deepseek-r1:latest
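
Typing the model name on every invocation gets tedious; llm also supports setting a default model, so -m can be omitted afterwards:

llm models default deepseek-r1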

And start querying it:

llm -m deepseek-r1 "<insert prompt here>"
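
What makes the CLI nicer than raw curl is composability: llm reads standard input, so a file can be piped straight into a prompt, and llm chat opens an interactive session (script.py below is just a placeholder file name):

cat script.py | llm -m deepseek-r1 "explain what this code does"
llm chat -m deepseek-r1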