Ollama, WebUI, Automatic1111 – your own, personal, local AI from scratch
My local toolbox was empty, now it’s full.

Lately I have been writing about Ollama, WebUI and Stable Diffusion on top of Automatic1111. I found myself struggling a little to keep up with all that information about how to run everything in specific environments, so here is an extract of the step-by-step installation. We start with the NVIDIA driver and some basic requirements (replace xxx with a concrete driver version; a way to find the recommended one follows right after these commands):
sudo apt install nvidia-driver-xxx
sudo apt install tmux screen
sudo apt install curl apt-transport-https ca-certificates software-properties-common
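If you are unsure which driver version to put in place of xxx, Ubuntu can recommend one; after installation and a reboot, nvidia-smi should list the card:
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
nvidia-smi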
Next, we go for Docker:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce -y
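A quick smoke test to confirm Docker works:
sudo docker run --rm hello-world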
Now Ollama, from the official binaries instead of a Docker container. This is much easier and does not require the NVIDIA Container Toolkit for GPU acceleration support:
curl -fsSL https://ollama.com/install.sh | sh
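Once the installer finishes, the ollama binary should be on PATH and the service running. A quick check (the model name below is just an example; any small model from the Ollama library will do):
ollama --version
ollama run llama3.2 "Say hello"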
If Ollama should be reachable from a different server, its systemd service file, /etc/systemd/system/ollama.service, needs an extra environment variable so it listens on all interfaces:
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
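Alternatively, instead of editing the unit file in place, the same two lines can go into a systemd drop-in override, which survives package upgrades:
sudo systemctl edit ollama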
Then reload the service definition and restart Ollama:
sudo systemctl daemon-reload
sudo systemctl restart ollama
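To confirm it is reachable, query the HTTP API from the other machine (replace <server-ip> with the Ollama host's address; the /api/tags endpoint lists installed models):
curl http://<server-ip>:11434/api/tags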
To install Open WebUI we just start a new container, passing the Ollama URL as follows:
sudo docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
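With --network=host the container shares the host's network stack, so Open WebUI comes up on its default port 8080. A quick check, then open it in a browser (the first account created becomes the administrator):
curl -I http://localhost:8080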
Automatic1111/Stable Diffusion will be installed natively using Python 3.10, so on Ubuntu 24.04 we need to add a dedicated repository (the deadsnakes PPA) and install that particular Python version.
sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt install python3.10-venv -y
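Before going further it is worth confirming that the right interpreter actually landed:
python3.10 --version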
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
python3.10 -m venv venv
./webui.sh --api --api-log --loglevel DEBUG
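Thanks to the --api flag, Automatic1111 exposes a REST API on its default port 7860 alongside the web UI. For example, listing the available model checkpoints:
curl http://127.0.0.1:7860/sdapi/v1/sd-models
Add --listen to the webui.sh call if the UI should be reachable from other machines as well.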
And basically that’s it. Everything should now be present and running:
- NVIDIA driver
- Ollama server
- Open WebUI in Docker container
- Automatic1111 / Stable Diffusion in Python venv
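A minimal end-to-end sanity check, assuming everything runs on one machine with the default ports (11434 for Ollama, 8080 for Open WebUI, 7860 for Automatic1111):
nvidia-smi --query-gpu=name,memory.total --format=csv
curl -s http://localhost:11434/api/version
curl -s -o /dev/null -w "Open WebUI: %{http_code}\n" http://localhost:8080
curl -s -o /dev/null -w "Automatic1111: %{http_code}\n" http://localhost:7860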
For Ollama the minimum NVIDIA Compute Capability (based on my experiments) is 5.0+, so an NVIDIA 940MX 2GB will work, as will an RTX 3060 12GB, of course. Open WebUI does not impose any GPU requirements of its own. Automatic1111 uses PyTorch:
“The current PyTorch binaries support minimum 3.7 compute capability”
So in theory both Ollama and Automatic1111 should work at a compute capability of around 5.0. On my 940MX both loaded, but the default Stable Diffusion model needs around 4GB of VRAM, so it does not fit in 2GB. In practice the preferred minimum is a Turing GPU with CC 7.5+ (RTX 20xx or newer), since those support 16-bit precision by default and have built-in Tensor Cores.
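Not sure what compute capability your card has? Recent NVIDIA drivers expose it directly:
nvidia-smi --query-gpu=name,compute_cap --format=csv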