Get Ready to Build Your Own AI Playground!
Are you excited about the possibilities of Large Language Models (LLMs) like Llama 2, but intimidated by the setup? Do you dream of running these powerful AI tools locally, without relying on cloud services? You’ve come to the right place! This comprehensive guide will walk you through installing and configuring everything you need – Docker, Ollama, Open-WebUI, and n8n – to create your very own AI playground. We’ll break down each step with clear instructions, making it accessible even if you’re new to the world of containers and AI.
Why Docker? Why Ollama, Open-WebUI & n8n?
Before we dive in, let’s quickly understand why we’re using these tools:
- Docker: Think of Docker as a lightweight virtual machine. It packages applications and their dependencies into isolated “containers,” ensuring they run consistently across different environments. This eliminates the “it works on my machine” problem and simplifies deployment.
- Ollama: Ollama makes it incredibly easy to run LLMs locally. It handles the complexities of downloading, managing, and running these models, allowing you to focus on using them.
- Open-WebUI: This provides a beautiful and user-friendly web interface for interacting with your LLMs running through Ollama. No more command-line interfaces – just a clean, intuitive experience.
- n8n: A powerful, no-code workflow automation tool. With n8n, you can connect Ollama to other services, creating automated AI-powered workflows. Imagine automatically summarizing articles, generating social media posts, or building custom chatbots!
1. DOCKER INSTALLATION: Laying the Foundation
First things first, we need to get Docker up and running on your system. These instructions are tailored for Ubuntu-based systems, but the general principles apply to other Linux distributions as well.
Step One: Uninstall Conflicting Packages
Before installing Docker, it’s crucial to remove any existing packages that might conflict with the installation. Run the following command in your terminal:
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
This command systematically removes any potentially conflicting packages, ensuring a smooth Docker installation.
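If you'd like to double-check that nothing conflicting is left behind (an optional sanity check), you can list any remaining container-related packages:
dpkg -l | grep -Ei 'docker|containerd|runc|podman'
No output means your system is clean and ready for the installation.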
Step Two: Set up Docker’s Apt Repository
Now, we need to add Docker’s official repository to your system’s package manager (apt). This allows you to easily install and update Docker.
Add Docker’s Official GPG Key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
These commands download and install Docker’s GPG key, which verifies the authenticity of the packages you’ll be installing.
Add the Repository to Apt Sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
This command adds the Docker repository to your system’s apt sources, allowing you to install Docker packages.
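As an optional check that apt now sees Docker's repository, you can ask where the docker-ce package would come from:
apt-cache policy docker-ce
The Candidate version should be served from download.docker.com.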
Step Three: Install Docker
Finally, install Docker using apt:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
This command downloads and installs the Docker Engine, the Docker CLI, containerd (the container runtime), and the Buildx and Compose plugins.
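Before moving on, you can optionally confirm the Docker daemon is up and note the installed version:
docker --version
sudo systemctl status docker --no-pager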
Step Four: Create the Docker Group and Add Your User
To avoid having to use sudo every time you run a Docker command, we’ll add your user to the docker group.
Create the Docker Group:
sudo groupadd docker
Add Your User to the Docker Group:
sudo usermod -aG docker $USER
Activate the Changes to Groups:
newgrp docker
This command applies the group changes to your current session. You may need to log out and log back in for the changes to take effect permanently.
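To confirm the group change took effect in your current session (an optional check), list the groups your user belongs to:
id -nG
You should see docker somewhere in the output.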
Step Five: Verify the Installation
Let’s verify that Docker is installed correctly and that you can run Docker commands without sudo.
docker run hello-world
This command downloads and runs a simple “hello-world” container. If everything is set up correctly, you should see a message confirming the successful execution.
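On Ubuntu, Docker is usually set to start on boot automatically, but if you want to make sure, these two commands (from Docker's official post-install steps) enable it explicitly:
sudo systemctl enable docker.service
sudo systemctl enable containerd.service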
2. DOCKER OLLAMA INSTALLATION: Bringing LLMs to Life
Now that Docker is installed, let’s install Ollama. We’ll cover both CPU-only and GPU-accelerated installations.
CPU Only (If You Have No GPU)
If you don’t have a compatible GPU, you can run Ollama in CPU-only mode:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
This command downloads the Ollama image, runs it in detached mode (-d), maps a volume to persist your data (-v ollama:/root/.ollama), and exposes port 11434, which Ollama uses for communication.
With GPU (Only If You Have an NVIDIA GPU)
If you have an NVIDIA GPU, you can significantly accelerate Ollama’s performance. Here’s how to set it up:
Configure the Repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
| sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
| sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
| sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
These commands add the NVIDIA Container Toolkit repository to your system.
Install the NVIDIA Container Toolkit Packages:
sudo apt-get install -y nvidia-container-toolkit
This command installs the necessary packages for GPU support in Docker.
Configure Docker to Use Nvidia Driver:
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
The nvidia-ctk command registers the NVIDIA runtime with Docker, and restarting the Docker service applies the change.
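If you'd like to verify the configuration (optional), nvidia-ctk writes its changes to /etc/docker/daemon.json, and docker info should now list an nvidia runtime:
cat /etc/docker/daemon.json
docker info | grep -i runtimes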
Start the Container:
docker run -d --gpus all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
This command is similar to the CPU-only version, but it includes the --gpus all flag, which tells Docker to make all available GPUs accessible to the Ollama container.
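To confirm the container can actually see your GPU (an optional check; the NVIDIA runtime injects the driver utilities into the container), run nvidia-smi inside it:
docker exec -it ollama nvidia-smi
If your GPU shows up in the table, GPU acceleration is working.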
Find the Ollama Docker Container IP:
To connect to Ollama from Open-WebUI, you’ll need to know the container’s IP address.
docker container list
This command lists all running Docker containers, including Ollama. Note the CONTAINER ID of the Ollama container.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' [container ID]
Replace [container ID] with the actual container ID you noted earlier. This command will output the container’s IP address (e.g., 172.17.0.2).
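One caveat: container IP addresses on Docker's default bridge network can change when containers restart. If you'd rather not chase IPs, an optional alternative is a user-defined network (the name ai-net below is just an example), on which containers can reach each other by name:
docker network create ai-net
docker network connect ai-net ollama
Connect your other containers (such as open-webui, once it's running) the same way, and they can reach Ollama at http://ollama:11434 instead of a hard-coded IP.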
Test Ollama:
Open your web browser and navigate to http://[container IP]:11434. If everything is working, you should see the message “Ollama is running”.
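You can run the same check from the terminal; the root endpoint answers with a plain-text status, and /api/tags lists the models you've pulled so far:
curl http://[container IP]:11434
curl http://[container IP]:11434/api/tags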
3. DOCKER OPEN-WEBUI INSTALLATION: A User-Friendly Interface
Now that Ollama is running, let’s install Open-WebUI to provide a graphical interface for interacting with your LLMs.
Open-WebUI without GPU (CPU ONLY)
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
This command downloads the Open-WebUI image, runs it in detached mode, maps a volume for persistent data storage, and exposes port 3000, which you’ll use to access the web interface.
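If you already know your Ollama address, you can optionally pass it at startup through Open-WebUI's OLLAMA_BASE_URL environment variable instead of entering it in the UI later (replace the example IP with your own):
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://172.17.0.2:11434 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main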
Open-WebUI with GPU
If you have a GPU, you can use the CUDA-enabled version of Open-WebUI for improved performance:
docker run -d -p 3000:8080 --gpus all -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:cuda
This command is similar to the CPU-only version, but it includes the --gpus all flag and uses the ghcr.io/open-webui/open-webui:cuda image.
Update Open-WebUI:
To ensure you’re running the latest version of Open-WebUI, pull the latest image. Note that pulling alone doesn’t update a container that’s already running; you also need to recreate it, as shown below:
docker pull ghcr.io/open-webui/open-webui:main
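A minimal update sequence looks like this (shown for the CPU image; use the :cuda tag and --gpus all if you installed the GPU variant). Your chats and settings live in the open-webui volume, so they survive the recreation:
docker stop open-webui
docker rm open-webui
docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main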
Configure Open-WebUI:
Open your web browser and navigate to http://localhost:3000. In the connection settings, enter the Ollama container’s IP address and port (e.g., 172.17.0.2:11434).
4. DOCKER n8n INSTALLATION: Automate Your AI Workflows
Finally, let’s install n8n to automate your AI workflows.
docker volume create n8n_data
docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n
This command creates a Docker volume for n8n’s data, runs the n8n container interactively (the --rm flag removes the container when it exits, though your data persists in the volume), maps port 5678, and mounts the volume for persistent storage.
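The -it --rm combination is handy for a first test, but the container goes away as soon as you stop it. For a long-running setup, a detached variant (same image and volume, just running in the background) looks like this:
docker run -d --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n
Then open http://localhost:5678 in your browser to finish the n8n setup.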
Setup Ollama Credentials in n8n:
In n8n, you’ll need to configure a connection to Ollama. Use the Ollama container’s IP address and port (e.g., 172.17.0.2:11434) as the connection details.
Congratulations!
You’ve successfully installed Docker, Ollama, Open-WebUI, and n8n. You’re now ready to explore the exciting world of LLMs and AI automation! Experiment with different models, build custom workflows, and unleash your creativity. The possibilities are endless!
PS! Before I forget: pull the following Ollama models into your Ollama container. (Note that ollama pull only downloads a model, while ollama run downloads it and then drops you into an interactive chat session, so pull is the right verb for stocking up.)
docker exec -it ollama ollama pull olmo2:13b
docker exec -it ollama ollama pull gemma3:12b
docker exec -it ollama ollama pull tulu3:latest
docker exec -it ollama ollama pull olmo2:latest
docker exec -it ollama ollama pull gemma3:latest
docker exec -it ollama ollama pull llama2-uncensored:latest
docker exec -it ollama ollama pull mistral-nemo:latest
docker exec -it ollama ollama pull llama3.2-vision:latest
docker exec -it ollama ollama pull codellama:13b
docker exec -it ollama ollama pull codellama:latest
docker exec -it ollama ollama pull qwen2.5-coder:14b
docker exec -it ollama ollama pull qwen2.5-coder:latest
docker exec -it ollama ollama pull qwen2.5:14b
docker exec -it ollama ollama pull qwen2.5:latest
docker exec -it ollama ollama pull mistral:latest
docker exec -it ollama ollama pull llama3.2:latest
docker exec -it ollama ollama pull deepseek-coder:6.7b
docker exec -it ollama ollama pull deepseek-r1:14b
docker exec -it ollama ollama pull deepseek-r1:8b
docker exec -it ollama ollama pull deepseek-r1:latest
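If you'd rather not paste twenty commands, a small shell loop (functionally identical, just less typing) handles the whole list, and ollama list confirms what landed:
for model in olmo2:13b gemma3:12b tulu3:latest olmo2:latest gemma3:latest \
  llama2-uncensored:latest mistral-nemo:latest llama3.2-vision:latest \
  codellama:13b codellama:latest qwen2.5-coder:14b qwen2.5-coder:latest \
  qwen2.5:14b qwen2.5:latest mistral:latest llama3.2:latest \
  deepseek-coder:6.7b deepseek-r1:14b deepseek-r1:8b deepseek-r1:latest; do
  docker exec ollama ollama pull "$model"
done
docker exec ollama ollama list
Fair warning: this is a lot of models, so expect the downloads to take a while and to use a substantial amount of disk space. Feel free to trim the list to the ones you actually plan to use.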