Setting up OpenWebUI and Ollama using Docker
Prerequisites
- A system with Docker installed
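You can confirm Docker is available with a quick version check:
docker --version
If this prints a version string, you are ready to proceed.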
Step 1: Install Ollama
1. Open a terminal window.
2. Run the following command to install Ollama:
curl -fsSL https://ollama.ai/install.sh | sh
3. Verify the installation by running:
ollama --version
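If a version string prints, the install succeeded. On Linux, the installer also starts Ollama as a background service listening on port 11434 by default, which you can sanity-check with:
curl http://localhost:11434
It should reply with a short "Ollama is running" message.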
Step 2: Pull a Language Model
Pull a language model (e.g., Llama 3) using Ollama:
ollama pull llama3
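Before wiring up the UI, you can confirm the model downloaded and responds. This is a quick smoke test; the prompt text is arbitrary:
ollama list
ollama run llama3 "Say hello in one sentence."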
Step 3: Set Up OpenWebUI
1. Open a new terminal window.
2. Run the OpenWebUI Docker container:
docker run -d --name openwebui \
  -p 3000:8080 \
  -e OLLAMA_API_BASE_URL=http://host.docker.internal:11434/api \
  --add-host host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:main
This command does the following:
1. Runs the container in detached mode (`-d`)
2. Names the container "openwebui" (`--name openwebui`)
3. Maps port 3000 on the host to port 8080 in the container (`-p 3000:8080`)
4. Points OpenWebUI at the Ollama API via an environment variable (`-e OLLAMA_API_BASE_URL=http://host.docker.internal:11434/api`)
5. Adds a host entry so the container can reach the host machine (`--add-host host.docker.internal:host-gateway`)
6. Uses the OpenWebUI Docker image (`ghcr.io/open-webui/open-webui:main`)
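Before opening the browser, you can verify that the container started cleanly:
docker ps --filter name=openwebui
docker logs openwebui
The first command should show the container as "Up"; the logs should end with the web server starting up inside the container.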
Step 4: Access OpenWebUI
1. Open a web browser and navigate to:
http://localhost:3000
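If you are working on a headless server, you can check the same endpoint from the command line instead (port 3000 is the host port mapped in Step 3):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
A 200 response means the UI is being served.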
Step 5: Configure OpenWebUI
1. In the OpenWebUI interface, create an account. The first account registered is granted administrator privileges.
2. Log in with the account you just created.
Step 6: Test the Setup
1. Start a new chat in OpenWebUI.
2. Select the language model you pulled earlier (e.g., Llama 3).
3. Send a test message to verify that everything is working correctly.
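If the model does not appear in the dropdown or never responds, it helps to test Ollama directly, bypassing OpenWebUI. Ollama exposes a generate endpoint on port 11434; the model name must match what you pulled in Step 2:
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
If this returns a JSON response but the UI still shows no models, recheck the OLLAMA_API_BASE_URL value and the --add-host entry from Step 3.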
Additional Docker Commands
- To stop the OpenWebUI container:
docker stop openwebui
- To start the OpenWebUI container again:
docker start openwebui
- To remove the OpenWebUI container (stop it first, or add `-f` to force removal):
docker rm openwebui
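Note that removing the container also discards any chats and accounts stored inside it. To keep data across upgrades, you can add a named volume to the run command from Step 3; the /app/backend/data path below is the mount point used in the project's own Docker examples:
docker run -d --name openwebui \
  -p 3000:8080 \
  -e OLLAMA_API_BASE_URL=http://host.docker.internal:11434/api \
  --add-host host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
To update, pull the latest image (docker pull ghcr.io/open-webui/open-webui:main), then stop, remove, and re-run the container; with the volume in place, your data survives the recreation.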