This is a local chatbot that you can run on your own machine: download a model or two and start chatting.
The only prerequisite is to install Docker and Docker Compose 🐳
The ollama image uses a different tag specifically for AMD GPUs:

```
ollama/ollama:rocm
```

Use the `:latest` tag if you are running without a GPU:

```
ollama/ollama:latest
```
To expose all GPUs to the container, add a `gpus` parameter to the ollama service in `docker-compose.yml` (for NVIDIA GPUs this also requires the NVIDIA Container Toolkit on the host):

```yaml
  ollama:
    ...
    gpus: all
```
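For context, a minimal `docker-compose.yml` for this setup might look roughly like the sketch below. The service names, volume name, and Open WebUI image tag are assumptions for illustration, not the exact file in this repo; adjust them to match your own setup.

```yaml
# Sketch only: service/volume names and the Open WebUI image tag are assumptions.
services:
  ollama:
    image: ollama/ollama:latest        # swap for ollama/ollama:rocm on AMD GPUs
    gpus: all                          # expose GPUs; remove if CPU-only. On older Compose
                                       # versions use deploy.resources.reservations.devices
    volumes:
      - ollama:/root/.ollama           # persist downloaded models across restarts
    ports:
      - "11434:11434"                  # Ollama API

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point the UI at the ollama service
    ports:
      - "8080:8080"                    # host:container; change the host side if 8080 is taken
    depends_on:
      - ollama

volumes:
  ollama:
```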
Run the following command to get it started:

```sh
docker compose up -d
```
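Once the stack is up, you can download a model through the ollama CLI inside the container. The container name `ollama` and the model `llama3` below are assumptions; check the actual name with `docker compose ps` and pick any model you like from the ollama library:

```sh
# Assumes the Ollama container is named "ollama"; verify with `docker compose ps`.
docker exec -it ollama ollama pull llama3

# Optional: chat from the terminal to confirm the model responds.
docker exec -it ollama ollama run llama3
```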
Refer to the official ollama docs for more info
Then navigate to http://localhost:8080 (feel free to change the port for the openwebui service in `docker-compose.yml` if 8080 is already taken).
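Only the host side of the port mapping needs to change. A small sketch, assuming the Open WebUI service is named `open-webui` and listens on port 8080 inside the container (its default):

```yaml
  open-webui:
    ports:
      - "3000:8080"   # host:container; browse to http://localhost:3000 instead
```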
If you're curious about how it works with only a CPU, you can read about the experience here: https://buymeacoffee.com/qoyyuum/how-i-made-ai-system-admin-assistant
TL;DR: Get a GPU.