MyChatBot

This is a local chatbot stack (Ollama + Open WebUI, wired together with docker-compose) that you can run on your own machine: download a model or two and start chatting.

Setting Up

The only prerequisite is to install Docker and Docker Compose 🐳
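If you're not sure whether both are installed, a quick check from a terminal (standard Docker CLI commands, not specific to this repo):

docker --version
docker compose version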

For AMD GPU

The ollama image uses a dedicated tag specifically for AMD GPUs:

ollama/ollama:rocm
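In docker-compose.yml, the ROCm image also needs the AMD device nodes passed through to the container. A minimal sketch of the relevant part of the ollama service (an assumption, mirroring the official Ollama Docker instructions for AMD; names may differ from this repo's file):

  ollama:
    image: ollama/ollama:rocm
    devices:
      - /dev/kfd   # AMD kernel driver interface used by ROCm
      - /dev/dri   # GPU render nodes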

For CPU only

Use the :latest tag if you are running without a GPU:

ollama/ollama:latest

For NVIDIA GPU

To give the ollama container access to all NVIDIA GPUs, add a device reservation to the ollama service in docker-compose.yml (this requires the NVIDIA Container Toolkit on the host):

deploy:
  resources:
    reservations:
      devices:
        - count: all
          capabilities: [gpu]

Run the following command to get it started:

docker compose up -d
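For reference, a compose file for this stack broadly looks like the sketch below. This is only an illustration of the shape; service names, volume names, ports and image tags are assumptions and may differ from the actual docker-compose.yml in this repo:

services:
  ollama:
    image: ollama/ollama:rocm        # or :latest for CPU only
    devices:
      - /dev/kfd
      - /dev/dri
    volumes:
      - ollama:/root/.ollama         # persist downloaded models

  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point the UI at the ollama service
    ports:
      - "8080:8080"                  # Open WebUI listens on 8080 inside the container
    volumes:
      - open-webui:/app/backend/data # persist chats and settings
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:

Once the containers are up, you can also pull a model from the command line instead of through the UI (the model name below is just an example):

docker compose exec ollama ollama pull llama3.2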

Refer to the official Ollama docs for more info.

Then navigate to http://localhost:8080 (feel free to change the published port for the openwebui service in docker-compose.yml if 8080 is already taken).
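Changing the port only affects the host side of the mapping. For example, to serve the UI on port 3000 instead (a sketch, assuming the service is named openwebui):

  openwebui:
    ports:
      - "3000:8080"   # host port 3000 -> Open WebUI's internal port 8080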

Experimented with CPU only

If you're curious how it runs with only a CPU, you can read about the experience here: https://buymeacoffee.com/qoyyuum/how-i-made-ai-system-admin-assistant

TL;DR Get a GPU
