Home
Welcome to the homelab-docker wiki!
sudo apt install git
git config --global user.name "your name"
git config --global user.email "[email protected]"
cd /opt/docker
git clone https://github.com/jgwehr/homelab-docker.git docker
Stores the local contents in a directory named docker
- Instructions for Ubuntu 20.04 LTS
- Assign the user to the Docker group:
sudo usermod -aG docker ${USER}
- Start Docker and containerd on boot:
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
- Instructions for Ubuntu 20.04 LTS
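If you want a shortcut instead of following the linked instructions step by step, Docker's convenience script installs the engine on a stock Ubuntu host (a sketch; the official docs remain the authoritative route):
```bash
# install Docker Engine via the convenience script, then verify the daemon works
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo docker run hello-world
```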
You can, of course, use your own structure. The instructions below follow this repo's opinionated layout.
cd /srv
mkdir -p {docker/config,cache,logs}
cd /opt/docker
git clone https://github.com/jgwehr/homelab-docker.git homelab
cd /mnt/storage #or otherwise /data
mkdir -p db
mkdir -p downloads/{audiobooks,music,podcasts,movies,tv}
mkdir -p media/{audiobooks,music,pictures,podcasts,movies,tv}
mkdir -p staticfiles
sudo chown -R $USER:$USER /mnt/storage
sudo chmod -R a=,a+rX,u+w,g+w /mnt/storage
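For reference, that chmod mask clears all permissions, re-grants read (plus traverse on directories) to everyone, and adds write for the owner and group, so directories end up as 775 and files as 664. A quick sanity check on the paths created above:
```bash
# directories should report 775 after the chmod above; files inside should report 664
stat -c '%a %n' /mnt/storage /mnt/storage/media /mnt/storage/db
```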
- Execute `id` to learn your UID (user ID) and GID (group ID)
- Execute `cd ~ ; pwd` to learn your user directory
- Steps below TBD
- Set `PUID=` to the UID value from the `id` output above
- Set `PGID=` to the GID value from the `id` output above
- Set `TZ=` to your timezone's tz database name (e.g. America/New_York)
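Put together, the relevant .env lines end up looking something like this; the values below are examples (a first user is typically UID/GID 1000), so use whatever `id` and your locale actually give you:
```bash
# example .env values -- substitute your own id output and timezone
PUID=1000
PGID=1000
TZ=America/New_York   # tz database name; `timedatectl` shows the host's current setting
```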
- Execute `sudo useradd jellyfin`
- Execute `id jellyfin` to learn its new uid and gid. Alternatively, assign these yourself with `useradd -u #### jellyfin`
- The uid and gid need to be provided to the .env file as UID_JELLYFIN and GID_JELLYFIN
- Execute `sudo usermod -aG render jellyfin` to enable access to video rendering devices. The gid of this "render" group should be provided to the .env file as GID_HARDWAREACC (see the sketch below)
- Within Jellyfin > Admin > Dashboard > Playback, you should then enable "Video Acceleration API (VAAPI)" for Transcoding
- I've had success checking "H264, HEVC, VC1" with an i7-2600. Better processors or GPUs are outside the scope of this wiki, since I don't have them.
- VAAPI support can be understood here: https://01.org/linuxmedia/vaapi
- For newer CPUs, it's likely that Intel QuickSync is a better type of acceleration.
- Verify that ffmpeg is present inside the container:
docker exec jellyfin apt list | grep ffmpeg
- Watch GPU utilization on the host while transcoding (from the intel-gpu-tools package):
intel_gpu_top
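Tying the Jellyfin pieces together, the id lookups and the corresponding .env entries look roughly like this (the uid/gid values are examples; use whatever your host reports):
```bash
id jellyfin                           # e.g. uid=1001(jellyfin) gid=1001(jellyfin)
getent group render | cut -d: -f3     # e.g. 109 -- the gid of the "render" group

# corresponding .env entries (variable names from this repo; values are examples)
# UID_JELLYFIN=1001
# GID_JELLYFIN=1001
# GID_HARDWAREACC=109
```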
see https://github.com/jgwehr/homelab-docker/wiki/*Arr-Configuration
CrowdSec needs read access to the host's system-level logs (for example, to detect SSH attacks), but these logs are not world-readable. As of yet, I don't have a good grasp on the groups/users involved to do this more elegantly. Thus: [NEEDS REVIEW]
cd /var/log
- `ls -l auth.log` should show something like `-rw-r----- syslog adm`
- `sudo chmod o+r auth.log` will grant (r)ead access to "others"
At this time, it's not possible for the bouncer (configured into the Caddy container) to negotiate the CrowdSec Local API token automatically. Thus, we must bring everything up, register the bouncer, reconfigure the Caddyfile, and then restart.
- Confirm the status of your bouncers: `docker exec crowdsec cscli bouncers list`. This should be empty on a first install
- Per https://github.com/hslatman/caddy-crowdsec-bouncer, run `docker exec crowdsec cscli bouncers add caddy-bouncer`
- Copy the generated API Key from the terminal
- Paste this key into .env as BOUNCER_CADDY_TOKEN and save
- Restart the stack: `docker-compose down` then `docker-compose up -d`
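End to end, the registration looks roughly like this (a sketch; the key is only printed once, so copy it before closing the terminal):
```bash
docker exec crowdsec cscli bouncers add caddy-bouncer   # prints the bouncer's API key
# edit .env: BOUNCER_CADDY_TOKEN=<the key printed above>
docker-compose down && docker-compose up -d
docker exec crowdsec cscli bouncers list                # caddy-bouncer should now appear
```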
- Create an account at https://app.crowdsec.net
- Generate a unique registration key on the website
- Request the CrowdSec container to register online credentials: `docker exec crowdsec cscli capi register`
- Restart the container
- Enroll: `docker exec crowdsec cscli console enroll <unique-registration-key>`
- Return to the website and accept the registration of this instance
For some reason, the CrowdSec container's hosts file loves to get corrupted. This results in issues such as "Unable to retrieve latest crowdsec version" and "Post "https://api.crowdsec.net/v3/watchers\": dial tcp: lookup api.crowdsec.net on 127.0.0.11:53: read udp 127.0.0.1:36736->127.0.0.11:53: i/o timeout".
Start with `sudo service docker restart`
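If a Docker restart alone doesn't fix it, it can help to check what Docker actually wrote into the container's name-resolution files and to recreate the container rather than just restarting the daemon (a rough sketch; assumes the compose service is named crowdsec like the container):
```bash
# inspect the files Docker generates for the container
docker exec crowdsec cat /etc/hosts /etc/resolv.conf
# recreating the container forces those files to be regenerated
docker-compose up -d --force-recreate crowdsec
```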
Backing Up Tandoor: sudo docker exec -t tandoor_db pg_dumpall -U djangouser > tandoor_pgdump.sql
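To run that dump on a schedule, a minimal crontab sketch (the destination path and time are just examples):
```bash
# sudo crontab -e, then add a line like:
0 3 * * * docker exec -t tandoor_db pg_dumpall -U djangouser > /mnt/storage/tandoor_pgdump.sql
```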
Restoring Tandoor:
- Start a fresh Postgres container. This may mean deleting the volumes or dropping all tables, etc. It's important that the Tandoor container is NOT started yet
- Remove the `PASSWORD x` command from your exported tandoor_pgdump.sql and store the file on the local server
- Execute `cat tandoor_pgdump.sql | sudo docker exec -i tandoor_db psql postgres -U tandoor_user`
- After the containers are running, create your super user: `docker exec -it paperless bash` and then `python3 manage.py createsuperuser`
- Paperless' training models need a decent amount of RAM; I had issues at 0.5 GB and none with 2 GB or more
- The backup mechanism is pretty straightforward and is preferred over a simple copy-and-paste of the files, since Paperless keeps two versions of each file (see the exporter sketch below)
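For reference, paperless-ngx's built-in exporter is the document_exporter management command; a minimal sketch, assuming the stock paperless-ngx image with an export volume mounted at ../export as in its docs:
```bash
# export all documents plus their metadata to the mounted export volume
docker exec -it paperless document_exporter ../export
```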
- Modify the docker-compose.yaml file: provide the correct `devices:` for the Scrutiny container (see the sketch after this list). To identify drives, `df -h` or `ls -lA /dev/disk/by-id` can help
- Copy /configtemplates/scrutiny/scrutiny.yaml to your docker config directory: `cp /configtemplates/scrutiny/scrutiny.yaml /srv/docker/scrutiny`
- Customize this file per Scrutiny's instructions: https://github.com/AnalogJ/scrutiny
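A hypothetical `devices:` excerpt for docker-compose.yaml; the drive names are examples, so list whatever the commands above reveal on your host:
```yaml
# example only -- substitute your actual drives
services:
  scrutiny:
    devices:
      - /dev/sda
      - /dev/sdb
```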
see: https://github.com/jgwehr/homelab-docker/wiki/Homepage-(dashboard)
Tip: reset 2FA with `docker exec -it <container name> npm run remove-2fa`
Currently, this app requires an SMTP server for creating accounts. Tools such as MailJet provide hobbyist accounts for free. I set it up successfully using:
- SMTP_USER = API Token
- SMTP_PASSWORD = Secret Key
- SMTP_PORT = 465
- SMTP_SECURE = true
Use any password/key generator for RALLLY_SECRETKEY
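Put together, the mail-related .env entries look something like this. The host value is my assumption of Mailjet's standard SMTP relay, and the variable names follow this repo's .env; treat it as a sketch:
```bash
# example .env excerpt -- placeholders, not working credentials
SMTP_HOST=in-v3.mailjet.com        # assumption: Mailjet's SMTP relay hostname
SMTP_USER=<Mailjet API token>
SMTP_PASSWORD=<Mailjet secret key>
SMTP_PORT=465
SMTP_SECURE=true
RALLLY_SECRETKEY=<any long random string, e.g. output of `openssl rand -hex 32`>
```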
The Postgres database is tricky with MergerFS. I start the container with its volume pointing at a specific hard drive location instead of the MergerFS storage; this creates the data directory with the correct permissions. Afterwards, the volume can be changed.
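A sketch of that trick (the paths and service name are hypothetical, not this repo's actual values):
```yaml
# hypothetical docker-compose.yaml excerpt
services:
  tandoor_db:
    volumes:
      # first start: a single member disk, so Postgres can initialize its data directory
      - /mnt/disk1/db/tandoor:/var/lib/postgresql/data
      # afterwards, move the files and switch the bind mount to the MergerFS pool:
      # - /mnt/storage/db/tandoor:/var/lib/postgresql/data
```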
```bash
# Aliases in this file are available to all users
# To install for one user place in ~/.bash_aliases
# Tail last 50 lines of docker logs
alias dtail='docker logs -tf --tail=50 '
# Shorthand, customise docker-compose.yaml location as needed
alias dcp='docker-compose -f ~/docker-compose.yaml '
# Remove unused images (useful after an upgrade)
alias dprune='docker image prune'
# Remove unused images, unused networks *and data* (use with care)
alias dprunesys='docker system prune --all'
```
Stop all Containers
`docker stop $(docker ps -a -q)`
Restart the Docker service
`sudo service docker restart`
Check what is using a port
`sudo ss -tulpn | grep :80`
Stop a Container by port
`docker stop $(docker ps | grep ":PORT_NUMBER" | awk '{print $1}')`