
Allow running on a different port (for a reason that hasn't been covered on a different thread) #1131

airbreather opened this issue Dec 28, 2024 · 1 comment

Feature description

I want to be able to choose which port the server runs on, using something at least as flexible as a --build-arg when building the image (though an environment variable would be even better).
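
To make that concrete, the behavior I'm after would look something like this (FOUNDRY_PORT is the hypothetical new knob; nothing here exists today):

# hypothetical: run a second instance on a non-default port
podman run -d \
  -e FOUNDRY_PORT=30001 \
  -p 30001:30001 \
  felddy/foundryvtt:12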

Motivation

I'm aware of the multiple times that this has been requested, cataloged quite nicely in #728 (comment). It does look like each of the linked cases had a better solution.

I'm setting up a server that uses rootless Podman containers (see also #621). These containers all go into a single pod, since that makes it trivial to run a reverse-proxy in one container that forwards to services running in other containers, without publishing those ports and making the services visible to anything else on the same machine (docs).

So far, I've set everything up so that a single pod publishes only port 443, while the other services run in the same pod on their own individual ports. This is a migration from a more "traditional" server setup, and I've gotten through most of the services until I got here. I happen to host two instances with my two licenses, so if both of them are going to run in the same pod, I must be able to change one of the ports.
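
For context, the reverse-proxy side of this is easy precisely because every container in the pod shares one network namespace; with nginx, for example, the two instances would just be two localhost ports (paths and ports here are purely illustrative):

# illustrative nginx fragment inside the proxy container
location /vtt1/ { proxy_pass http://127.0.0.1:30000/; }
location /vtt2/ { proxy_pass http://127.0.0.1:30001/; }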

There are many ways I could work around this, of course, but everything I can come up with is orders of magnitude more annoying than what #728 tried to do.

Example

Quadlet files could look something like the following.

web.pod
[Unit]
Description=TLS reverse-proxy and its targets

[Pod]
UserNS=auto
PublishPort=443:443

foundryvtt1.container
[Unit]
Description=Foundry VTT (first instance)

[Container]
Image=felddy/foundryvtt:12
AutoUpdate=registry
Pod=web.pod
Secret=foundryvtt_timedurl,type=env,target=FOUNDRY_RELEASE_URL
Secret=foundryvtt1_adminpassword,type=env,target=FOUNDRY_ADMIN_KEY
Environment=FOUNDRY_PROXY_SSL=true
Environment=FOUNDRY_IP_DISCOVERY=false
Environment=FOUNDRY_HOSTNAME=my.domain
Environment=FOUNDRY_ROUTE_PREFIX=myRoutePrefix
# the new thing I'd like to see:
Environment=FOUNDRY_PORT=30000
Environment=TZ=America/Detroit
Volume=foundryvtt1-data.volume:/data:Z
Volume=/nas/shared/foundry-assets:/data/Data/assets:ro,Z

[Service]
Restart=on-failure
RestartSec=5s

foundryvtt2.container
[Unit]
Description=Foundry VTT (second instance)

[Container]
Image=felddy/foundryvtt:12
AutoUpdate=registry
Pod=web.pod
Secret=foundryvtt_timedurl,type=env,target=FOUNDRY_RELEASE_URL
Secret=foundryvtt2_adminpassword,type=env,target=FOUNDRY_ADMIN_KEY
Environment=FOUNDRY_PROXY_SSL=true
Environment=FOUNDRY_IP_DISCOVERY=false
Environment=FOUNDRY_HOSTNAME=my.domain
Environment=FOUNDRY_ROUTE_PREFIX=myRoutePrefix
# the new thing I'd like to see:
Environment=FOUNDRY_PORT=30001
Environment=TZ=America/Detroit
Volume=foundryvtt2-data.volume:/data:Z
Volume=/nas/shared/foundry-assets:/data/Data/assets:ro,Z

[Service]
Restart=on-failure
RestartSec=5s
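
(If I'm reading the Quadlet naming rules correctly, the generated units would then be started with something like the following; I haven't verified the exact unit names:)

systemctl --user daemon-reload
systemctl --user start web-pod.service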

This is my first attempt at getting serious about containerizing a large number of services, so I'm sure there's something I could be doing better. I also haven't tested any of it at the time of writing: I stumbled on this gap while setting up my Quadlet files and stopped to write up an issue before I work around it and forget to circle back. I may come back and tweak the above files if I discover anything too egregious.

Pitch

This project seems to go the extra mile to streamline as much of the process as possible for running Foundry VTT in a container. It would be a bit disappointing if I had to skip it over a single configuration option for which I can find no particularly strong reason* to keep hardcoded.

  • *EXPOSE in the Dockerfile is just a suggestion, but if that's the sticking point, I'm fairly sure it could still be a --build-arg so that the EXPOSE stays consistent with whatever port actually gets used.
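
To illustrate that footnote, the build-arg version could be as small as something like this (a sketch only; I haven't looked at how the real Dockerfile is laid out):

# hypothetical Dockerfile excerpt
ARG FOUNDRY_PORT=30000
ENV FOUNDRY_PORT=${FOUNDRY_PORT}
EXPOSE ${FOUNDRY_PORT}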

Code of Conduct

  • I agree to follow this project's Code of Conduct

airbreather commented Dec 28, 2024

FWIW my workaround for now is to throw this into my CONTAINER_PATCHES:

#!/bin/sh

# Needed because of:
# https://github.com/felddy/foundryvtt-docker/issues/1131
#
# based on code from:
# https://github.com/felddy/foundryvtt-docker/blob/9d1ebf8764e6dd0ee6934da3c4a3d76c31c4cee9/src/launcher.sh

CONFIG_DIR="/data/Config"
CONFIG_FILE="${CONFIG_DIR}/options.json"

mkdir -p "${CONFIG_DIR}"

# keep the tests POSIX-sh compatible since the shebang is /bin/sh
if [ "${CONTAINER_PRESERVE_CONFIG:-}" = "true" ] && [ -f "${CONFIG_FILE}" ]; then
  : # an existing config is being preserved; leave it alone
elif [ "${FOUNDRY_PORT:-unset}" != "unset" ]; then
  # regenerate options.json, swapping the default port for the requested one
  ./set_options.js | sed "s/30000/${FOUNDRY_PORT}/g" > "${CONFIG_FILE}"
  export CONTAINER_PRESERVE_CONFIG="true"
fi
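
For completeness, the patch script gets wired into the container roughly like this (the host path and in-container path are placeholders; CONTAINER_PATCHES just needs to point at the directory the script lives in):

[Container]
# ... other stuff from above ...
Environment=CONTAINER_PATCHES=/container_patches
Volume=/path/to/my/patches:/container_patches:ro,Z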

Edit: and, of course, in the quadlets:

[Container]
# ... other stuff from above ...
Exec=resources/app/main.mjs --port=30001 --headless --noupdate --dataPath=/data
