Docker Setup Guide [for production]

Alfred Gutierrez edited this page Dec 22, 2019 · 1 revision

Welcome to the openencoder setup guide. This guide will step through the process of building and running the production build via Docker.

openencoder is made of 3 components:

  • Server: HTTP API and worker queue for submitting and managing encode jobs, and other operations.
  • Worker: A background process listening and running jobs on the worker queue.
  • Web: A web UI for monitoring and managing encode jobs. Optional, but recommended for this setup guide.

It also has 2 storage components:

  • PostgreSQL - Relational database system.
  • Redis - Key/value database used as a message broker for the worker.

Requirements

  • Docker
  • S3 API Credentials & Bucket (AWS or Digital Ocean)
  • Digital Ocean API Key (only required for Machines API)

Please note, openencoder should not be publicly accessible for security reasons. See Security for additional setup recommendations around security.
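As a minimal illustration of locking the dashboard down, assuming you keep the nginx-proxy container used later in this guide, a per-vhost include that only admits a trusted address range might look like the following. The file path follows nginx-proxy's `vhost.d` convention, and the CIDR is a placeholder for your own private or VPN range — neither is part of openencoder itself:

```nginx
# /etc/nginx/vhost.d/openencoder.yourdomain.com  (nginx-proxy per-host include)
# Allow only a trusted network; deny everyone else.
allow 10.0.0.0/8;   # placeholder: your private/VPN range
deny  all;
```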

Setup

Using docker-compose-production.yml:

  1. Configure your DNS and private networking for openencoder-web:

```yaml
openencoder-web:
      - VIRTUAL_HOST=openencoder.yourdomain.com
      - CLOUDINIT_REDIS_HOST=priv.net.ip.addr
      - CLOUDINIT_DATABASE_HOST=priv.net.ip.addr
```

This maps nginx-proxy to the running openencoder-web container from the DNS entry.

The CLOUDINIT entries are for the startup script when a Machine is created via Machines API. This configures the worker to connect to the redis and db instances.

  2. Configure the private networking addresses for redis and db:

```yaml
redis:
    ports:
      - priv.net.ip.addr:6379:6379

db:
    ports:
      - priv.net.ip.addr:5432:5432
```

This avoids exposing the redis and db ports publicly: they are bound only to your private network address, where the worker can still reach them.
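A quick way to sanity-check that redis and db are reachable from a worker host (and, ideally, *not* from the public internet) is a small TCP probe. This sketch is not part of openencoder; the host address below is a placeholder for your private network IP:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From a worker host on the private network, both should return True:
# port_open("priv.net.ip.addr", 6379)  # redis
# port_open("priv.net.ip.addr", 5432)  # db
```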

  3. Start all services in Docker:

```
docker-compose -f docker-compose-production.yml up -d
```

You should now have a Server API and Web Dashboard running. Next we need to create an admin user.

Create an admin user

  1. Load http://localhost:8081/dashboard/register in the browser and create a username and password.

  2. Grant the user admin privileges via the DB:

```sql
UPDATE "public"."users" SET "role" = 'admin' WHERE "username" LIKE '[email protected]';
```

  3. Log in with the user at http://localhost:8081/dashboard/login. You should see all tabs, including Settings.
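Before logging in, it can be worth confirming the role actually changed. A quick check against the same users table (run it the same way you ran the UPDATE):

```sql
SELECT "username", "role" FROM "public"."users" WHERE "role" = 'admin';
```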

Next we configure settings.

Configure Settings

In order to run encode jobs, you will need your S3 credentials, region and buckets. If you plan to run the Machines API, you'll also need a Digital Ocean API key.

  1. Go to http://localhost:8081/dashboard/settings in the web dashboard.
  2. Configure all necessary settings for S3.

Start a Worker

If you have configured a Digital Ocean API Key, then you can use the Machines API for scaling workers.

  1. Go to http://localhost:8081/dashboard/machines
  2. Select a region, size and count to spin up a worker instance.

The worker instance will take about 5 minutes to boot, provision, and subscribe to the job queue.

If you are NOT using the Machines API, you can still run your own worker instance via Docker:

  1. Configure environment variables:

```
cat > .env <<EOL
REDIS_HOST=priv.net.ip.addr
DATABASE_HOST=priv.net.ip.addr
EOL
```

  2. Run the worker:

```
docker run -d --env-file .env --rm alfg/cloudencoder:latest worker
```

Run an Encode Job

Once settings are configured, you can run an encode job.

  1. Go to http://localhost:8081/dashboard/encode in the web dashboard.
  2. Select a pre-configured preset, such as h264_baseline_360p_600.
  3. Select a file by clicking the input field. This should load your configured inbound S3 path.
  4. Destination should auto-populate the output folder in your outbound S3 path.
  5. Submit the job.
  6. Monitor the job status at http://localhost:8081/dashboard/jobs.
  7. Verify your encode job by checking your outbound path in S3. 🎉
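The dashboard drives the server's HTTP API, so jobs can also be submitted from a script. The endpoint path and payload field names below are assumptions based on the preset and source/destination fields shown above — verify them against the openencoder API documentation before relying on this sketch:

```python
import json
from urllib import request

API_BASE = "http://localhost:8081"  # assumption: API on the same host/port as the dashboard

def build_job_payload(profile: str, source: str, dest: str) -> dict:
    """Assemble the encode-job payload used by submit_job below (assumed field names)."""
    return {"profile": profile, "source": source, "dest": dest}

def submit_job(profile: str, source: str, dest: str):
    """POST an encode job to an assumed /api/jobs endpoint and return the JSON response."""
    payload = build_job_payload(profile, source, dest)
    req = request.Request(
        f"{API_BASE}/api/jobs",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server and configured S3 settings; paths are placeholders):
# submit_job("h264_baseline_360p_600", "s3://inbound/example.mp4", "s3://outbound/example/")
```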
