404 Page not found :: Crunchy Data Container Suite Documentation
diff --git a/docs/categories/index.html b/docs/categories/index.html
index 4712dbefb..1281f4d83 100644
--- a/docs/categories/index.html
+++ b/docs/categories/index.html
@@ -5,7 +5,7 @@
-
+
Categories :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-backrest-restore/index.html b/docs/container-specifications/crunchy-backrest-restore/index.html
index 7c42f48d5..a40e3a469 100644
--- a/docs/container-specifications/crunchy-backrest-restore/index.html
+++ b/docs/container-specifications/crunchy-backrest-restore/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-backrest-restore :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-backup/index.html b/docs/container-specifications/crunchy-backup/index.html
index 5c79c31cc..591a1f752 100644
--- a/docs/container-specifications/crunchy-backup/index.html
+++ b/docs/container-specifications/crunchy-backup/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-backup :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-collect/index.html b/docs/container-specifications/crunchy-collect/index.html
index 9f5492346..d5f2bfd1a 100644
--- a/docs/container-specifications/crunchy-collect/index.html
+++ b/docs/container-specifications/crunchy-collect/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-collect :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-dba/index.html b/docs/container-specifications/crunchy-dba/index.html
index e64b545c8..84b5e30d7 100644
--- a/docs/container-specifications/crunchy-dba/index.html
+++ b/docs/container-specifications/crunchy-dba/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-dba :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-grafana/index.html b/docs/container-specifications/crunchy-grafana/index.html
index f6dc526c1..e436b70c9 100644
--- a/docs/container-specifications/crunchy-grafana/index.html
+++ b/docs/container-specifications/crunchy-grafana/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-grafana :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-pgadmin4/index.html b/docs/container-specifications/crunchy-pgadmin4/index.html
index 9a8325670..6b2525ae0 100644
--- a/docs/container-specifications/crunchy-pgadmin4/index.html
+++ b/docs/container-specifications/crunchy-pgadmin4/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-pgadmin4 :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-pgbadger/index.html b/docs/container-specifications/crunchy-pgbadger/index.html
index 6bddb04e1..379016fc7 100644
--- a/docs/container-specifications/crunchy-pgbadger/index.html
+++ b/docs/container-specifications/crunchy-pgbadger/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-pgbadger :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-pgbouncer/index.html b/docs/container-specifications/crunchy-pgbouncer/index.html
index 1a2a0e05b..4f451c72c 100644
--- a/docs/container-specifications/crunchy-pgbouncer/index.html
+++ b/docs/container-specifications/crunchy-pgbouncer/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-pgbouncer :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-pgdump/index.html b/docs/container-specifications/crunchy-pgdump/index.html
index c2ab18056..3e54c978b 100644
--- a/docs/container-specifications/crunchy-pgdump/index.html
+++ b/docs/container-specifications/crunchy-pgdump/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-pgdump :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-pgpool/index.html b/docs/container-specifications/crunchy-pgpool/index.html
index 5744d0dfd..371a682f4 100644
--- a/docs/container-specifications/crunchy-pgpool/index.html
+++ b/docs/container-specifications/crunchy-pgpool/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-pgpool :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-pgrestore/index.html b/docs/container-specifications/crunchy-pgrestore/index.html
index 04f5ccd1d..64024abf2 100644
--- a/docs/container-specifications/crunchy-pgrestore/index.html
+++ b/docs/container-specifications/crunchy-pgrestore/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-pgrestore :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-postgres-gis/index.html b/docs/container-specifications/crunchy-postgres-gis/index.html
index 95446448e..6ed4e170e 100644
--- a/docs/container-specifications/crunchy-postgres-gis/index.html
+++ b/docs/container-specifications/crunchy-postgres-gis/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-postgres-gis :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-postgres/index.html b/docs/container-specifications/crunchy-postgres/index.html
index 5c26b3fac..dfe41a12c 100644
--- a/docs/container-specifications/crunchy-postgres/index.html
+++ b/docs/container-specifications/crunchy-postgres/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-postgres :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-prometheus/index.html b/docs/container-specifications/crunchy-prometheus/index.html
index ecbbfe8d9..b8362d3f9 100644
--- a/docs/container-specifications/crunchy-prometheus/index.html
+++ b/docs/container-specifications/crunchy-prometheus/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-prometheus :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-sim/index.html b/docs/container-specifications/crunchy-sim/index.html
index c7242bdc2..a222fbe0a 100644
--- a/docs/container-specifications/crunchy-sim/index.html
+++ b/docs/container-specifications/crunchy-sim/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-sim :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-upgrade/index.html b/docs/container-specifications/crunchy-upgrade/index.html
index f6e182e16..9f6d1fd33 100644
--- a/docs/container-specifications/crunchy-upgrade/index.html
+++ b/docs/container-specifications/crunchy-upgrade/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-upgrade :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-vacuum/index.html b/docs/container-specifications/crunchy-vacuum/index.html
index 10c567b52..824bb7222 100644
--- a/docs/container-specifications/crunchy-vacuum/index.html
+++ b/docs/container-specifications/crunchy-vacuum/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-vacuum :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/crunchy-watch/index.html b/docs/container-specifications/crunchy-watch/index.html
index 3d3fb5ef8..c0f10590f 100644
--- a/docs/container-specifications/crunchy-watch/index.html
+++ b/docs/container-specifications/crunchy-watch/index.html
@@ -5,7 +5,7 @@
-
+
crunchy-watch :: Crunchy Data Container Suite Documentation
diff --git a/docs/container-specifications/index.html b/docs/container-specifications/index.html
index d6b29fa70..26badc8b8 100644
--- a/docs/container-specifications/index.html
+++ b/docs/container-specifications/index.html
@@ -5,7 +5,7 @@
-
+
Container Specifications :: Crunchy Data Container Suite Documentation
@@ -394,7 +394,7 @@
Container Specifications
-
Latest Release: 2.1.0 2018-08-09
+
Latest Release: 2.1.0 2018-08-13
Introduction
diff --git a/docs/contributing/index.html b/docs/contributing/index.html
index 456ef5cfa..248ba3ebf 100644
--- a/docs/contributing/index.html
+++ b/docs/contributing/index.html
@@ -5,7 +5,7 @@
-
+
Contributing :: Crunchy Data Container Suite Documentation
@@ -393,7 +393,7 @@
Contributing
-
Latest Release: 2.1.0 2018-08-09
+
Latest Release: 2.1.0 2018-08-13
Getting Started
diff --git a/docs/getting-started/index.html b/docs/getting-started/index.html
index 57d7a6787..89268d56b 100644
--- a/docs/getting-started/index.html
+++ b/docs/getting-started/index.html
@@ -5,7 +5,7 @@
-
+
Getting Started :: Crunchy Data Container Suite Documentation
@@ -394,7 +394,7 @@
Getting Started
-
Latest Release: 2.1.0 2018-08-09
+
Latest Release: 2.1.0 2018-08-13
Examples are provided for the Kubernetes, OpenShift, and Docker environments in the
diff --git a/docs/getting-started/kubernetes-and-openshift/index.html b/docs/getting-started/kubernetes-and-openshift/index.html
index 918c7b6b5..e091a2924 100644
--- a/docs/getting-started/kubernetes-and-openshift/index.html
+++ b/docs/getting-started/kubernetes-and-openshift/index.html
@@ -5,7 +5,7 @@
-
+
Kubernetes and Openshift :: Crunchy Data Container Suite Documentation
@@ -459,7 +459,7 @@
Kubernetes and Openshift
-
Latest Release: 2.1.0 2018-08-09
+
Latest Release: 2.1.0 2018-08-13
Getting Started
@@ -820,6 +820,14 @@
Upgrade
Kubernetes and OpenShift
+
+
Before running the example, ensure you edit upgrade.json and update the OLD_VERSION
+and NEW_VERSION parameters to the versions relevant to your environment.
+
+
+
+
+
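If it helps to confirm the current values before editing, the parameters can be located with a standard grep; this is a minimal sketch and assumes upgrade.json is in the current example directory:
grep -nE 'OLD_VERSION|NEW_VERSION' upgrade.json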
Start the upgrade as follows:
@@ -834,8 +842,8 @@
Kubernetes and OpenShift
-
${CCP_CLI} get pod -a -l job-name=upgrade-job
-${CCP_CLI} logs -l job-name=upgrade-job
+
${CCP_CLI} get pod -l job-name=upgrade
+${CCP_CLI} logs -l job-name=upgrade
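To check for completion directly rather than reading the pod listing, the job status can also be queried with a JSONPath expression; this is a minimal sketch and assumes the job is named upgrade, matching the commands above:
${CCP_CLI} get job upgrade -o jsonpath='{.status.succeeded}'
A value of 1 indicates that the upgrade job completed successfully.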
There are three options to choose from when performing a restore:
- * Delta - only restore missing files from PGDATA
- * Full - restore all files, pgdata must be empty
- * Point in Time Recovery (PITR) - delta restore to a certain point in time
+
There are three options to choose from when performing a restore:
+
+
+
+
+
Delta - only restore missing files from PGDATA
+
+
+
Full - restore all files, pgdata must be empty
+
+
+
Point in Time Recovery (PITR) - delta restore to a certain point in time
+
+
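For orientation, the three options roughly correspond to the following pgBackRest invocations; this sketch is illustrative only, and the stanza name db and the target timestamp are assumptions rather than values taken from the examples:
Delta: pgbackrest --stanza=db --delta restore
Full (PGDATA must be empty first): pgbackrest --stanza=db restore
PITR: pgbackrest --stanza=db --delta --type=time "--target=2018-08-13 12:00:00" restore
The containerized examples below are what actually perform the restore; these commands only illustrate what each restore type amounts to.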
PITR
-
+
This example uses the backrest/backup example. It should be left running, and a
pgBackRest backup should already have been created.
@@ -1407,7 +1425,7 @@
PITR
Full
-
+
This example uses the backrest/backup example. It does not need to be running, but a
pgBackRest backup is required.
@@ -1455,7 +1473,7 @@
Full
Delta
-
+
This example uses the backrest/backup example. It does not need to be running, but a
pgBackRest backup is required.
@@ -1610,7 +1628,7 @@
Restore
PITR
-
+
This example uses the backrest/backup example. It should be left running, and a
pgBackRest backup should already have been created.
@@ -1659,7 +1677,7 @@
PITR
Full
-
+
This example uses the backrest/backup example. It does not need to be running, but a
pgBackRest backup is required.
@@ -1704,7 +1722,7 @@
Full
Delta
-
+
This example uses the backrest/backup example. It does not need to be running, but a
pgBackRest backup is required.
@@ -2505,20 +2523,48 @@
pgBouncer
Crunchy pgBouncer is a lightweight connection pooler for PostgreSQL databases.
-
The following examples create these containers:
- * pgBouncer Primary
- * pgBouncer Replica
- * PostgreSQL Primary
- * PostgreSQL Replica
+
The following examples create these containers:
+
+
+
+
+
pgBouncer Primary
+
+
+
pgBouncer Replica
+
+
+
PostgreSQL Primary
+
+
+
PostgreSQL Replica
+
+
-
In Kubernetes and OpenShift, this example will also create:
- * pgBouncer Primary Service
- * pgBouncer Replica Service
- * Primary Service
- * Replica Service
- * PostgreSQL Secrets
- * pgBouncer Secrets
+
In Kubernetes and OpenShift, this example will also create:
+
+
+
+
+
pgBouncer Primary Service
+
+
+
pgBouncer Replica Service
+
+
+
Primary Service
+
+
+
Replica Service
+
+
+
PostgreSQL Secrets
+
+
+
pgBouncer Secrets
+
+
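Once the example is running, a quick way to confirm that these objects were created is to list them in the example namespace; this is a minimal sketch and relies on the $CCP_NAMESPACE variable described in the environment setup:
${CCP_CLI} get pods,svc,secrets -n $CCP_NAMESPACE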
To clean up the objects created by this example, run the following in the pgbouncer example directory:
@@ -3840,7 +3886,7 @@
Kubernetes and OpenShift
-
curl -L http://badger:10000/api/badgergenerate
+
curl -L http://pgbadger:10000/api/badgergenerate
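The command above assumes it is run from a host or pod that can resolve the pgbadger service name directly. From a workstation outside the cluster, one hypothetical approach is to port-forward to the pgBadger pod first; the pod name used here is illustrative:
${CCP_CLI} port-forward pgbadger 10000:10000
curl -L http://localhost:10000/api/badgergenerate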
diff --git a/docs/getting-started/openshift-templates/index.html b/docs/getting-started/openshift-templates/index.html
index 42e2a7db3..7a0926049 100644
--- a/docs/getting-started/openshift-templates/index.html
+++ b/docs/getting-started/openshift-templates/index.html
@@ -5,7 +5,7 @@
-
+
Openshift Templates :: Crunchy Data Container Suite Documentation
@@ -400,7 +400,7 @@
Openshift Templates
-
Latest Release: 2.1.0 2018-08-09
+
Latest Release: 2.1.0 2018-08-13
The OpenShift template examples referenced in this documentation are located at the following
diff --git a/docs/getting-started/openshift-templates/pgadmin4/index.html b/docs/getting-started/openshift-templates/pgadmin4/index.html
index 2c60878c3..8a55d662e 100644
--- a/docs/getting-started/openshift-templates/pgadmin4/index.html
+++ b/docs/getting-started/openshift-templates/pgadmin4/index.html
@@ -5,7 +5,7 @@
-
+
pgAdmin4 :: Crunchy Data Container Suite Documentation
diff --git a/docs/getting-started/openshift-templates/postgres-replicated/index.html b/docs/getting-started/openshift-templates/postgres-replicated/index.html
index a0de0e70a..29ae79ebf 100644
--- a/docs/getting-started/openshift-templates/postgres-replicated/index.html
+++ b/docs/getting-started/openshift-templates/postgres-replicated/index.html
@@ -5,7 +5,7 @@
-
+
Primary Replica :: Crunchy Data Container Suite Documentation
diff --git a/docs/getting-started/openshift-templates/primary-backup-secret/index.html b/docs/getting-started/openshift-templates/primary-backup-secret/index.html
index 8d8d591b8..8aa27c8b9 100644
--- a/docs/getting-started/openshift-templates/primary-backup-secret/index.html
+++ b/docs/getting-started/openshift-templates/primary-backup-secret/index.html
@@ -5,7 +5,7 @@
-
+
Primary Backup Secret :: Crunchy Data Container Suite Documentation
diff --git a/docs/getting-started/openshift-templates/primary-backup/index.html b/docs/getting-started/openshift-templates/primary-backup/index.html
index c1969cfe6..c5cf6f923 100644
--- a/docs/getting-started/openshift-templates/primary-backup/index.html
+++ b/docs/getting-started/openshift-templates/primary-backup/index.html
@@ -5,7 +5,7 @@
-
+
Primary Backup :: Crunchy Data Container Suite Documentation
diff --git a/docs/getting-started/openshift-templates/primary-restore-secret/index.html b/docs/getting-started/openshift-templates/primary-restore-secret/index.html
index 1ce29cb05..083cca898 100644
--- a/docs/getting-started/openshift-templates/primary-restore-secret/index.html
+++ b/docs/getting-started/openshift-templates/primary-restore-secret/index.html
@@ -5,7 +5,7 @@
-
+
Primary Restore Secret :: Crunchy Data Container Suite Documentation
diff --git a/docs/getting-started/openshift-templates/primary-restore/index.html b/docs/getting-started/openshift-templates/primary-restore/index.html
index 1dc953122..8e1ef15cf 100644
--- a/docs/getting-started/openshift-templates/primary-restore/index.html
+++ b/docs/getting-started/openshift-templates/primary-restore/index.html
@@ -5,7 +5,7 @@
-
+
Primary Restore :: Crunchy Data Container Suite Documentation
diff --git a/docs/getting-started/openshift-templates/single-primary/index.html b/docs/getting-started/openshift-templates/single-primary/index.html
index ff84d3d3b..3667bd78f 100644
--- a/docs/getting-started/openshift-templates/single-primary/index.html
+++ b/docs/getting-started/openshift-templates/single-primary/index.html
@@ -5,7 +5,7 @@
-
+
Single Primary :: Crunchy Data Container Suite Documentation
diff --git a/docs/index.html b/docs/index.html
index 9e0bc476f..c963aabc9 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -5,7 +5,7 @@
-
+
Crunchy Data Container Suite :: Crunchy Data Container Suite Documentation
@@ -354,7 +354,7 @@
-
Latest Release: 2.1.0 2018-08-09
+
Latest Release: 2.1.0 2018-08-13
General
diff --git a/docs/index.json b/docs/index.json
index aada890ed..5d30eb4a2 100644
--- a/docs/index.json
+++ b/docs/index.json
@@ -11,14 +11,14 @@
"title": "Installation",
"tags": [],
"description": "",
- "content": "Latest Release: 2.1.0 2018-08-09\n Installation To set up the environment, visit the Environment Setup document. From there, navigate to Build the Containers page to either build or pull the container images necessary to run examples. The final step will be to configure storage in the Storage Configuration page, with either HostPath, NFS, or dynamic storage options.\n Next Steps There are many ways to configure the examples and containers further. Descriptions of each container offered and the variables that can be used to customize them are found on the Container Specifications page.\n Information can be found on the full scope of examples that can be performed on the Getting Started page.\n "
+ "content": "Latest Release: 2.1.0 2018-08-13\n Installation To set up the environment, visit the Environment Setup document. From there, navigate to Build the Containers page to either build or pull the container images necessary to run examples. The final step will be to configure storage in the Storage Configuration page, with either HostPath, NFS, or dynamic storage options.\n Next Steps There are many ways to configure the examples and containers further. Descriptions of each container offered and the variables that can be used to customize them are found on the Container Specifications page.\n Information can be found on the full scope of examples that can be performed on the Getting Started page.\n "
},
{
"uri": "https://crunchydata.github.io/crunchy-containers/installation/environment-setup/",
"title": "Environment Setup",
"tags": [],
"description": "",
- "content": "Table of Contents Requirements Project Environment Installing Requirements CentOS 7 RHEL 7 PostgreSQL Docker OpenShift Kubernetes Helm Creating a Demo Namespace Kubernetes OpenShift Next Steps Latest Release: 2.1.0 2018-08-09\n Requirements The Crunchy Container Suite can run on different environments including:\n Docker 1.12+\n OpenShift Container Platform 3.6+\n Kubernetes 1.8+\n In this document we list the basic installation steps required for these environments.\n These installation instructions are developed and tested for the following operating systems:\n CentOS 7\n RHEL 7\n Project Environment First add the following lines to your .bashrc file to set the project paths:\n export GOPATH=$HOME/cdev export GOBIN=$GOPATH/bin export PATH=$PATH:$GOBIN export CCP_BASEOS=centos7 export CCP_PGVERSION=10 export CCP_PG_FULLVERSION=10.5 export CCP_VERSION=2.1.0 export CCP_IMAGE_PREFIX=crunchydata export CCP_IMAGE_TAG=$CCP_BASEOS-$CCP_PG_FULLVERSION-$CCP_VERSION export CCPROOT=$GOPATH/src/github.com/crunchydata/crunchy-containers export CCP_SECURITY_CONTEXT=\"\" export CCP_CLI=kubectl export CCP_NAMESPACE=demo Please see the Storage Configuration document for configuring storage using environment variables set in .bashrc.\n Additionally, the CCP_CLI setting above indicates you are running the examples on a Kubernetes platform and not an OpenShift platform. For OpenShift, use the value of oc as the CCP_CLI variable instead.\n It will be necessary to refresh your .bashrc file in order for the changes to take effect.\n . ~/.bashrc Next, set up a project directory structure and pull down the project:\n mkdir -p $HOME/cdev/src $HOME/cdev/pkg $HOME/cdev/bin Installing Requirements CentOS 7 cd $GOPATH sudo yum -y install golang git docker go get github.com/tools/godep cd src/github.com mkdir crunchydata cd crunchydata git clone https://github.com/crunchydata/crunchy-containers cd crunchy-containers git checkout 2.1.0 go get github.com/blang/expenv If you are a Crunchy enterprise customer, you will place the Crunchy repository key and yum repository file into the $CCPROOT/conf directory at this point. These files can be obtained through https://access.crunchydata.com/ on the downloads page.\n RHEL 7 When setting up the environment on RHEL 7, there are slightly different steps that need to be taken.\n cd $GOPATH sudo subscription-manager repos --enable=rhel-7-server-optional-rpms sudo yum-config-manager --enable rhel-7-server-extras-rpms sudo yum -y install git golang go get github.com/tools/godep cd src/github.com mkdir crunchydata cd crunchydata git clone https://github.com/crunchydata/crunchy-containers cd crunchy-containers git checkout 2.1.0 go get github.com/blang/expenv If you are a Crunchy enterprise customer, you will place the Crunchy repository key and yum repository file into the $CCPROOT/conf directory at this point. These files can be obtained through https://access.crunchydata.com/ on the downloads page.\n PostgreSQL These installation instructions assume the installation of PostgreSQL 10 through the official PGDG repository. 
View the documentation located here in order to view more detailed notes or install a different version of PostgreSQL.\n Locate and edit your distribution\u0026#8217;s .repo file, located:\n On CentOS: /etc/yum.repos.d/CentOS-Base.repo, [base] and [updates] sections\n On Red Hat: /etc/yum/pluginconf.d/rhnplugin.conf [main] section\n To the section(s) identified above, you need to append a line (otherwise dependencies might resolve to the PostgreSQL supplied by the base repository):\n exclude=postgresql* Next, install the RPM relating to the base operating system and PostgreSQL version you wish to install. The RPMs can be found here.\n For example, to install PostgreSQL 10 on a CentOS 7 system:\n sudo yum -y install https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm Or to install PostgreSQL 10 on a RHEL 7 system:\n sudo yum -y install https://download.postgresql.org/pub/repos/yum/testing/10/redhat/rhel-7-x86_64/pgdg-redhat10-10-2.noarch.rpm You\u0026#8217;ll need to update your system:\n sudo yum -y update Then, go ahead and install the PostgreSQL server package.\n sudo yum -y install postgresql10-server.x86_64 Docker As good practice, at this point you\u0026#8217;ll update your system.\n sudo yum -y update Now we\u0026#8217;ll install Docker.\n sudo yum -y install docker After that, it\u0026#8217;s necessary to add the docker group and give your user access to that group (here referenced as someuser):\n sudo groupadd docker sudo usermod -a -G docker someuser Remember to log out of the someuser account for the Docker group to be added to your current session. Once it\u0026#8217;s added, you\u0026#8217;ll be able to run Docker commands from your user account.\n su - someuser You can ensure your someuser account is added correctly by running the following command and ensuring docker appears as one of the results:\n groups Before you start Docker, you might consider configuring Docker storage: This is described if you run:\n man docker-storage-setup Follow the instructions available on the main OpenShift documentation page to configure Docker storage appropriately.\n These steps are illustrative of a typical process for setting up Docker storage. You will need to run these commands as root.\n First, add an extra virtual hard disk to your virtual machine (see this blog post for tips on how to do so).\n Run this command to format the drive, where /dev/sd? is the new hard drive that was added:\n fdisk /dev/sd? Next, create a volume group on the new drive partition within the fdisk utility:\n vgcreate docker-vg /dev/sd? Then, you\u0026#8217;ll need to edit the docker-storage-setup configuration file in order to override default options. Add these two lines to /etc/sysconfig/docker-storage-setup:\n DEVS=/dev/sd? VG=docker-vg Finally, run the command docker-storage-setup to use that new volume group. The results should state that the physical volume /dev/sd? and the volume group docker-vg have both been successfully created.\n Next, we enable and start up Docker:\n sudo systemctl enable docker.service sudo systemctl start docker.service Verify that Docker version 1.12.6 was installed, as per the OpenShift 3.6 requirements.\n docker version OpenShift See the OpenShift installation guide for details on how to install OpenShift Enterprise on your host. 
The main instructions are here:\n https://docs.openshift.com/container-platform/3.6/install_config/install/quick_install.html\n Note If you install OpenShift Enterprise on a server with less than 16GB memory and 40GB of disk, the following Ansible variables need to be added to ~/.config/openshift/installer.cfg.yml prior to installation: openshift_check_min_host_disk_gb: '10' # min 10gb disk openshift_check_min_host_memory_gb: '3' # min 3gb memory Kubernetes See kubeadm for installing the latest version of Kubernetes.\n Please see here to view the official documentation regarding configuring DNS for your Kubernetes cluster.\n Make sure your hostname resolves to a single IP address in your /etc/hosts file. The NFS examples will not work otherwise and other problems with installation can occur unless you have a resolving hostname.\n You should see a single IP address returned from this command:\n hostname --ip-address When running the containers in GKE Role Based Account Control will need to be set up.\n kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin --user $(gcloud config get-value account) If more than one user will be running on the same kubernetes cluster in GKE, from the above command cluster-admin-binding will need to be unique and is the name that is added to the clusterrolebidings. The example below will add another user to the clusterrolebinding with a unique value.\n $ ACCOUNT=$(gcloud info --format='value(config.account)') $ kubectl create clusterrolebinding \u0026lt;unique\u0026gt;-cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $ACCOUNT ACCOUNT is just your google gcloud acount login, ie username@google.com Helm Some Kubernetes Helm examples are provided in the following directory as one option for deploying the Container Suite.\n $CCPROOT/examples/helm/ Once you have your Kubernetes environment configured, it is simple to get Helm up and running. Please refer to this document to get Helm installed and configured properly.\n Creating a Demo Namespace In Kubernetes, a concept called a namespace provides the means to separate created resources or components into individual logically grouped partitions.\n It is considered a best practice to have dedicated namespaces for projects in both testing and production environments.\n Note All examples in the Crunchy Container Suite operate within the namespace defined by the environment variable $CCP_NAMESPACE. The instructions below illustrate how to set up and work within new namespaces or projects in both Kubernetes and OpenShift. 
Kubernetes This section will illustrate how to set up a new Kubernetes namespace called demo, and will then show how to provide permissions to that namespace to allow the Kubernetes examples to run within that namespace.\n First, view currently existing namespaces:\n $ kubectl get namespace NAME STATUS AGE default Active 21d kube-public Active 21d kube-system Active 21d Then, create a new namespace called demo:\n $ kubectl create -f $CCPROOT/conf/demo-namespace.json namespace \"demo\" created $ kubectl get namespace demo NAME STATUS AGE demo Active 7s Then set the namespace as the current location to avoid using the wrong namespace:\n $ kubectl config set-context $(kubectl config current-context) --namespace=demo We can verify that the namespace was set correctly through the following command:\n $ kubectl config view | grep namespace: namespace: demo OpenShift This section assumes you are first logging into OpenShift as a normal user such as:\n oc login -u someuser For our development purposes only, we typically specify the OCP Authorization policy of AllowAll as documented here:\n https://docs.openshift.com/container-platform/3.7/install_config/configuring_authentication.html#AllowAllPasswordIdentityProvider\n We do not recommend this authentication policy for a production deployment of OCP.\n The next step is to create a demo namespace to run the examples within. The name of this OCP project will be what you supply in the CCP_NAMESPACE environment variable:\n $ oc new-project demo --description=\"Crunchy Containers project\" --display-name=\"Crunchy-Containers\" Now using project \"demo\" on server \"https://127.0.0.1:8443\". $ export CCP_NAMESPACE=demo If we view the list of projects, we can see the new project has been added and is \"active\".\n $ oc get projects NAME DISPLAY NAME STATUS demo Crunchy-Containers Active myproject My Project Active If you were on a different project and wanted to switch to the demo project, you would do so by running the following:\n $ oc project demo Now using project \"demo\" on server \"https://127.0.0.1:8443\". Finally, you will want to ensure the proper privileges are added to the user in order to have the ability to create persistent volumes. A command similar to the following can be used to accomplish this, by adding the cluster-admin role to the demo user:\n oc adm policy add-cluster-role-to-user cluster-admin demo Next Steps Next, build or pull the container images as demonstrated in the Build the Containers document.\n "
+ "content": "Table of Contents Requirements Project Environment Installing Requirements CentOS 7 RHEL 7 PostgreSQL Docker OpenShift Kubernetes Helm Creating a Demo Namespace Kubernetes OpenShift Next Steps Latest Release: 2.1.0 2018-08-13\n Requirements The Crunchy Container Suite can run on different environments including:\n Docker 1.12+\n OpenShift Container Platform 3.6+\n Kubernetes 1.8+\n In this document we list the basic installation steps required for these environments.\n These installation instructions are developed and tested for the following operating systems:\n CentOS 7\n RHEL 7\n Project Environment First add the following lines to your .bashrc file to set the project paths:\n export GOPATH=$HOME/cdev export GOBIN=$GOPATH/bin export PATH=$PATH:$GOBIN export CCP_BASEOS=centos7 export CCP_PGVERSION=10 export CCP_PG_FULLVERSION=10.5 export CCP_VERSION=2.1.0 export CCP_IMAGE_PREFIX=crunchydata export CCP_IMAGE_TAG=$CCP_BASEOS-$CCP_PG_FULLVERSION-$CCP_VERSION export CCPROOT=$GOPATH/src/github.com/crunchydata/crunchy-containers export CCP_SECURITY_CONTEXT=\"\" export CCP_CLI=kubectl export CCP_NAMESPACE=demo Please see the Storage Configuration document for configuring storage using environment variables set in .bashrc.\n Additionally, the CCP_CLI setting above indicates you are running the examples on a Kubernetes platform and not an OpenShift platform. For OpenShift, use the value of oc as the CCP_CLI variable instead.\n It will be necessary to refresh your .bashrc file in order for the changes to take effect.\n . ~/.bashrc Next, set up a project directory structure and pull down the project:\n mkdir -p $HOME/cdev/src $HOME/cdev/pkg $HOME/cdev/bin Installing Requirements CentOS 7 cd $GOPATH sudo yum -y install golang git docker go get github.com/tools/godep cd src/github.com mkdir crunchydata cd crunchydata git clone https://github.com/crunchydata/crunchy-containers cd crunchy-containers git checkout 2.1.0 go get github.com/blang/expenv If you are a Crunchy enterprise customer, you will place the Crunchy repository key and yum repository file into the $CCPROOT/conf directory at this point. These files can be obtained through https://access.crunchydata.com/ on the downloads page.\n RHEL 7 When setting up the environment on RHEL 7, there are slightly different steps that need to be taken.\n cd $GOPATH sudo subscription-manager repos --enable=rhel-7-server-optional-rpms sudo yum-config-manager --enable rhel-7-server-extras-rpms sudo yum -y install git golang go get github.com/tools/godep cd src/github.com mkdir crunchydata cd crunchydata git clone https://github.com/crunchydata/crunchy-containers cd crunchy-containers git checkout 2.1.0 go get github.com/blang/expenv If you are a Crunchy enterprise customer, you will place the Crunchy repository key and yum repository file into the $CCPROOT/conf directory at this point. These files can be obtained through https://access.crunchydata.com/ on the downloads page.\n PostgreSQL These installation instructions assume the installation of PostgreSQL 10 through the official PGDG repository. 
View the documentation located here in order to view more detailed notes or install a different version of PostgreSQL.\n Locate and edit your distribution\u0026#8217;s .repo file, located:\n On CentOS: /etc/yum.repos.d/CentOS-Base.repo, [base] and [updates] sections\n On Red Hat: /etc/yum/pluginconf.d/rhnplugin.conf [main] section\n To the section(s) identified above, you need to append a line (otherwise dependencies might resolve to the PostgreSQL supplied by the base repository):\n exclude=postgresql* Next, install the RPM relating to the base operating system and PostgreSQL version you wish to install. The RPMs can be found here.\n For example, to install PostgreSQL 10 on a CentOS 7 system:\n sudo yum -y install https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm Or to install PostgreSQL 10 on a RHEL 7 system:\n sudo yum -y install https://download.postgresql.org/pub/repos/yum/testing/10/redhat/rhel-7-x86_64/pgdg-redhat10-10-2.noarch.rpm You\u0026#8217;ll need to update your system:\n sudo yum -y update Then, go ahead and install the PostgreSQL server package.\n sudo yum -y install postgresql10-server.x86_64 Docker As good practice, at this point you\u0026#8217;ll update your system.\n sudo yum -y update Now we\u0026#8217;ll install Docker.\n sudo yum -y install docker After that, it\u0026#8217;s necessary to add the docker group and give your user access to that group (here referenced as someuser):\n sudo groupadd docker sudo usermod -a -G docker someuser Remember to log out of the someuser account for the Docker group to be added to your current session. Once it\u0026#8217;s added, you\u0026#8217;ll be able to run Docker commands from your user account.\n su - someuser You can ensure your someuser account is added correctly by running the following command and ensuring docker appears as one of the results:\n groups Before you start Docker, you might consider configuring Docker storage: This is described if you run:\n man docker-storage-setup Follow the instructions available on the main OpenShift documentation page to configure Docker storage appropriately.\n These steps are illustrative of a typical process for setting up Docker storage. You will need to run these commands as root.\n First, add an extra virtual hard disk to your virtual machine (see this blog post for tips on how to do so).\n Run this command to format the drive, where /dev/sd? is the new hard drive that was added:\n fdisk /dev/sd? Next, create a volume group on the new drive partition within the fdisk utility:\n vgcreate docker-vg /dev/sd? Then, you\u0026#8217;ll need to edit the docker-storage-setup configuration file in order to override default options. Add these two lines to /etc/sysconfig/docker-storage-setup:\n DEVS=/dev/sd? VG=docker-vg Finally, run the command docker-storage-setup to use that new volume group. The results should state that the physical volume /dev/sd? and the volume group docker-vg have both been successfully created.\n Next, we enable and start up Docker:\n sudo systemctl enable docker.service sudo systemctl start docker.service Verify that Docker version 1.12.6 was installed, as per the OpenShift 3.6 requirements.\n docker version OpenShift See the OpenShift installation guide for details on how to install OpenShift Enterprise on your host. 
The main instructions are here:\n https://docs.openshift.com/container-platform/3.6/install_config/install/quick_install.html\n Note If you install OpenShift Enterprise on a server with less than 16GB memory and 40GB of disk, the following Ansible variables need to be added to ~/.config/openshift/installer.cfg.yml prior to installation: openshift_check_min_host_disk_gb: '10' # min 10gb disk openshift_check_min_host_memory_gb: '3' # min 3gb memory Kubernetes See kubeadm for installing the latest version of Kubernetes.\n Please see here to view the official documentation regarding configuring DNS for your Kubernetes cluster.\n Make sure your hostname resolves to a single IP address in your /etc/hosts file. The NFS examples will not work otherwise and other problems with installation can occur unless you have a resolving hostname.\n You should see a single IP address returned from this command:\n hostname --ip-address When running the containers in GKE Role Based Account Control will need to be set up.\n kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin --user $(gcloud config get-value account) If more than one user will be running on the same kubernetes cluster in GKE, from the above command cluster-admin-binding will need to be unique and is the name that is added to the clusterrolebidings. The example below will add another user to the clusterrolebinding with a unique value.\n $ ACCOUNT=$(gcloud info --format='value(config.account)') $ kubectl create clusterrolebinding \u0026lt;unique\u0026gt;-cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $ACCOUNT ACCOUNT is just your google gcloud acount login, ie username@google.com Helm Some Kubernetes Helm examples are provided in the following directory as one option for deploying the Container Suite.\n $CCPROOT/examples/helm/ Once you have your Kubernetes environment configured, it is simple to get Helm up and running. Please refer to this document to get Helm installed and configured properly.\n Creating a Demo Namespace In Kubernetes, a concept called a namespace provides the means to separate created resources or components into individual logically grouped partitions.\n It is considered a best practice to have dedicated namespaces for projects in both testing and production environments.\n Note All examples in the Crunchy Container Suite operate within the namespace defined by the environment variable $CCP_NAMESPACE. The instructions below illustrate how to set up and work within new namespaces or projects in both Kubernetes and OpenShift. 
Kubernetes This section will illustrate how to set up a new Kubernetes namespace called demo, and will then show how to provide permissions to that namespace to allow the Kubernetes examples to run within that namespace.\n First, view currently existing namespaces:\n $ kubectl get namespace NAME STATUS AGE default Active 21d kube-public Active 21d kube-system Active 21d Then, create a new namespace called demo:\n $ kubectl create -f $CCPROOT/conf/demo-namespace.json namespace \"demo\" created $ kubectl get namespace demo NAME STATUS AGE demo Active 7s Then set the namespace as the current location to avoid using the wrong namespace:\n $ kubectl config set-context $(kubectl config current-context) --namespace=demo We can verify that the namespace was set correctly through the following command:\n $ kubectl config view | grep namespace: namespace: demo OpenShift This section assumes you are first logging into OpenShift as a normal user such as:\n oc login -u someuser For our development purposes only, we typically specify the OCP Authorization policy of AllowAll as documented here:\n https://docs.openshift.com/container-platform/3.7/install_config/configuring_authentication.html#AllowAllPasswordIdentityProvider\n We do not recommend this authentication policy for a production deployment of OCP.\n The next step is to create a demo namespace to run the examples within. The name of this OCP project will be what you supply in the CCP_NAMESPACE environment variable:\n $ oc new-project demo --description=\"Crunchy Containers project\" --display-name=\"Crunchy-Containers\" Now using project \"demo\" on server \"https://127.0.0.1:8443\". $ export CCP_NAMESPACE=demo If we view the list of projects, we can see the new project has been added and is \"active\".\n $ oc get projects NAME DISPLAY NAME STATUS demo Crunchy-Containers Active myproject My Project Active If you were on a different project and wanted to switch to the demo project, you would do so by running the following:\n $ oc project demo Now using project \"demo\" on server \"https://127.0.0.1:8443\". Finally, you will want to ensure the proper privileges are added to the user in order to have the ability to create persistent volumes. A command similar to the following can be used to accomplish this, by adding the cluster-admin role to the demo user:\n oc adm policy add-cluster-role-to-user cluster-admin demo Next Steps Next, build or pull the container images as demonstrated in the Build the Containers document.\n "
},
{
"uri": "https://crunchydata.github.io/crunchy-containers/container-specifications/crunchy-postgres-gis/",
@@ -32,14 +32,14 @@
"title": "Getting Started",
"tags": [],
"description": "",
- "content": "Latest Release: 2.1.0 2018-08-09\n Examples are provided for the Kubernetes, OpenShift, and Docker environments in the Kubernetes and OpenShift document.\n For documentation relating to OpenShift Templates examples, visit the OpenShift Templates document.\n "
+ "content": "Latest Release: 2.1.0 2018-08-13\n Examples are provided for the Kubernetes, OpenShift, and Docker environments in the Kubernetes and OpenShift document.\n For documentation relating to OpenShift Templates examples, visit the OpenShift Templates document.\n "
},
{
"uri": "https://crunchydata.github.io/crunchy-containers/installation/build-the-containers/",
"title": "Build the Containers",
"tags": [],
"description": "",
- "content": "Table of Contents Build the Containers Next Steps Latest Release: 2.1.0 2018-08-09\n Build the Containers At this point, you have a decision to make - either download prebuilt containers from Dockerhub, or build the containers on your local host.\n To download the prebuilt containers, make sure you can login to Dockerhub, and then run the following:\n docker login cd $CCPROOT ./bin/pull-from-dockerhub.sh Or if you\u0026#8217;d rather build the containers from source, perform a container build as follows:\n godep restore cd $CCPROOT make setup make all After this, you will have all the Crunchy containers built and are ready for use in a standalone Docker environment.\n Next Steps Next, configure a storage type as demonstrated in the Storage Configuration document.\n "
+ "content": "Table of Contents Build the Containers Next Steps Latest Release: 2.1.0 2018-08-13\n Build the Containers At this point, you have a decision to make - either download prebuilt containers from Dockerhub, or build the containers on your local host.\n To download the prebuilt containers, make sure you can login to Dockerhub, and then run the following:\n docker login cd $CCPROOT ./bin/pull-from-dockerhub.sh Or if you\u0026#8217;d rather build the containers from source, perform a container build as follows:\n godep restore cd $CCPROOT make setup make all After this, you will have all the Crunchy containers built and are ready for use in a standalone Docker environment.\n Next Steps Next, configure a storage type as demonstrated in the Storage Configuration document.\n "
},
{
"uri": "https://crunchydata.github.io/crunchy-containers/container-specifications/crunchy-backup/",
@@ -53,14 +53,14 @@
"title": "Container Specifications",
"tags": [],
"description": "",
- "content": "Latest Release: 2.1.0 2018-08-09\n Introduction Each of the containers found within the Crunchy Container Suite for PostgreSQL are described in further detail within their respective pages.\n The containers and their relationships to the overall architecture are depicted below:\n Containers The following container images are provided with further information:\n crunchy-postgres\n crunchy-postgres-gis\n crunchy-backup\n crunchy-pgdump\n crunchy-collect\n crunchy-prometheus\n crunchy-grafana\n crunchy-pgbadger\n crunchy-pgpool\n crunchy-watch\n crunchy-vacuum\n crunchy-dba\n crunchy-pgbouncer\n crunchy-backrest-restore\n crunchy-pgadmin4\n crunchy-pgrestore\n crunchy-upgrade\n crunchy-sim\n crunchy-postgres The crunchy-postgres container executes the Postgres database.\n Packages The container image is built using either the Crunchy Postgres release or the community version based upon a flag in the Makefile.\n The crunchy-postgres RPMs are available to Crunchy customers only. The Crunchy release is meant for customers that require enterprise level support.\n The PGDG community RPMs can be used as well by simply commenting out the Crunchy yum repo within the Dockerfiles and uncommenting the PGDG yum repo.\n setup.sql The setup.sql script is used to define startup SQL commands that are executed when the database is first created.\n Environment Variables PG_MODE - either primary, replica or set, this value determines whether the database is set up as a primary or replica instance. In the case of set, it means the container is started within a StatefulSet in a Kubernetes cluster.\n PG_PRIMARY_USER - the value to use for the user ID created as primaryuser. The primaryuser has super user privileges.\n PG_PRIMARY_PASSWORD - the password for the PG_PRIMARY_USER database user\n PG_USER - the value to use for the user ID created as a normal user. This user is created as part of the setup.sql script upon database creation and allows users to predefine an application user.\n PG_PASSWORD - the password for the PG_USER database user that is created\n PG_DATABASE - a database that is created upon database initialization\n PG_ROOT_PASSWORD - the PostgreSQL user password set up upon database initialization\n PG_LOCALE - if set, the locale you want to create the database with, if not set, the default locale is used\n SYNC_REPLICA - if set, this value is used to specify the application_name of a replica that will be used for a synchronous replication\n CHECKSUMS - if set, this value is used to enable the --data-checksums option when initdb is executed at initialization, if not set, the default is to not enable data checksums\n ARCHIVE_MODE - if set to on, will enable continuous WAL archiving by setting the value within the postgresql.conf file archive_mode setting, if not set, the default is off\n ARCHIVE_TIMEOUT - if set to a number (in seconds) , will specify the postgresql.conf archive_timeout setting, if not set, the default value of 60 is used.\n PGAUDIT_ANALYZE - if set, will cause the container to also start the pgaudit_analyze program in the background\n PG_PRIMARY_HOST - for when PG_MODE is set, specifies the primary host for setting the primary label\n PG_REPLICA_HOST - for when PG_MODE is set, specifies the replica host for setting the replica label\n PGDATA_PATH_OVERRIDE - if set, will cause the container to use a /pgdata path name of your choosing rather than the hostname of the container which is the default. 
This is useful for a primary in a deployment.\n XLOGDIR - if set to true, will cause initdb to include --xlogdir=$PGWAL, this will cause a symlink to be created from /pgdata/containername/pg_wal (or pg_xlog if you\u0026#8217;re running PG 9.5 or 9.6) to /pgwal/containername-wal\n TEMP_BUFFERS - default is 8MB, set this value to override this PostgreSQL configuration setting\n MAX_CONNECTIONS - default is 100, set this value to override this PostgreSQL configuration setting\n SHARED_BUFFERS - default is 128MB, set this value to override this PostgreSQL configuration setting\n WORK_MEM - default is 4MB, set this value to override this PostgreSQL configuration setting\n MAX_WAL_SENDERS - default is 6, set this value to override this PostgreSQL configuration setting\n ENABLE_SSHD- default is false, set this value to true to enable SSHD\n CRUNCHY_DEBUG - default is false, set this value to true to debugging in logs. Note: this mode can reveal secrets in logs.\n Features The following features are supported by the crunchy-postgres container:\n use of OpenShift secrets\n ability to restore from a database backup\n use of custom pg_hba.conf and postgresql.conf files\n ability to override postgresql.conf configuration parameters\n ability to override the default setup.sql script\n ability to set the database locale\n ability to specify a synchronous replica application_name\n ability to specify a recovery using PITR and WAL files\n ability to enable SSHD\n Locale Support Adding locale support to the container is accomplished by running 'yum reinstall glibc_common' within the container, this increases the size of the container image and can be removed if you do not require specific locale support.\n You can specify the PG_LOCALE env var which is passed to the initdb command when the initial data files are created, for example:\n \"name\": \"PG_LOCALE\", \"value\": \"fr_BE.UTF-8\" By default, no locale is specified when the initdb command is executed.\n crunchy-postgres-gis This container is the same as the crunchy-postgres container except that it includes the following PostgreSQL extensions:\n postgis\n pl/r\n You can test the pl/r extension by running the following commands for example:\n create extension plr; SELECT * FROM plr_environ(); SELECT load_r_typenames(); SELECT * FROM r_typenames(); SELECT plr_array_accum('{23,35}', 42); CREATE OR REPLACE FUNCTION plr_array (text, text) RETURNS text[] AS '$libdir/plr','plr_array' LANGUAGE 'c' WITH (isstrict); select plr_array('hello','world'); crunchy-backup The crunchy-backup container executes a pg_basebackup against another database container. The backup is a full backup using the standard utility included with PostgreSQL, pg_basebackup.\n Backup Location Backups are stored in a mounted backup volume location, using the database host name plus -backups as a sub-directory, then followed by a unique backup directory based upon a date/timestamp. It is left to the user to perform database backup archives in this current version of the container. 
This backup location is referenced when performing a database restore.\n Dependencies The container is meant to be using NFS or a similar network file system to persist database backups.\n Environment Variables BACKUP_LABEL - when set, will set the label of the backup, if not set the default label used is crunchy-backup\n BACKUP_HOST - required, this is the database we will be doing the backup for\n BACKUP_USER - required, this is the database user we will be doing the backup with\n BACKUP_PASS - required, this is the database password we will be doing the backup with\n BACKUP_PORT - required, this is the database port we will be doing the backup with\n CRUNCHY_DEBUG - default is false, set this value to true to debugging in logs. Note: this mode can reveal secrets in logs.\n crunchy-pgdump The crunchy-pgdump container executes either a pg_dump or pg_dumpall against another Postgres database.\n Dump Location Dumps are stored in a mounted backup volume location, using the database host name plus -backups as a sub-directory, then followed by a unique backup directory based upon a date/timestamp. It is left to the user to perform database dump archives in this current version of the container.\n Dependencies The container is meant to be using NFS or a similar network file system to persist database dumps.\n Environment Variables REQUIRED ARGS PGDUMP_DB - Database to connect to\n PGDUMP_HOST - Hostname of the PostgreSQL database to connect to\n PGDUMP_PASS - Password of the PostgreSQL role used by the pgdump container\n PGDUMP_USER - PostgreSQL Role used by the pgdump container\n OPTIONAL/EXTENDED ARGS PGDUMP_ALL - Run pg_dump instead of pg_dumpall. Default is true, set to false to use pg_dump\n PGDUMP_CUSTOM_OPTS - Advanced options to pass into pg_dump or pg_dumpall. Default is empty\n PGDUMP_FILENAME - Name of the file created by the pgdump container. Default is dump\n PGDUMP_PORT - Port of the PostgreSQL database to connect to. Default is 5432\n CRUNCHY_DEBUG - default is false, set this value to true to debugging in logs Note: this mode can reveal secrets in logs.\n Note: For a list of advanced options for configuring the PGDUMP_CUSTOM_OPTS variable, see the official documentation:\n https://www.postgresql.org/docs/current/static/app-pgdump.html\n https://www.postgresql.org/docs/current/static/app-pg-dumpall.html\n crunchy-collect Description Crunchy Collect container provides real time metrics about the PostgreSQL database via an API. These metrics are scrapped and stored by Crunchy Prometheus time-series database and visualized by Crunchy Grafana.\n Requirements This container requires TCP access to the PostgreSQL database to run queries for collecting metrics. The PostgreSQL database to be scrapped is specified by the DATA_SOURCE_NAME environment variable.\n Additionally, custom queries to collect metrics can be specified by the user. By mounting a queries.yml file to /conf on the container, additionally metrics can be specified for the API to collect. For an example of a queries.yml file, see here.\n Environment Variables Required:\n DATA_SOURCE_NAME - The URL for the PostgreSQL server\u0026#8217;s data source name. This is required to be in the form of postgresql://.\n Optional: * CRUNCHY_DEBUG - default is false, set this value to true to debugging in logs. Note: this mode can reveal secrets in logs.\n crunchy-prometheus Description Prometheus is a multi-dimensional time series data model with an elastic query language. It is used in collaboration with Grafana in this metrics suite. 
+ "content": "Latest Release: 2.1.0 2018-08-13\n Introduction Each of the containers found within the Crunchy Container Suite for PostgreSQL are described in further detail within their respective pages.\n The containers and their relationships to the overall architecture are depicted below:\n Containers The following container images are provided with further information:\n crunchy-postgres\n crunchy-postgres-gis\n crunchy-backup\n crunchy-pgdump\n crunchy-collect\n crunchy-prometheus\n crunchy-grafana\n crunchy-pgbadger\n crunchy-pgpool\n crunchy-watch\n crunchy-vacuum\n crunchy-dba\n crunchy-pgbouncer\n crunchy-backrest-restore\n crunchy-pgadmin4\n crunchy-pgrestore\n crunchy-upgrade\n crunchy-sim\n crunchy-postgres The crunchy-postgres container executes the Postgres database.\n Packages The container image is built using either the Crunchy Postgres release or the community version based upon a flag in the Makefile.\n The crunchy-postgres RPMs are available to Crunchy customers only. The Crunchy release is meant for customers that require enterprise level support.\n The PGDG community RPMs can be used as well by simply commenting out the Crunchy yum repo within the Dockerfiles and uncommenting the PGDG yum repo.\n setup.sql The setup.sql script is used to define startup SQL commands that are executed when the database is first created.\n Environment Variables PG_MODE - either primary, replica or set, this value determines whether the database is set up as a primary or replica instance. In the case of set, it means the container is started within a StatefulSet in a Kubernetes cluster.\n PG_PRIMARY_USER - the value to use for the user ID created as primaryuser. The primaryuser has super user privileges.\n PG_PRIMARY_PASSWORD - the password for the PG_PRIMARY_USER database user\n PG_USER - the value to use for the user ID created as a normal user. This user is created as part of the setup.sql script upon database creation and allows users to predefine an application user.\n PG_PASSWORD - the password for the PG_USER database user that is created\n PG_DATABASE - a database that is created upon database initialization\n PG_ROOT_PASSWORD - the PostgreSQL user password set up upon database initialization\n PG_LOCALE - if set, the locale you want to create the database with, if not set, the default locale is used\n SYNC_REPLICA - if set, this value is used to specify the application_name of a replica that will be used for a synchronous replication\n CHECKSUMS - if set, this value is used to enable the --data-checksums option when initdb is executed at initialization, if not set, the default is to not enable data checksums\n ARCHIVE_MODE - if set to on, will enable continuous WAL archiving by setting the value within the postgresql.conf file archive_mode setting, if not set, the default is off\n ARCHIVE_TIMEOUT - if set to a number (in seconds) , will specify the postgresql.conf archive_timeout setting, if not set, the default value of 60 is used.\n PGAUDIT_ANALYZE - if set, will cause the container to also start the pgaudit_analyze program in the background\n PG_PRIMARY_HOST - for when PG_MODE is set, specifies the primary host for setting the primary label\n PG_REPLICA_HOST - for when PG_MODE is set, specifies the replica host for setting the replica label\n PGDATA_PATH_OVERRIDE - if set, will cause the container to use a /pgdata path name of your choosing rather than the hostname of the container which is the default. 
This is useful for a primary in a deployment.\n XLOGDIR - if set to true, will cause initdb to include --xlogdir=$PGWAL; this will cause a symlink to be created from /pgdata/containername/pg_wal (or pg_xlog if you\u0026#8217;re running PG 9.5 or 9.6) to /pgwal/containername-wal\n TEMP_BUFFERS - default is 8MB, set this value to override this PostgreSQL configuration setting\n MAX_CONNECTIONS - default is 100, set this value to override this PostgreSQL configuration setting\n SHARED_BUFFERS - default is 128MB, set this value to override this PostgreSQL configuration setting\n WORK_MEM - default is 4MB, set this value to override this PostgreSQL configuration setting\n MAX_WAL_SENDERS - default is 6, set this value to override this PostgreSQL configuration setting\n ENABLE_SSHD - default is false, set this value to true to enable SSHD\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Features The following features are supported by the crunchy-postgres container:\n use of OpenShift secrets\n ability to restore from a database backup\n use of custom pg_hba.conf and postgresql.conf files\n ability to override postgresql.conf configuration parameters\n ability to override the default setup.sql script\n ability to set the database locale\n ability to specify a synchronous replica application_name\n ability to specify a recovery using PITR and WAL files\n ability to enable SSHD\n Locale Support Adding locale support to the container is accomplished by running 'yum reinstall glibc-common' within the container; this increases the size of the container image and can be removed if you do not require specific locale support.\n You can specify the PG_LOCALE env var, which is passed to the initdb command when the initial data files are created, for example:\n \"name\": \"PG_LOCALE\", \"value\": \"fr_BE.UTF-8\" By default, no locale is specified when the initdb command is executed.\n crunchy-postgres-gis This container is the same as the crunchy-postgres container except that it includes the following PostgreSQL extensions:\n postgis\n pl/r\n You can test the pl/r extension by running the following commands, for example:\n create extension plr; SELECT * FROM plr_environ(); SELECT load_r_typenames(); SELECT * FROM r_typenames(); SELECT plr_array_accum('{23,35}', 42); CREATE OR REPLACE FUNCTION plr_array (text, text) RETURNS text[] AS '$libdir/plr','plr_array' LANGUAGE 'c' WITH (isstrict); select plr_array('hello','world'); crunchy-backup The crunchy-backup container executes a pg_basebackup against another database container. The backup is a full backup using the standard utility included with PostgreSQL, pg_basebackup.\n Backup Location Backups are stored in a mounted backup volume location, using the database host name plus -backups as a sub-directory, followed by a unique backup directory based upon a date/timestamp. In this current version of the container, it is left to the user to archive the database backups. This backup location is referenced when performing a database restore.
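As a purely illustrative sketch (the mount point, host name, and timestamp format shown are assumptions, not values defined by this guide), a backup volume mounted at /backup for a database host named primary would accumulate directories similar to:\n ls /backup/primary-backups/ 2018-08-13-01-01-01 2018-08-14-01-01-01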
Dependencies The container is meant to use NFS or a similar network file system to persist database backups.\n Environment Variables BACKUP_LABEL - when set, will set the label of the backup; if not set, the default label used is crunchy-backup\n BACKUP_HOST - required, this is the database we will be doing the backup for\n BACKUP_USER - required, this is the database user we will be doing the backup with\n BACKUP_PASS - required, this is the database password we will be doing the backup with\n BACKUP_PORT - required, this is the database port we will be doing the backup with\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n crunchy-pgdump The crunchy-pgdump container executes either a pg_dump or pg_dumpall against another Postgres database.\n Dump Location Dumps are stored in a mounted backup volume location, using the database host name plus -backups as a sub-directory, followed by a unique backup directory based upon a date/timestamp. In this current version of the container, it is left to the user to archive the database dumps.\n Dependencies The container is meant to use NFS or a similar network file system to persist database dumps.\n Environment Variables REQUIRED ARGS PGDUMP_DB - Database to connect to\n PGDUMP_HOST - Hostname of the PostgreSQL database to connect to\n PGDUMP_PASS - Password of the PostgreSQL role used by the pgdump container\n PGDUMP_USER - PostgreSQL Role used by the pgdump container\n OPTIONAL/EXTENDED ARGS PGDUMP_ALL - Run pg_dumpall instead of pg_dump. Default is true; set to false to use pg_dump\n PGDUMP_CUSTOM_OPTS - Advanced options to pass into pg_dump or pg_dumpall. Default is empty\n PGDUMP_FILENAME - Name of the file created by the pgdump container. Default is dump\n PGDUMP_PORT - Port of the PostgreSQL database to connect to. Default is 5432\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Note: For a list of advanced options for configuring the PGDUMP_CUSTOM_OPTS variable, see the official documentation:\n https://www.postgresql.org/docs/current/static/app-pgdump.html\n https://www.postgresql.org/docs/current/static/app-pg-dumpall.html\n crunchy-collect Description The Crunchy Collect container provides real-time metrics about the PostgreSQL database via an API. These metrics are scraped and stored by the Crunchy Prometheus time-series database and visualized by Crunchy Grafana.\n Requirements This container requires TCP access to the PostgreSQL database to run queries for collecting metrics. The PostgreSQL database to be scraped is specified by the DATA_SOURCE_NAME environment variable.\n Additionally, custom queries to collect metrics can be specified by the user. By mounting a queries.yml file to /conf on the container, additional metrics can be specified for the API to collect. For an example of a queries.yml file, see here.\n Environment Variables Required:\n DATA_SOURCE_NAME - The URL for the PostgreSQL server\u0026#8217;s data source name. This is required to be in the form of postgresql://.\n Optional: * CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.
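As a minimal sketch of supplying the data source in a Docker environment (the image tag, host name, and credentials shown here are assumptions for illustration only), crunchy-collect could be started like so:\n docker run -d --name=collect -e DATA_SOURCE_NAME=\"postgresql://primaryuser:password@primary:5432/postgres\" crunchydata/crunchy-collect:centos7-10.4-2.1.0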
crunchy-prometheus Description Prometheus provides a multi-dimensional time series data model with an elastic query language. It is used in collaboration with Grafana in this metrics suite. Overall, it is reliable, manageable, and operationally simple for efficiently storing and analyzing data in large-scale environments. It scrapes metrics from exporters such as Crunchy Collect.\n The following port is exposed by the crunchy-prometheus container:\n crunchy-prometheus:9090 - the Prometheus web user interface\n Requirements The Crunchy Prometheus container must be able to reach the Crunchy Collect container to scrape metrics.\n By default, Crunchy Prometheus detects which environment it is running on (Docker, Kube or OCP) and applies a default configuration. If this container is running on Kube or OCP, it will use the Kubernetes API to discover pods with the label \"crunchy-collect\": \"true\". The Crunchy Collect container must have this label to be discovered.\n For Docker environments, the Crunchy Collect hostname must be specified as an environment variable.\n A user may define a custom prometheus.yml file and mount it to /conf for custom configuration. For configuration examples, see here.\n Environment Variables Required:\n COLLECT_HOST - Hostname of the Crunchy Collect container. Only required in Docker environments.\n Optional:\n SCRAPE_INTERVAL - default is \"5s\", set this value to the number of seconds to scrape metrics from exporters.\n SCRAPE_TIMEOUT - default is \"5s\", set this value to the number of seconds to wait before timing out when scraping metrics from exporters.\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n crunchy-grafana Description The crunchy-grafana container hosts a web-based graphing dashboard called Grafana, which creates visual dashboards from the data collected and stored by crunchy-collect and crunchy-prometheus.\n Grafana is an open-source platform which can then apply the defined metrics and visualize information through various tools. It is extremely flexible, with a powerful query and transformation language, producing beautiful and easily understandable graphics to analyze and monitor your data.\n By default, Crunchy Grafana will register the Crunchy Prometheus datasource within Grafana and import a premade dashboard for PostgreSQL monitoring.\n The following port is exposed by the crunchy-grafana container:\n crunchy-grafana:3000 - the Grafana web user interface\n Requirements The Crunchy Grafana container must be able to reach the Crunchy Prometheus container.\n Users must specify an administrator user and password to provide basic authentication for the web frontend.\n Additionally, the Prometheus host and port are required. If Prometheus uses basic authentication, users must specify the username and password to access Prometheus via environment variables.\n Users may define a custom defaults.ini file and mount it to /conf for custom configuration. For configuration examples, see here.\n Environment Variables Required:\n ADMIN_USER - specifies the administrator user to be used when logging into the web frontend.\n ADMIN_PASS - specifies the administrator password to be used when logging into the web frontend.\n PROM_HOST - specifies the Prometheus container hostname for auto-registering the Prometheus datasource.\n PROM_PORT - specifies the Prometheus container port for auto-registering the Prometheus datasource.\n Optional:\n PROM_USER - specifies the Prometheus username, if one is required.\n PROM_PASS - specifies the Prometheus password, if one is required.\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.
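As a hedged example of wiring these pieces together under Docker (the image tag, credentials, and host names are assumptions, not values defined by this guide), Grafana could be pointed at Prometheus like so:\n docker run -d --name=grafana -p 3000:3000 -e ADMIN_USER=admin -e ADMIN_PASS=password -e PROM_HOST=crunchy-prometheus -e PROM_PORT=9090 crunchydata/crunchy-grafana:centos7-10.4-2.1.0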
crunchy-pgbadger The crunchy-pgbadger container executes the pgbadger utility. A small HTTP server runs on the container; a report is generated when a request is made to:\n http://\u0026lt;\u0026lt;ip address\u0026gt;\u0026gt;:10000/api/badgergenerate Environment Variables Optional:\n BADGER_TARGET - only used in standalone mode to specify the name of the container; also used to find the location of the database log files in /pgdata/$BADGER_TARGET/pg_log/*.log\n BADGER_CUSTOM_OPTS - no default, set this value to provide custom flags to pgbadger. For a list of optional flags, see the official pgBadger documentation.\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Features The following features are supported by the crunchy-pgbadger container:\n Generate a full report by default\n Optional custom options for more advanced use cases (such as incremental reports)\n Report persistence on a volume\n crunchy-pgpool The crunchy-pgpool container executes the pgpool utility. Pgpool can be used to provide a smart PostgreSQL-aware proxy to a PostgreSQL cluster, both primary and replica, so that applications only have to work with a single database connection.\n Postgres replicas are read-only, whereas a primary is both read and write capable.\n The default pgpool examples use a Secret to hold the set of pgpool configuration files used by the examples. The Secret is mounted into the pgconf volume mount, where the container will look to find configuration files. If you do not specify your own configuration files via a Secret, then you can specify environment variables that the container will attempt to use to configure pgpool; however, this is not recommended for a real pgpool deployment. An example invocation is sketched below.\n Environment Variables PG_USERNAME - user to connect to PostgreSQL\n PG_PASSWORD - user password to connect to PostgreSQL\n PG_PRIMARY_SERVICE_NAME - database host to connect to for the primary node\n PG_REPLICA_SERVICE_NAME - database host to connect to for the replica node\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Features The following features are supported by the crunchy-pgpool container:\n basic invocation of pgpool
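As a minimal, environment-variable-only sketch (the image tag, service names, and credentials are assumptions for illustration; a real deployment should mount its own configuration via a Secret as described above), pgpool could be started like so:\n docker run -d --name=pgpool -e PG_USERNAME=testuser -e PG_PASSWORD=password -e PG_PRIMARY_SERVICE_NAME=primary -e PG_REPLICA_SERVICE_NAME=replica crunchydata/crunchy-pgpool:centos7-10.4-2.1.0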
crunchy-watch crunchy-watch typically runs as a pod unto itself. The watch container essentially does a health check on a primary database container and performs a failover sequence if the primary cannot be reached.\n The watch container has access to a service account that is used inside the container to issue commands to OpenShift.\n In Kube 1.5, if a policy file is being used to secure the Kube cluster, you may need to add a policy to allow the pg-watcher service account access to the Kube API, as mentioned here: https://kubernetes.io/docs/admin/authorization/abac/#a-quick-note-on-service-accounts\n In Kube 1.6, an equivalent RBAC policy may also be required, depending on your authorization/authentication configuration. See this link for details on the new RBAC policy mechanism: https://kubernetes.io/docs/admin/authorization/rbac/\n For example, you can grant cluster-admin permissions on the pg-watcher service account in the my-namespace namespace as follows:\n kubectl create clusterrolebinding pgwatcher-view-binding --clusterrole=cluster-admin --serviceaccount=my-namespace:pg-watcher A less wide-open policy can be applied like this with Kube 1.6 RBAC:\n kubectl create rolebinding my-sa-binding --clusterrole=admin --serviceaccount=demo:pg-watcher --namespace=demo Note that this kubectl command is only available in Kube 1.6; for prior Kube releases such as 1.5 and the alpha RBAC, you will need to specify the role binding in a JSON/YAML file instead of using the command syntax above. You then reference the SA within the pod spec.\n The oc/docker/kubectl commands are included in the container from the host when the container image is built. These commands are used by the watch logic to interact with the replica containers.\n Starting with release 1.7.1, the crunchy-watch source code is located at https://github.com/crunchydata/crunchy-watch\n Environment Variables CRUNCHY_WATCH_HEALTHCHECK_INTERVAL - the time to sleep in seconds between checking on the primary\n CRUNCHY_WATCH_FAILOVER_WAIT - the time to sleep in seconds between triggering the failover and updating its label (default is 40 secs)\n PG_CONTAINER_NAME - if set, the name of the container to refer to when doing an exec; this is required if you have more than one container in your database pod\n CRUNCHY_WATCH_PRIMARY - the primary service name\n CRUNCHY_WATCH_REPLICA - the replica service name\n PG_PRIMARY_PORT - database port to use when checking the database\n CRUNCHY_WATCH_USERNAME - database user account to use when checking the database using the pg_isready utility\n CRUNCHY_WATCH_DATABASE - database to use when checking the database using pg_isready\n REPLICA_TO_TRIGGER_LABEL - the pod name of a replica that you want to choose as the new primary in a failover; this will override the normal replica selection\n CRUNCHY_WATCH_PRE_HOOK - path to an executable file to run before the failover is processed.\n CRUNCHY_WATCH_POST_HOOK - path to an executable file to run after the failover is processed.\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Logic The watch container watches the primary; if the primary dies, the watcher will:\n create the trigger file on the replica that will become the new primary\n change the labels on the replica to be those of the primary\n start watching the new primary in case that falls over next\n look for replicas that have the metadata label value of replicatype=trigger and prefer them for the failover. If found, it will use the first replica with that label; if not found, it will use the first replica it finds.\n Example of looking for the failover replica:\n oc get pod -l name=pg-replica-rc-dc NAME READY STATUS RESTARTS AGE pg-replica-rc-dc 1/1 Running 2 16m pg-replica-rc-dc-1-96qs8 1/1 Running 1 16m oc get pod -l replicatype=trigger NAME READY STATUS RESTARTS AGE pg-replica-rc-dc 1/1 Running 2 16m
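To mark a particular replica as the preferred failover target, apply the replicatype=trigger label to its pod; for example, using the replica pod name from the sample output above:\n oc label pod pg-replica-rc-dc-1-96qs8 replicatype=trigger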
crunchy-vacuum Description The crunchy-vacuum container allows you to perform a SQL VACUUM job against a PostgreSQL database container. You specify a database to vacuum using the various environment variables listed below. It is possible to run different vacuum operations either manually or automatically through scheduling.\n The crunchy-vacuum image is executed and passed the Postgres connection parameters of the single-primary PostgreSQL container. The type of vacuum performed is dictated by the environment variables passed into the job.\n Environment Variables The complete set of environment variables read by the crunchy-vacuum job includes:\n VAC_FULL - when set to true, adds the FULL parameter to the VACUUM command\n VAC_TABLE - when set, allows you to specify a single table to vacuum; when not specified, all tables in the database are vacuumed\n JOB_HOST - required, the postgres host we connect to\n PG_USER - required, the postgres user we connect with\n PG_DATABASE - required, the postgres database we connect to\n PG_PASSWORD - required, the postgres user password we connect with\n PG_PORT - allows you to override the default value of 5432\n VAC_ANALYZE - when set to true, adds the ANALYZE parameter to the VACUUM command\n VAC_VERBOSE - when set to true, adds the VERBOSE parameter to the VACUUM command\n VAC_FREEZE - when set to true, adds the FREEZE parameter to the VACUUM command\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n crunchy-dba The crunchy-dba container implements a cron scheduler. The purpose of the crunchy-dba container is to offer a way to perform simple DBA tasks that occur on some form of schedule, such as backup jobs or running a vacuum on a single Postgres database container.\n You can either run the crunchy-dba container as a single pod or include the container within a database pod.\n The crunchy-dba container makes use of a Service Account to perform the startup of scheduled jobs. The Kube Job type is used to execute the scheduled jobs with a Restart policy of Never.\n Environment Variables The following environment variables control the actions of crunchy-dba:\n OSE_PROJECT - required, the OSE project name to log into\n JOB_HOST - required, the PostgreSQL container name the action will be taken against\n VAC_SCHEDULE - if set, this will start a vacuum job container. The setting value must be a valid cron expression, as described below.\n BACKUP_SCHEDULE - if set, this will start a backup job container. The setting value must be a valid cron expression, as described below.\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.
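As a rough sketch of scheduling a nightly vacuum (the image tag, host, and credential values here are placeholders, not values defined by this guide), crunchy-dba could be started with a VAC_SCHEDULE in the 6-field cron format described below:\n docker run -d --name=dba -e JOB_HOST=primary -e PG_USER=postgres -e PG_PASSWORD=password -e VAC_SCHEDULE=\"0 0 3 * * *\" crunchydata/crunchy-dba:centos7-10.4-2.1.0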
For a vacuum job, you are required to supply the following environment variables:\n JOB_HOST\n PG_USER\n PG_PASSWORD\n PG_DATABASE - defaults to postgres when not specified\n PG_PORT - defaults to 5432 when not specified\n VAC_ANALYZE (optional) - defaults to true when not specified\n VAC_FULL (optional) - defaults to true when not specified\n VAC_VERBOSE (optional) - defaults to true when not specified\n VAC_FREEZE (optional) - defaults to false when not specified\n VAC_TABLE (optional) - defaults to all tables when not specified, or you can set this value to indicate a single table to vacuum\n For a backup job, you are required to supply the following environment variables:\n JOB_HOST\n PG_USER - database user used to perform the backup\n PG_PASSWORD - database user password used to perform the backup\n PG_PORT - port value used when connecting to the database for a backup\n BACKUP_PV_CAPACITY - a value like 1Gi is used to define the PV storage capacity\n BACKUP_PV_PATH - the storage path used to build the PV\n BACKUP_PV_HOST - the storage host used to build the PV\n BACKUP_PVC_STORAGE - a value like 75M means to allow 75 megabytes for the PVC used in performing the backup\n CRON Expression Format A cron expression represents a set of times, using 6 space-separated fields.\n Table 1. Fields Seconds - mandatory, allowed values 0-59, allowed special characters * / , -\n Minutes - mandatory, allowed values 0-59, allowed special characters * / , -\n Hours - mandatory, allowed values 0-23, allowed special characters * / , -\n Day of month - mandatory, allowed values 1-31, allowed special characters * / , - ?\n Month - mandatory, allowed values 1-12 or JAN-DEC, allowed special characters * / , -\n Day of week - mandatory, allowed values 0-6 or SUN-SAT, allowed special characters * / , - ?\n Note Month and Day-of-week field values are case insensitive. \"SUN\", \"Sun\", and \"sun\" are equally accepted. Special Characters Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month.\n Slash ( / ) Slashes are used to describe increments of ranges. For example, 3-59/15 in the 2nd field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form \"*/...\" is equivalent to the form \"first-last/...\", that is, an increment over the largest possible range of the field. The form \"N/...\" is accepted as meaning \"N-MAX/...\", that is, starting at N, use the increment until the end of that specific range. It does not wrap around.\n Comma ( , ) Commas are used to separate items of a list. For example, using \"MON,WED,FRI\" in the 6th field (day of week) would mean Mondays, Wednesdays and Fridays.\n Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 would indicate every hour between 9am and 5pm inclusive.\n Question mark ( ? ) A question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank.\n Predefined schedules You may use one of several pre-defined schedules in place of a cron expression.\n Table 2. Predefined Schedules @yearly (or @annually) - run once a year, at midnight on Jan. 1st, equivalent to 0 0 0 1 1 *\n @monthly - run once a month, at midnight on the first of the month, equivalent to 0 0 0 1 * *\n @weekly - run once a week, at midnight on Sunday, equivalent to 0 0 0 * * 0\n @daily (or @midnight) - run once a day, at midnight, equivalent to 0 0 0 * * *\n @hourly - run once an hour, at the beginning of the hour, equivalent to 0 0 * * * *
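A few illustrative expressions in this 6-field, seconds-first format:\n 0 0 3 * * * - every day at 03:00:00\n 0 30 2 * * SAT - every Saturday at 02:30:00\n 0 0/15 * * * * - every 15 minutes, starting at the top of each hour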
Intervals You may also schedule a job to execute at fixed intervals. This is supported by formatting the cron spec like this:\n @every \u0026lt;duration\u0026gt; where \"duration\" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration).\n For example, \"@every 1h30m10s\" would indicate a schedule that activates every 1 hour, 30 minutes, and 10 seconds.\n Note The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. Time zones All interpretation and scheduling is done in the machine's local time zone (as provided by the Go time package: http://www.golang.org/pkg/time). Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run!\n crunchy-pgbouncer Crunchy pgBouncer is a lightweight connection pooler for PostgreSQL databases.\n Environment Variables REQUIRED ARGS PGBOUNCER_PASSWORD - the password of the pgbouncer role in PostgreSQL. Must also be set on the primary database.\n PG_SERVICE - the hostname of the database service\n OPTIONAL/EXTENDED ARGS DEFAULT_POOL_SIZE - default 20, how many server connections to allow per user/database pair.\n MAX_CLIENT_CONN - default 100, maximum number of client connections allowed.\n MAX_DB_CONNECTIONS - default unlimited, do not allow more than this many connections per database.\n MIN_POOL_SIZE - default 0 (disabled), adds more server connections to the pool if below this number.\n POOL_MODE - default session, specifies when a server connection can be reused by other clients. Possible values: session, transaction and statement.\n RESERVE_POOL_SIZE - default 0 (disabled), how many additional connections to allow to a pool.\n RESERVE_POOL_TIMEOUT - default 5, if a client has not been serviced in this many seconds, pgbouncer enables use of additional connections from the reserve pool. 0 disables.\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Features The following features are supported by the crunchy-pgbouncer container:\n Crunchy pgBouncer uses auth_query to authenticate users. This requires only the pgbouncer username and password in users.txt, which is automatically generated from environment variables.\n Mount a custom users.txt and pgbouncer.ini configurations for advanced usage.\n Tune pooling parameters via environment variables.\n Connect to the administration database in pgBouncer to view statistics of the target databases.\n Restrictions OpenShift: If custom configurations aren\u0026#8217;t being mounted, an emptyDir volume is required to be mounted at /pgconf.\n Superusers cannot connect through the connection pooler.\n crunchy-backrest-restore The crunchy-backrest-restore container executes the pgbackrest utility, allowing FULL and DELTA restore capability. See the pgBackRest guide for more details: https://github.com/pgbackrest/pgbackrest\n Environment Variables Required: * STANZA - desired stanza to restore from. For most cases this should be set to db.\n Optional: * DELTA - when set to true, this will configure pgBackRest to do a delta restore. Delta restores do not require the pgdata directory to be empty; only the differences between pgdata and the backup are pulled in. * BACKREST_CUSTOM_OPTS - pass in custom parameters to pgBackRest for advanced use cases (such as point-in-time recovery). * CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.
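A minimal sketch of a delta restore under Docker follows; the image tag and host volume paths are assumptions, and the container expects pgbackrest.conf in /pgconf and the pgBackRest repository in /backrestrepo as noted below:\n docker run --rm --name=backrest-restore -e STANZA=db -e DELTA=true -v /host/pgconf:/pgconf -v /host/backrestrepo:/backrestrepo -v /host/pgdata:/pgdata crunchydata/crunchy-backrest-restore:centos7-10.4-2.1.0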
Features The following features are supported by the crunchy-backrest-restore container:\n mount the pgbackrest.conf config file via the /pgconf volume\n mount /backrestrepo for access to pgbackrest archives\n Restrictions For configuration, the container must have pgbackrest.conf files mounted in /pgconf\n must have a valid pgbackrest archive directory mounted in /backrestrepo\n crunchy-pgadmin4 The crunchy-pgadmin4 container executes the pgadmin4 web application.\n The pgadmin4 project is found at the following location: https://www.pgadmin.org/\n pgadmin4 provides a web user interface to PostgreSQL databases. A sample screenshot is below:\n Environment Variables PGADMIN_SETUP_EMAIL - required, set this value to the email address used for pgAdmin4 login.\n PGADMIN_SETUP_PASSWORD - required, set this value to a password used for pgAdmin4 login. This should be a strong password.\n SERVER_PORT - default is 5050, set this value to change the port pgAdmin4 listens on.\n ENABLE_TLS - default is false, set this value to true to enable HTTPS on the pgAdmin4 container. This requires a server.key and server.crt to be mounted in the /certs directory.\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Features The following features are supported by the crunchy-pgadmin4 container:\n expose the web server port (5050 by default)\n mount a certificate and key to the /certs directory and set ENABLE_TLS to true to activate HTTPS mode.\n set the username and password for login via environment variables.\n Restrictions An emptyDir, with write access, must be mounted to the /run/httpd directory in OpenShift.\n crunchy-pgrestore The restore image provides a means of performing a restore of a dump from pg_dump or pg_dumpall via psql or pg_restore to a Postgres container database.\n Dump-file Input Location The input files for crunchy-pgrestore are the files generated by crunchy-pgdump; they are retrieved from a mounted backup volume location, using the database host name plus -backups as a sub-directory, followed by a unique backup directory based upon a date/timestamp. In this current version of the container, it is left to the user to restore database dump archives.\n Dependencies The container is meant to use NFS or a similar network file system to retrieve the database dumps to be restored via psql or pg_restore.\n Environment Variables REQUIRED ARGS PGRESTORE_DB - Database to connect to\n PGRESTORE_HOST - Hostname of the PostgreSQL database to connect to\n PGRESTORE_PASS - Password of the PostgreSQL role used by the pgdump container\n PGRESTORE_USER - PostgreSQL Role used by the pgdump container\n OPTIONAL/EXTENDED ARGS PGDUMP_BACKUP_HOST - Hostname of the PostgreSQL server that was backed up by the pgdump container. Used when restoring a backup to a new host.\n PGRESTORE_BACKUP_TIMESTAMP - Timestamp of the backup to restore from. Default is empty (restores from the latest backup)\n PGRESTORE_CUSTOM_OPTS - Advanced options to pass into pg_restore. Default is empty\n PGRESTORE_PORT - Port of the PostgreSQL database to connect to. Default is 5432\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Note: For a list of advanced options for configuring the PGRESTORE_CUSTOM_OPTS variable, see the official documentation:\n https://www.postgresql.org/docs/current/static/app-pgrestore.html
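As a hedged sketch of restoring the most recent dump to a target database (the image tag, host names, credentials, and the backup volume path are assumptions for illustration), crunchy-pgrestore could be invoked like so:\n docker run --rm --name=pgrestore -e PGRESTORE_DB=userdb -e PGRESTORE_HOST=primary -e PGRESTORE_USER=postgres -e PGRESTORE_PASS=password -v /host/backups:/backup crunchydata/crunchy-pgrestore:centos7-10.4-2.1.0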
crunchy-upgrade The crunchy-upgrade container contains both the 9.5/9.6 and 9.6/10 Postgres packages in order to perform a pg_upgrade from version 9.5 to 9.6 or from 9.6 to 10.\n Environment Variables OLD_DATABASE_NAME - required, refers to the database (pod) name that we want to convert\n NEW_DATABASE_NAME - required, refers to the database (pod) name that we give to the upgraded database\n OLD_VERSION - required, the Postgres version of the old database\n NEW_VERSION - required, the Postgres version of the new database\n PG_LOCALE - if set, the locale you want to create the database with; if not set, the default locale is used\n CHECKSUMS - if set, this value is used to enable the --data-checksums option when initdb is executed at initialization; if not set, the default is to not enable data checksums\n XLOGDIR - if set, initdb will use the specified directory for WAL\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Features The following features are supported by the crunchy-upgrade container:\n supports only a pg_upgrade of the Postgres database\n doesn\u0026#8217;t alter the old database files\n creates the new database directory\n Restrictions does NOT currently support a postgis upgrade\n all environment variables are required\n supports upgrades from 9.5/9.6 to 10\n crunchy-sim The crunchy-sim container is a simple traffic simulator for PostgreSQL.\n Environment Variables PGSIM_HOST - required, the PostgreSQL host address\n PGSIM_PORT - required, the PostgreSQL host port\n PGSIM_USERNAME - required, the PostgreSQL username\n PGSIM_PASSWORD - required, the PostgreSQL password\n PGSIM_DATABASE - required, the database to connect to\n PGSIM_INTERVAL - required, the units of the simulation interval\n PGSIM_MININTERVAL - required, the minimum interval value\n PGSIM_MAXINTERVAL - required, the maximum interval value\n CRUNCHY_DEBUG - default is false, set this value to true to enable debugging in logs. Note: this mode can reveal secrets in logs.\n Valid values for PGSIM_INTERVAL are as follows:\n millisecond\n second\n minute\n Features Creates a single connection to PostgreSQL and executes queries over a specified interval range.\n Queries are specified through a simple YAML file. Each query is a name-value pair and can span multiple lines by utilizing scalar notation (\"|\" or \"\u0026gt;\") as defined by the YAML spec.\n Queries are randomly chosen for execution.\n Restrictions Only one connection is created for all queries.\n "
},
{
"uri": "https://crunchydata.github.io/crunchy-containers/installation/storage-configuration/",
"title": "Storage Configuration",
"tags": [],
"description": "",
- "content": "Table of Contents Available Storage Types HostPath NFS Dynamic Storage Next Steps Latest Release: 2.1.0 2018-08-09\n Available Storage Types The Container Suite is tested on 3 different storage backends:\n hostPath (single node testing)\n NFS (single and multi-node testing)\n Gluster (dynamic storage on separate Gluster cluster)\n Other storage backends work as well including GCE, EBS, ScaleIO, and others, but may require you to modify various examples or configuration.\n Environment variables are set to determine how and what storage is to be used.\n HostPath HostPath is the simplest storage backend to setup. It is only feasible on a single node but is sufficient for testing the examples. In your .bashrc file, add the following variables to specify unique settings to the HostPath directory:\n export CCP_SECURITY_CONTEXT=\"\" export CCP_STORAGE_PATH=/data export CCP_STORAGE_MODE=ReadWriteMany export CCP_STORAGE_CAPACITY=400M Note It may be necessary to grant your user in OpenShift or Kubernetes the rights to modify the hostaccess SCC. This can be done with the command: oadm policy add-scc-to-user hostaccess $(oc whoami) NFS NFS can also be used as a storage mechanism. Instructions on setting up NFS are found at the bottom of this guide.\n For testing with NFS, in your .bashrc file, include the following variables that are unique to your NFS environment:\n export CCP_SECURITY_CONTEXT='\"supplementalGroups\": [65534]' export CCP_STORAGE_PATH=/mnt/nfsfileshare export CCP_NFS_IP=192.168.0.118 export CCP_STORAGE_MODE=ReadWriteMany export CCP_STORAGE_CAPACITY=400M In this example above the group ownership of the NFS mount is assumed to be nfsnobody or 65534.\n Configuration Notes for NFS Note: Most of the Crunchy containers run as the postgres UID (26), but you will notice that when supplementalGroups is specified, the pod will include the nfsnobody group in the list of groups for the pod user.\n First, if you are running your NFS system with SELinux in enforcing mode, you will need to run the following command to allow NFS write permissions:\n sudo setsebool -P virt_use_nfs 1 Detailed instructions that you can use for setting up a NFS server on Centos 7 are provided in the following link.\n http://www.itzgeek.com/how-tos/linux/centos-how-tos/how-to-setup-nfs-server-on-centos-7-rhel-7-fedora-22.html\n if you are running your client on a VM, you will need to add 'insecure' to the exportfs file on the NFS server due to the way port translation is done between the VM host and the VM instance.\n For more details on this bug, please see the following link.\n http://serverfault.com/questions/107546/mount-nfs-access-denied-by-server-while-mounting\n A suggested best practice for tuning NFS for PostgreSQL is to configure the PostgreSQL fstab mount options like so:\n proto=tcp,suid,rw,vers=3,proto=tcp,timeo=600,retrans=2,hard,fg,rsize=8192,wsize=8192 Network options:\n MTU=9000 If interested in mounting the same NFS share multiple times on the same mount point, look into the noac mount option.\n Dynamic Storage Dynamic storage classes can be used for the examples. Gluster is only one example of a storage type that supports dynamic storage classes. The environment variable CCP_STORAGE_CLASS is used in the examples to determine whether or not to create a PersistentVolume manually or that it will be created dynamically using a StorageClass. In the case of GKE, the default StorageClass is named default. 
Storage class names are determined by the Kubernetes administrator and can vary.\n Setting up a Gluster cluster will offer you the ability to use dynamic storage provisioning in the examples. A set of example Gluster configuration files is found at $CCPROOT/examples/gluster. This configuration is for a 3 node Gluster cluster which runs on a Centos7 Minimal VM deployment.\n Using block storage requires a security context to be set as follows:\n export CCP_SECURITY_CONTEXT='\"fsGroup\":26' export CCP_STORAGE_CLASS=standard export CCP_STORAGE_MODE=ReadWriteOnce export CCP_STORAGE_CAPACITY=400M Next Steps There are many ways to configure the examples and containers further. Descriptions of each container offered and the variables that can be used to customize them are found on the Container Specifications page.\n Information can be found on the full scope of examples that can be performed on the Getting Started page.\n "
+ "content": "Table of Contents Available Storage Types HostPath NFS Dynamic Storage Next Steps Latest Release: 2.1.0 2018-08-13\n Available Storage Types The Container Suite is tested on 3 different storage backends:\n hostPath (single node testing)\n NFS (single and multi-node testing)\n Gluster (dynamic storage on separate Gluster cluster)\n Other storage backends work as well including GCE, EBS, ScaleIO, and others, but may require you to modify various examples or configuration.\n Environment variables are set to determine how and what storage is to be used.\n HostPath HostPath is the simplest storage backend to setup. It is only feasible on a single node but is sufficient for testing the examples. In your .bashrc file, add the following variables to specify unique settings to the HostPath directory:\n export CCP_SECURITY_CONTEXT=\"\" export CCP_STORAGE_PATH=/data export CCP_STORAGE_MODE=ReadWriteMany export CCP_STORAGE_CAPACITY=400M Note It may be necessary to grant your user in OpenShift or Kubernetes the rights to modify the hostaccess SCC. This can be done with the command: oadm policy add-scc-to-user hostaccess $(oc whoami) NFS NFS can also be used as a storage mechanism. Instructions on setting up NFS are found at the bottom of this guide.\n For testing with NFS, in your .bashrc file, include the following variables that are unique to your NFS environment:\n export CCP_SECURITY_CONTEXT='\"supplementalGroups\": [65534]' export CCP_STORAGE_PATH=/mnt/nfsfileshare export CCP_NFS_IP=192.168.0.118 export CCP_STORAGE_MODE=ReadWriteMany export CCP_STORAGE_CAPACITY=400M In this example above the group ownership of the NFS mount is assumed to be nfsnobody or 65534.\n Configuration Notes for NFS Note: Most of the Crunchy containers run as the postgres UID (26), but you will notice that when supplementalGroups is specified, the pod will include the nfsnobody group in the list of groups for the pod user.\n First, if you are running your NFS system with SELinux in enforcing mode, you will need to run the following command to allow NFS write permissions:\n sudo setsebool -P virt_use_nfs 1 Detailed instructions that you can use for setting up a NFS server on Centos 7 are provided in the following link.\n http://www.itzgeek.com/how-tos/linux/centos-how-tos/how-to-setup-nfs-server-on-centos-7-rhel-7-fedora-22.html\n if you are running your client on a VM, you will need to add 'insecure' to the exportfs file on the NFS server due to the way port translation is done between the VM host and the VM instance.\n For more details on this bug, please see the following link.\n http://serverfault.com/questions/107546/mount-nfs-access-denied-by-server-while-mounting\n A suggested best practice for tuning NFS for PostgreSQL is to configure the PostgreSQL fstab mount options like so:\n proto=tcp,suid,rw,vers=3,proto=tcp,timeo=600,retrans=2,hard,fg,rsize=8192,wsize=8192 Network options:\n MTU=9000 If interested in mounting the same NFS share multiple times on the same mount point, look into the noac mount option.\n Dynamic Storage Dynamic storage classes can be used for the examples. Gluster is only one example of a storage type that supports dynamic storage classes. The environment variable CCP_STORAGE_CLASS is used in the examples to determine whether or not to create a PersistentVolume manually or that it will be created dynamically using a StorageClass. In the case of GKE, the default StorageClass is named default. 
Storage class names are determined by the Kubernetes administrator and can vary.\n Setting up a Gluster cluster will offer you the ability to use dynamic storage provisioning in the examples. A set of example Gluster configuration files is found at $CCPROOT/examples/gluster. This configuration is for a 3-node Gluster cluster which runs on a CentOS 7 Minimal VM deployment.\n Using block storage requires a security context to be set as follows:\n export CCP_SECURITY_CONTEXT='\"fsGroup\":26' export CCP_STORAGE_CLASS=standard export CCP_STORAGE_MODE=ReadWriteOnce export CCP_STORAGE_CAPACITY=400M Next Steps There are many ways to configure the examples and containers further. Descriptions of each container offered and the variables that can be used to customize them are found on the Container Specifications page.\n Information on the full scope of examples that can be performed can be found on the Getting Started page.\n "
},
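As a minimal sketch of how the NFS tuning options listed in the storage entry above might be applied on a client (assuming the example server IP 192.168.0.118 and export path /mnt/nfsfileshare taken from the variables shown there; adjust both for your environment), an /etc/fstab entry could look like:

# hypothetical /etc/fstab entry applying the suggested PostgreSQL NFS mount options
192.168.0.118:/mnt/nfsfileshare  /mnt/nfsfileshare  nfs  proto=tcp,suid,rw,vers=3,timeo=600,retrans=2,hard,fg,rsize=8192,wsize=8192  0 0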
{
"uri": "https://crunchydata.github.io/crunchy-containers/container-specifications/crunchy-backrest-restore/",
@@ -74,7 +74,7 @@
"title": "Contributing",
"tags": [],
"description": "",
- "content": "Latest Release: 2.1.0 2018-08-09\n Getting Started Welcome! Thank you for your interest in contributing. Before submitting a new issue or pull request to the Crunchy Container Suite project on GitHub, please review any open or closed issues here in addition to any existing open pull requests.\n Documentation The documentation website (located at https://crunchydata.github.io/crunchy-containers/) is generated using Hugo and GitHub Pages.\n Hosting Hugo Locally (Optional) If you would like to build the documentation locally, view the official Installing Hugo guide to set up Hugo locally. You can then start the server by running the following commands -\n cd $CCPROOT/hugo/ vi config.toml hugo server When you edit config.toml, you\u0026#8217;ll set baseURL = \"/\". This will make the local version of the Hugo server accessible by default from localhost:1313. Once you\u0026#8217;ve run hugo server, that will let you interactively make changes to the documentation as desired and view the updates in real-time.\n Note Please make sure to revert baseURL back to its original value of https://crunchydata.github.io/crunchy-containers if you choose to run the server locally. Contributing to the Documentation When you\u0026#8217;re ready to commit a change, please view and run the script located at $CCPROOT/generate-docs.sh which will automatically generate a new set of webpages using Hugo that will update the live website after the change has been committed to the repository.\n "
+ "content": "Latest Release: 2.1.0 2018-08-13\n Getting Started Welcome! Thank you for your interest in contributing. Before submitting a new issue or pull request to the Crunchy Container Suite project on GitHub, please review any open or closed issues here in addition to any existing open pull requests.\n Documentation The documentation website (located at https://crunchydata.github.io/crunchy-containers/) is generated using Hugo and GitHub Pages.\n Hosting Hugo Locally (Optional) If you would like to build the documentation locally, view the official Installing Hugo guide to set up Hugo locally. You can then start the server by running the following commands -\n cd $CCPROOT/hugo/ vi config.toml hugo server When you edit config.toml, you\u0026#8217;ll set baseURL = \"/\". This will make the local version of the Hugo server accessible by default from localhost:1313. Once you\u0026#8217;ve run hugo server, that will let you interactively make changes to the documentation as desired and view the updates in real-time.\n Note Please make sure to revert baseURL back to its original value of https://crunchydata.github.io/crunchy-containers if you choose to run the server locally. Contributing to the Documentation When you\u0026#8217;re ready to commit a change, please view and run the script located at $CCPROOT/generate-docs.sh which will automatically generate a new set of webpages using Hugo that will update the live website after the change has been committed to the repository.\n "
},
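As a minimal, non-interactive sketch of the local Hugo workflow described in the contributing entry above (the sed one-liners are an assumption of this sketch; the documented approach edits config.toml with vi):

cd $CCPROOT/hugo/
sed -i 's|^baseURL.*|baseURL = "/"|' config.toml    # serve the docs from localhost:1313
hugo server
# revert baseURL before committing, as the note above requires
sed -i 's|^baseURL.*|baseURL = "https://crunchydata.github.io/crunchy-containers"|' config.toml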
{
"uri": "https://crunchydata.github.io/crunchy-containers/container-specifications/crunchy-pgdump/",
@@ -179,14 +179,14 @@
"title": "Openshift Templates",
"tags": [],
"description": "",
- "content": "Latest Release: 2.1.0 2018-08-09\n The OpenShift template examples referenced in this documentation are located at the following directory:\n $CCPROOT/examples/ocp-templates/ The following templates for PostgreSQL are provided:\n pgAdmin4\n Primary/Replica\n Primary Backup with Secrets\n Primary Restore with Secrets\n Primary Backup\n Primary Restore\n Single Primary\n "
+ "content": "Latest Release: 2.1.0 2018-08-13\n The OpenShift template examples referenced in this documentation are located at the following directory:\n $CCPROOT/examples/ocp-templates/ The following templates for PostgreSQL are provided:\n pgAdmin4\n Primary/Replica\n Primary Backup with Secrets\n Primary Restore with Secrets\n Primary Backup\n Primary Restore\n Single Primary\n "
},
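As a hedged sketch of how one of the OpenShift templates listed above could be instantiated (the template file name below is a placeholder, not a documented name; list the directory first to find the actual files):

ls $CCPROOT/examples/ocp-templates/
oc process -f $CCPROOT/examples/ocp-templates/<template-file>.json | oc create -f -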
{
"uri": "https://crunchydata.github.io/crunchy-containers/getting-started/kubernetes-and-openshift/",
"title": "Kubernetes and Openshift",
"tags": [],
"description": "",
- "content": "Table of Contents Getting Started Example Conventions Administration Password Management Kubernetes Secrets pgAdmin4 pgAdmin4 with TLS Upgrade Cron Scheduler Vacuum systemd Backup and Restoration pg_dump pg_restore pgBackRest pgBackRest with SSHD pg_basebackup Point in Time Recovery (PITR) Connection Pooling pgBouncer pgPool II Database Single Primary PostgreSQL Deployment Replication Synchronous Replication Statefulsets Geospatial (PostGIS) Custom Configuration SSL Authentication Docker Swarm Failover Watch Metrics and Performance pgBadger Metrics Collection pg_audit Latest Release: 2.1.0 2018-08-09\n Getting Started The examples located in the kube directory work on both Kubernetes and OpenShift. Ensure the CCP_CLI environment variable is set to the correct binary for your environment.\n Set the environment variable in .bashrc to ensure the examples will work in your environment.\n # Kubernetes export CCP_CLI=kubectl # OpenShift export CCP_CLI=oc Here are some useful resources for finding the right commands to troubleshoot \u0026amp; modify containers in the various environments shown in this guide:\n Docker Cheat Sheet\n Kubectl Cheat Sheet\n OpenShift Cheat Sheet\n Helm Cheat Sheet\n Example Conventions The examples provided in Container Suite are simple examples that are meant to demonstrate key Container Suite features. These examples can be used to build more production level deployments as dictated by user requirements specific to their operating environments.\n The examples generally follow these conventions:\n There is a run.sh script that you will execute to start the example.\n There is a cleanup.sh script that you will execute to shutdown and cleanup the example.\n Each example will create resources such as Secrets, ConfigMaps, Services, and PersistentVolumeClaims, all which follow a naming convention of \u0026lt;example name\u0026gt;-\u0026lt;optional description suffix\u0026gt;. For example an example called primary might have a PersistentVolumeClaim called primary-pgconf to describe the purpose of that particular PVC.\n The folder names for each example give a clue as to which Container Suite feature it demonstrates. For instance, the examples/kube/pgaudit example demonstrates how to enable the pg_audit capability of the crunchy-postgres container.\n Administration Password Management The passwords used for the PostgreSQL user accounts are generated by the OpenShift process command. To inspect what value is supplied, you can inspect the primary pod as follows:\n ${CCP_CLI} get pod pr-primary -o json | grep -C 1 'PG_USER\\|PG_PASSWORD\\|PG_DATABASE' This will give you the environment variable values for the database created by default in addition to the username and password of the standard user.\n PG_USER\n PG_PASSWORD\n PG_DATABASE\n Kubernetes Secrets You can use Kubernetes Secrets to set and maintain your database credentials. Secrets requires you base64 encode your user and password values as follows:\n echo -n 'myuserid' | base64 You will paste these values into your JSON secrets files for values.\n This example allows you to set the PostgreSQL passwords using Kubernetes Secrets.\n The secret uses a base64 encoded string to represent the values to be read by the container during initialization. The encoded password value is password. Run the example as follows:\n cd $CCPROOT/examples/kube/secret ./run.sh The secrets are mounted in the /pguser, /pgprimary, and /pgroot volumes within the container and read during initialization. 
The container scripts create a PostgreSQL user with those values, and sets the passwords for the primary user and PostgreSQL superuser using the mounted secret volumes.\n When using secrets, you do NOT have to specify the following environment variables if you specify all three secrets volumes:\n PG_USER\n PG_PASSWORD\n PG_ROOT_PASSWORD\n PG_PRIMARY_USER\n PG_PRIMARY_PASSWORD\n You can test the container as follows. In all cases, the password is password:\n psql -h secret -U pguser1 postgres psql -h secret -U postgres postgres psql -h secret -U primaryuser postgres pgAdmin4 This example deploys the pgadmin4 v2 web user interface for PostgreSQL without TLS.\n After running the example, you should be able to browse to http://127.0.0.1:5050 and log into the web application using a user ID of admin@admin.com and password of password.\n If you are running this example using Kubernetes or OpenShift, replace 127.0.0.1:5050 with the \u0026lt;NODE_IP\u0026gt;:30000.\n To get the node IP, run the following:\n ${CCP_CLI} describe pod pgadmin4-http | grep Node: See the pgAdmin4 documentation for more details.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To run this example, run the following:\n cd $CCPROOT/examples/docker/pgadmin4-http ./run.sh Kubernetes and OpenShift Start the container as follows:\n cd $CCPROOT/examples/kube/pgadmin4-http ./run.sh An emptyDir with write access must be mounted to the /run/httpd directory in OpenShift.\n pgAdmin4 with TLS This example deploys the pgadmin4 v2 web user interface for PostgreSQL with TLS.\n After running the example, you should be able to browse to https://127.0.0.1:5050 and log into the web application using a user ID of admin@admin.com and password of password.\n If you are running this example using Kubernetes or OpenShift, replace 127.0.0.1:5050 with the \u0026lt;NODE_IP\u0026gt;:30000.\n To get the node IP, run the following:\n ${CCP_CLI} describe pod pgadmin4-https | grep Node: See the pgadmin4 documentation for more details.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To run this example, run the following:\n cd $CCPROOT/examples/docker/pgadmin4-https ./run.sh Kubernetes and OpenShift Start the container as follows:\n cd $CCPROOT/examples/kube/pgadmin4-https ./run.sh An emptyDir with write access must be mounted to the /run/httpd directory in OpenShift.\n Upgrade This example assumes you have run primary using a PG 9.5 or 9.6 image such as centos7-9.5.14-2.1.0 prior to running this upgrade.\n Starting in release 1.3.1, the upgrade container will let you perform a pg_upgrade either from a PostgreSQL version 9.5 database to 9.6 or from 9.6 to 10.\n Prior to running this example, make sure your CCP_IMAGE_TAG environment variable is using the next major version of PostgreSQL that you want to upgrade to. For example, if you\u0026#8217;re upgrading from 9.5 to 9.6, make sure the variable references a PG 9.6 image such as centos7-9.6.10-2.1.0.\n This will create the following in your Kubernetes environment:\n a Kubernetes Job running the crunchy-upgrade container\n a new data directory name upgrade found in the pgnewdata PVC\n Data checksums on the Crunchy PostgreSQL container were enabled by default in version 2.1.0. When trying to upgrade, it\u0026#8217;s required that both the old database and the new database have the same data checksums setting. 
Prior to upgrade, check if data_checksums were enabled on the database by running the following SQL: SHOW data_checksums\n Kubernetes and OpenShift Start the upgrade as follows:\n cd $CCPROOT/examples/kube/upgrade ./run.sh If successful, the Job will end with a successful status. Verify the results of the Job by examining the Job\u0026#8217;s pod log:\n ${CCP_CLI} get pod -a -l job-name=upgrade-job ${CCP_CLI} logs -l job-name=upgrade-job You can verify the upgraded database by running the post-upgrade.sh script in the examples/kube/upgrade directory. This will create a PostgreSQL pod that mounts the upgraded volume.\n Cron Scheduler The crunchy-dba container implements a cron scheduler. The purpose of the crunchy-dba container is to offer a way to perform simple DBA tasks that occur on some form of schedule such as backup jobs or running a vacuum on a single PostgreSQL database container (such as the primary example).\n You can either run the crunchy-dba container as a single pod or include the container within a database pod.\n The crunchy-dba container makes use of a Service Account to perform the startup of scheduled jobs. The Kubernetes Job type is used to execute the scheduled jobs with a Restart policy of Never.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Kubernetes and OpenShift The script to schedule vacuum on a regular schedule is executed through the following commands:\n # Kubernetes cd $CCPROOT/examples/kube/dba ./run-kube-vac.sh # OpenShift cd $CCPROOT/examples/kube/dba ./run-ocp-vac.sh To run the script for scheduled backups, run the following in the same directory:\n # Kubernetes cd $CCPROOT/examples/kube/dba ./run-kube-backup.sh # OpenShift cd $CCPROOT/examples/kube/dba ./run-ocp-backup.sh Individual parameters for both can be modified within their respective JSON files; please see the Container Specifications document for a full list of what can be modified.\n Vacuum You can perform a PostgreSQL vacuum command by running the crunchy-vacuum container. You specify a database to vacuum using environment variables. By default, vacuum is executed against the primary example container.\n The crunchy-vacuum container image exists to allow a DBA a way to run a job either individually or scheduled to perform a variety of vacuum operations.\n This example performs a vacuum on a single table in the primary PostgreSQL database. The crunchy-vacuum image is executed with the PostgreSQL connection parameters to the single-primary PostgreSQL container. The type of vacuum performed is dictated by the environment variables passed into the job; these are defined with further detail here.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the example as follows:\n cd $CCPROOT/examples/docker/vacuum ./run.sh Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/vacuum/ ./run.sh Verify the job is completed:\n ${CCP_CLI} get job systemd The crunchy-pg.service is an example of a systemd unit file that starts and stops a container named crunchy-pg that has already been created.\n The example scripts are located in the following directory:\n $CCPROOT/examples/systemd/ There are two scripts within the directory.\n test-start.sh This script is called by the systemd start execution. The trick with this script is that it blocks forever after starting the docker crunchy-pg container. 
The blocking in the script keeps systemd happy and thinking that this is a normal daemon.\n test-stop.sh This script stops the test-start.sh script and also stops the crunchy-pg Docker container.\n Backup and Restoration pg_dump The script assumes you are going to backup the primary example and that container is running.\n This example assumes you have configured a storage filesystem as described in the Storage Configuration document.\n A successful backup will perform pg_dump/pg_dumpall on the primary and store the resulting files in the mounted volume under a directory named \u0026lt;HOSTNAME\u0026gt;-backups as a sub-directory, then followed by a unique backup directory based upon a date and timestamp - allowing any number of backups to be kept.\n For more information on how to configure this container, please see the Container Specifications document.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the backup with this command:\n cd $CCPROOT/examples/docker/pgdump ./run.sh Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/pgdump ./run.sh The Kubernetes Job type executes a pod and then the pod exits. You can view the Job status using this command:\n ${CCP_CLI} get job The pgdump.json file within that directory specifies options that control the behavior of the pgdump job. Examples of this include whether to run pg_dump vs pg_dumpall and advanced options for specific backup use cases.\n pg_restore The script assumes you are going to restore to the primary example and that container is running and a backup has been created using the pgdump example..\n This example assumes you have configured a storage filesystem as described in the Storage Configuration document.\n Successful use of the crunchy-pgrestore container will run a job to restore files generated by pg_dump/pg_dumpall to a container via psql/pg_restore; then container will terminate successfully and signal job completion.\n For more information on how to configure this container, please see the Container Specifications document.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the restore with this command:\n cd $CCPROOT/examples/docker/pgrestore ./run.sh Kubernetes and OpenShift By default, pgrestore container will automatically restore from the most recent backup. If you want to restore to a specific backup, edit the pgrestore.json file and update the PGRESTORE_BACKUP_TIMESTAMP setting to specify the backup path you want to restore with. For example:\n \"name\":\"PGRESTORE_BACKUP_TIMESTAMP\", \"value\":\"2018-03-27-14-35-33\" Running the example:\n cd $CCPROOT/examples/kube/pgrestore ./run.sh The Kubernetes Job type executes a pod and then the pod exits. You can view the Job status using this command:\n ${CCP_CLI} get job The pgrestore.json file within that directory specifies options that control the behavior of the pgrestore job.\n pgBackRest pgbackrest is a utility that performs a backup, restore, and archive function for a PostgreSQL database. pgbackrest is written and maintained by David Steele, and more information can be found on the official website.\n Backups are currently performed by manually executing pgbackrest commands against the desired pod. 
Restores can now be performed via the crunchy-backrest-restore container, which offers FULL or DELTA restore capability.\n pgbackrest is configured using a pgbackrest.conf file that is mounted into the crunchy-postgres container at /pgconf.\n If you place a pgbackrest.conf file within this mounted directory, it will trigger the use of pgbackrest within the PostgreSQL container as the archive_command and will turn on the archive_mode to begin archival. It is still required to define the ARCHIVE_TIMEOUT environment variable within your container configuration as it is set to a disabled value of 0 by default.\n The following changes will be made to the container\u0026#8217;s postgresql.conf file:\n ARCHIVE_MODE=on ARCHIVE_TIMEOUT=60 ARCHIVE_COMMAND='pgbackrest --stanza=db archive-push %p' If you are using a crunchy-postgres image older than 1.7.1, archive_command must specify where the pgbackrest.conf file is located:\n ARCHIVE_COMMAND='pgbackrest --config=/pgconf/pgbackrest.conf --stanza=db archive-push %p' This requires you to use a pgbackrest stanza name of db within the pgbackrest.conf file you mount.\n When set, WAL files generated by the database will be written out to the /backrestrepo/HOSTNAME-backups mount point.\n Additionally, the Crunchy Postgres container can templatize pgbackrest.conf files by searching for the HOSTNAME values in a mounted pgbackrest.conf file.\n For example, db-path=/pgdata/HOSTNAME will render to db-path=/pgdata/primary if the container\u0026#8217;s hostname is primary. HOSTNAME will be replaced with the value of PGDATA_PATH_OVERRIDE when working with deployments/replicasets.\n The templating example above works for db-path settings; however, repo-path should follow the convention repo-path=/backrestrepo/HOSTNAME-backups in cases where volumes are being mounted to a single mount point (such as hostPath or NFS). Without the additional -backups, the backups will populate in the pgdata directory.\n Kubernetes and OpenShift Backup Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/backup ./run.sh This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n PostgreSQL pod with pgBackRest configured\n PostgreSQL service\n PVC for the PGDATA directory\n PVC for the BackRest Backups and Archives directory\n Examine the /backrestrepo location to view the archive directory and ensure WAL archiving is working.\n You can create a backup using backrest using this command within the container:\n ${CCP_CLI} exec -it backrest /bin/bash pgbackrest --stanza=db backup --type=full Async Archiving pgBackRest supports asynchronous archiving to pull and push Write Ahead Logs. Asynchronous operation is more efficient because it can reuse connections and take advantage of parallelism. 
For more information on async archiving, see the pgBackRest official documentation.\n This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n PostgreSQL pod with pgBackRest configured and archiving asynchronously.\n PostgreSQL service\n PVC for the PGDATA directory\n PVC for the BackRest Backups and Archives directory\n Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/async-archiving ./run.sh Examine the /backrestrepo/HOSTNAME-backups location to view the archive directory and ensure WAL archiving is working.\n Examine the /pgdata/HOSTNAME-spool location to view the transient directory used for async archiving.\n You can create a backup using backrest using this command within the container:\n ${CCP_CLI} exec -it backrest /bin/bash pgbackrest --stanza=db backup A spooling directory is automatically created in both /pgdata and /pgwal. It is advised to configure pgBackRest to use the spooling location closest to the Write Ahead Log.\n If the PostgreSQL container was created using the XLOGDIR variable, the /pgwal/HOSTNAME-spool directory should be configured in pgbackrest.conf as such: spool-path=/pgwal/HOSTNAME-spool. If WAL resides on PGDATA, use: spool-path=/pgdata/HOSTNAME-spool\n Restore There are three options to choose from when performing a restore: * Delta - only restore missing files from PGDATA * Full - restore all files, pgdata must be empty * Point in Time Recovery (PITR) - delta restore to a certain point in time\n PITR This example uses the backrest/backup example. It should be left running and a pgBackRest backup has been created.\n Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/pitr ./run.sh This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n Backrest-Restore pod with pgBackRest configured for PITR restore\n pgBackRest will restore the pgdata volume mounted to the restore container to the point in time specified by the PITR_TARGET environment variable. To get a compliant timestamp, PostgreSQL can be queried using the following SQL:\n psql -U postgres -Atc 'select current_timestamp' After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/kube/backrest/pitr ./post-restore.sh Full This example uses the backrest/backup example. It does not need to be running but a pgBackRest backup is required.\n Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/full ./run.sh This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n Backrest-Restore pod with pgBackRest configured for full restore\n New PVC for the PGDATA directory (full restores require PGDATA to be empty)\n pgBackRest will restore all files to the pgdata volume mounted to the restore container.\n After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/kube/backrest/full ./post-restore.sh Delta This example uses the backrest/backup example. 
It does not need to be running but a pgBackRest backup is required.\n Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/delta ./run.sh This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n Backrest-Restore pod with pgBackRest configured for full restore\n pgBackRest will restore files missing to the pgdata volume mounted to the restore container.\n After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/kube/backrest/delta ./post-restore.sh Docker Backup Start the example as follows:\n cd $CCPROOT/examples/docker/backrest/backup ./run.sh This will create the following in your Docker environment:\n PostgreSQL container with pgBackRest configured\n Volume for the PGDATA directory\n Volume for the pgbackrest.conf configuration\n Volume for the BackRest Backups and Archives directory\n Examine the /backrestrepo location to view the archive directory and ensure WAL archiving is working.\n You can create a backup using backrest using this command within the container:\n docker exec -it backrest /bin/bash pgbackrest --stanza=db backup --type=full Async Archiving This will create the following in your Docker environment:\n PostgreSQL container with pgBackRest configured\n Volume for the PGDATA directory\n Volume for the pgbackrest.conf configuration\n Volume for the BackRest Backups and Archives directory\n Start the example as follows:\n cd $CCPROOT/examples/docker/backrest/async-archiving ./run.sh Examine the /backrestrepo/HOSTNAME-backups location to view the archive directory and ensure WAL archiving is working.\n Examine the /pgdata/HOSTNAME-spool location to view the transient directory used for async archiving.\n You can create a backup using backrest using this command within the container:\n docker exec -it backrest /bin/bash pgbackrest --stanza=db backup A spooling directory is automatically created in both /pgdata and /pgwal. It is advised to configure pgBackRest to use the spooling location closest to the Write Ahead Log.\n If the PostgreSQL container was created using the XLOGDIR variable, the /pgwal/HOSTNAME-spool directory should be configured in pgbackrest.conf as such: spool-path=/pgwal/HOSTNAME-spool. If WAL resides on PGDATA, use: spool-path=/pgdata/HOSTNAME-spool\n Restore PITR This example uses the backrest/backup example. It should be left running and a pgBackRest backup has been created.\n Start the example as follows:\n cd $CCPROOT/examples/docker/backrest/pitr ./run.sh This will create the following in your Docker environment:\n Backrest-Restore container with pgBackRest configured for PITR restore\n pgBackRest will restore the pgdata volume mounted to the restore container to the point in time specified by the PITR_TARGET environment variable. To get a compliant timestamp, PostgreSQL can be queried using the following SQL:\n psql -U postgres -Atc 'select current_timestamp' After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/docker/backrest/pitr ./post-restore.sh Full This example uses the backrest/backup example. 
It does not need to be running but a pgBackRest backup is required.\n Start the example as follows:\n cd $CCPROOT/examples/docker/backrest/full ./run.sh This will create the following in your Docker environment:\n Backrest-Restore pod with pgBackRest configured for full restore\n New Volume for the PGDATA directory (full restores require PGDATA to be empty)\n pgBackRest will restore all files to the pgdata volume mounted to the restore container.\n After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/docker/backrest/full ./post-restore.sh Delta This example uses the backrest/backup example. It does not need to be running but a pgBackRest backup is required.\n Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/delta ./run.sh This will create the following in your Docker environment:\n Backrest-Restore pod with pgBackRest configured for full restore\n pgBackRest will restore files missing to the pgdata volume mounted to the restore container.\n After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/kube/backrest/delta ./post-restore.sh pgBackRest with SSHD The PostgreSQL and PostgreSQL GIS containers can enable an SSH daemon to allow developers to do DBA tasks on the database server without the need for exec privileges. An administrator who deploys the SSHD enabled PostgreSQL database can specify the authorized public keys for access to the database server.\n In order to activate SSHD in the PostgreSQL containers, the following files need to be mounted to the PostgreSQL container:\n SSH Host keys mounted on the /sshd volume. Three keys are required:\n ssh_host_rsa_key\n ssh_host_ecdsa_key\n ssh_host_ed25519_key\n sshd_config mounted on the /pgconf volume\n authorized_keys mounted on the /pgconf volume\n SSHD can be enabled in the PostgreSQL containers by adding the following line:\n ENABLE_SSHD=true The authorized_keys file is mounted on the /pgconf directory. In order to support using this mount for authentication the following must be set in sshd_config:\n AuthorizedKeysFile /pgconf/authorized_keys StrictModes no For OpenShift deployments, the following configuration needs to be set in sshd_config:\n UsePAM no Docker Start the example as follows:\n cd $CCPROOT/examples/docker/postgres-sshd ./run.sh This will create the following in your Docker environment:\n A volume named pgconf which contains the pgbackrest.conf, pg_hba.conf, postgresql.conf, sshd_config, authorized_keys file\n A volume named sshd containing the SSH Host keys\n postgres-sshd container pgbackrest archive and sshd enabled. An initial stanza db will be created on initialization\n After running the example, SSH to the container using the forwarded port 2022:\n ssh -i ./keys/id_rsa -p 2022 postgres@0.0.0.0 Kubernetes / OpenShift Start the example as follows:\n cd $CCPROOT/examples/kube/postgres-sshd ./run.sh This will create the following in your Kubernetes environment:\n A configMap named pgconf which contains the pgbackrest.conf, pg_hba.conf, postgresql.conf, sshd_config, authorized_keys file\n A secret named sshd-secrets containing the SSH Host keys\n postgres-sshd pod with pgbackrest archive and sshd enabled. 
An initial stanza db will be created on initialization\n postgres-sshd service with port 2022 for SSH\n After running the example, SSH to the service using the postgres-sshd service available in Kubernetes:\n ssh -i ./keys/id_rsa -p 2022 postgres@postgres-sshd Using pgBackrest via SSH If a pgbackrest.conf file is located on the /pgconf volume and archiving is enabled, it\u0026#8217;s possible to run backups using the pgBackrest utility.\n With the SSHD service running, the following command will issue a pgBackrest backup.\n ssh -i ./keys/id_rsa -p 2022 postgres@postgres-sshd pgbackrest --stanza=db backup To list all the available pgBackrest backups, run the following:\n ssh -i ./keys/id_rsa -p 2022 postgres@postgres-sshd pgbackrest info pg_basebackup The script assumes you are going to backup the primary container created in the first example, so you need to ensure that container is running. This example assumes you have configured storage as described in the Storage Configuration documentation. Things to point out with this example include its use of persistent volumes and volume claims to store the backup data files.\n A successful backup will perform pg_basebackup on the primary container and store the backup in the $CCP_STORAGE_PATH volume under a directory named primary-backups. Each backup will be stored in a subdirectory with a timestamp as the name, allowing any number of backups to be kept.\n The backup script will do the following:\n Start up a backup container named backup\n Run pg_basebackup on the container named primary\n Store the backup in the /tmp/backups/primary-backups directory\n Exit after the backup\n When you are ready to restore from the backup, the restore example runs a PostgreSQL container using the backup location. Upon initialization, the container will use rsync to copy the backup data to this new container and then launch PostgreSQL using the original backed-up data.\n The restore script will do the following:\n Start up a container named restore\n Copy the backup files from the previous backup example into /pgdata\n Start up the container using the backup files\n Map the PostgreSQL port of 5432 in the container to your local host port of 12001\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the backup with this command:\n cd $CCPROOT/examples/docker/backup ./run.sh When you\u0026#8217;re ready to restore, a restore example is provided.\n It\u0026#8217;s required to specify a backup path for this example. To get the correct path check the backup job logs or a timestamp:\n docker logs backup-vpk9l | grep BACKUP_PATH Wed May 9 20:32:00 UTC 2018 INFO: BACKUP_PATH is set to /pgdata/primary-backups/2018-05-09-20-32-00. BACKUP_PATH can also be discovered by looking at the backup mount directly (if access to the storage is available to the user).\n An example of BACKUP_PATH is as follows:\n \"name\": \"BACKUP_PATH\", \"value\": \"primary-backups/2018-05-09-20-32-00\" When you are ready to restore from the backup created, run the following example:\n cd $CCPROOT/examples/docker/restore ./run.sh Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/backup ./run.sh The Kubernetes Job type executes a pod and then the pod exits. You can view the Job status using this command:\n ${CCP_CLI} get job When you\u0026#8217;re ready to restore, a restore example is provided.\n It\u0026#8217;s required to specify a backup path for this example. 
To get the correct path check the backup job logs or a timestamp:\n kubectl logs backup-vpk9l | grep BACKUP_PATH Wed May 9 20:32:00 UTC 2018 INFO: BACKUP_PATH is set to /pgdata/primary-backups/2018-05-09-20-32-00. BACKUP_PATH can also be discovered by looking at the backup mount directly (if access to the storage is available to the user).\n An example of BACKUP_PATH defined as a variable within the JSON script is as follows:\n \"name\": \"BACKUP_PATH\", \"value\": \"primary-backups/2018-05-09-20-32-00\" Running the example:\n cd $CCPROOT/examples/kube/restore ./run.sh Test the restored database as follows:\n psql -h restore -U postgres postgres Point in Time Recovery (PITR) PITR (point-in-time-recovery) is a feature that allows for recreating a database from backup and log files at a certain point in time. This is done using a write ahead log (WAL) which is kept in the pg_wal directory within PGDATA. Changes made to the database files over time are recorded in these log files, which allows it to be used for disaster recovery purposes.\n When using PITR as a backup method, in order to restore from the last checkpoint in the event of a database or system failure, it is only necessary to save these log files plus a full backup. This provides an additional advantage in that it is not necessary to keep multiple full backups on hand, which consume space and time to create. This is because point in time recovery allows you to \"replay\" the log files and recover your database to any point since the last full backup.\n More detailed information about Write Ahead Log (WAL) archiving can be found here.\n By default in the crunchy-postgres container, WAL logging is not enabled. To enable WAL logging outside of this example, set the following environment variables when starting the crunchy-postgres container:\n ARCHIVE_MODE=on ARCHIVE_TIMEOUT=60 These variables set the same name settings within the postgresql.conf file that is used by the database. When set, WAL files generated by the database will be written out to the /pgwal mount point.\n A full backup is required to do a PITR. crunchy-backup currently performs this role within the example, running a pg_basebackup on the database. This is a requirement for PITR. After a backup is performed, code is added into crunchy-postgres which will also check to see if you want to do a PITR.\n There are three volume mounts used with the PITR example.\n /recover - When specified within a crunchy-postgres container, PITR is activated during container startup.\n /backup - This is used to find the base backup you want to recover from.\n /pgwal - This volume is used to write out new WAL files from the newly restored database container.\n Some environment variables used to manipulate the point in time recovery logic:\n The RECOVERY_TARGET_NAME environment variable is used to tell the PITR logic what the name of the target is.\n RECOVERY_TARGET_TIME is also an optional environment variable that restores using a known time stamp.\n If you don\u0026#8217;t specify either of these environment variables, then the PITR logic will assume you want to restore using all the WAL files or essentially the last known recovery point.\n The RECOVERY_TARGET_INCLUSIVE environment variable is also available to let you control the setting of the recovery.conf setting recovery_target_inclusive. If you do not set this environment variable the default is true.\n Once you recover a database using PITR, it will be in read-only mode. 
To make the database resume as a writable database, run the following SQL command:\n postgres=# select pg_wal_replay_resume(); If you\u0026#8217;re running the PITR example for PostgreSQL versions 9.5 or 9.6, please note that starting in PostgreSQL version 10, the pg_xlog directory was renamed to pg_wal. Additionally, all usages of the function pg_xlog_replay_resume were changed to pg_wal_replay_resume.\n It takes about 1 minute for the database to become ready for use after initially starting.\n WAL segment files are written to the /tmp directory. Leaving the example running for a long time could fill up your /tmp directory.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Create a database container as follows:\n cd $CCPROOT/examples/docker/pitr ./run-pitr.sh Next, we will create a base backup of that database using this:\n ./run-backup-pitr.sh After creating the base backup of the database, WAL segment files are created every 60 seconds that contain any database changes. These segments are stored in the /tmp/pitr/pitr/pg_wal directory.\n Next, create some recovery targets within the database by running the SQL commands against the pitr database as follows:\n ./run-sql.sh This will create recovery targets named beforechanges, afterchanges, and nomorechanges. It will create a table, pitrtest, between the beforechanges and afterchanges targets. It will also run a SQL CHECKPOINT to flush out the changes to WAL segments. These labels can be used to mark the points in the recovery process that will be referenced when creating the restored database.\n Next, now that we have a base backup and a set of WAL files containing our database changes, we can shut down the pitr database to simulate a database failure. Do this by running the following:\n docker stop pitr Next, let\u0026#8217;s edit the restore script to use the base backup files created in the step above. You can view the backup path name under the /tmp/backups/pitr-backups/ directory. You will see another directory inside of this path with a name similar to 2018-03-21-21-03-29. Copy and paste that value into the run-restore-pitr.sh script in the BACKUP environment variable.\n After that, run the script.\n vi ./run-restore-pitr.sh ./run-restore-pitr.sh The WAL segments are read and applied when restoring from the database backup. At this point, you should be able to verify that the database was restored to the point before creating the test table:\n psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'table pitrtest' This SQL command should show that the pitrtest table does not exist at this recovery time. The output should be similar to:\n ERROR: relation \"pitrtest\" does not exist PostgreSQL allows you to pause the recovery process if the target name or time is specified. This pause would allow a DBA a chance to review the recovery time/name and see if this is what they want or expect. If so, the DBA can run the following command to resume and complete the recovery:\n psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'select pg_wal_replay_resume()' Until you run the statement above, the database will be left in read-only mode.\n Next, run the script to restore the database to the afterchanges restore point. 
Update the RECOVERY_TARGET_NAME to afterchanges:\n vi ./run-restore-pitr.sh ./run-restore-pitr.sh After this restore, you should be able to see the test table:\n psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'table pitrtest' psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'select pg_wal_replay_resume()' Lastly, start a recovery using all of the WAL files. This will get the restored database as current as possible. To do so, edit the script to remove the RECOVERY_TARGET_NAME environment setting completely:\n ./run-restore-pitr.sh sleep 30 psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'table pitrtest' psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'create table foo (id int)' At this point, you should be able to create new data in the restored database and the test table should be present. When you recover the entire WAL history, resuming the recovery is not necessary to enable writes.\n Kubernetes and OpenShift Start by running the example database container:\n cd $CCPROOT/examples/kube/pitr ./run-pitr.sh This step will create a database container, pitr. This container is configured to continuously write WAL segment files to a mounted volume (/pgwal).\n After you start the database, you will create a base backup using this command:\n ./run-backup-pitr.sh This will create a backup and write the backup files to a persistent volume (/pgbackup).\n Next, create some recovery targets within the database by running the SQL commands against the pitr database as follows:\n ./run-sql.sh This will create recovery targets named beforechanges, afterchanges, and nomorechanges. It will create a table, pitrtest, between the beforechanges and afterchanges targets. It will also run a SQL CHECKPOINT to flush out the changes to WAL segments.\n Next, now that we have a base backup and a set of WAL files containing our database changes, we can shut down the pitr database to simulate a database failure. Do this by running the following:\n ${CCP_CLI} delete pod pitr Next, we will create 3 different restored database containers based upon the base backup and the saved WAL files.\n First, get the BACKUP_PATH created by the backup-pitr example by viewing the pod logs:\n ${CCP_CLI} logs backup-pitr-8sfkh | grep PATH Thu May 10 18:07:58 UTC 2018 INFO: BACKUP_PATH is set to /pgdata/pitr-backups/2018-05-10-18-07-58. Edit the restore-pitr.json file and change the BACKUP_PATH environment variable using the path discovered above (note: /pgdata/ is not required and should be excluded in the variable):\n { \"name\": \"BACKUP_PATH\", \"value\": \"pitr-backups/2018-05-10-18-07-58\" } Next, we restore prior to the beforechanges recovery target. This recovery point is before the pitrtest table is created.\n Edit the restore-pitr.json file, and edit the environment variable to indicate we want to use the beforechanges recovery point:\n { \"name\": \"RECOVERY_TARGET_NAME\", \"value\": \"beforechanges\" } Then run the following to create the restored database container:\n ./run-restore-pitr.sh After the database has been restored, you should be able to perform a test to see if the recovery worked as expected:\n psql -h restore-pitr -U postgres postgres -c 'table pitrtest' psql -h restore-pitr -U postgres postgres -c 'create table foo (id int)' The output of these commands should show that the pitrtest table is not present. 
It should also show that you can not create a new table because the database is paused in read-only mode.\n To make the database resume as a writable database, run the following SQL command:\n select pg_wal_replay_resume(); It should then be possible to write to the database:\n psql -h restore-pitr -U postgres postgres -c 'create table foo (id int)' You can also test that if afterchanges is specified, that the pitrtest table is present but that the database is still in recovery mode.\n Lastly, you can test a full recovery using all of the WAL files, if you remove the RECOVERY_TARGET_NAME environment variable completely.\n The storage portions of this example can all be found under $CCP_STORAGE_PATH.\n Connection Pooling pgBouncer Crunchy pgBouncer is a lightweight connection pooler for PostgreSQL databases.\n The following examples create the following containers: * pgBouncer Primary * pgBouncer Replica * PostgreSQL Primary * PostgreSQL Replica\n In Kubernetes and OpenShift, this example will also create: * pgBouncer Primary Service * pgBouncer Replica Service * Primary Service * Replica Service * PostgreSQL Secrets * pgBouncer Secrets\n To cleanup the objects created by this example, run the following in the pgbouncer example directory:\n ./cleanup.sh For more information on pgBouncer, see the official website.\n Docker Run the pgbouncer example:\n cd $CCPROOT/examples/docker/pgbouncer ./run.sh Once all containers have deployed and are ready for use, psql to the target databases through pgBouncer:\n psql -d userdb -h 0.0.0.0 -p 6432 -U testuser psql -d userdb -h 0.0.0.0 -p 6433 -U testuser To connect to the administration database within pgbouncer, connect using psql:\n psql -d pgbouncer -h 0.0.0.0 -p 6432 -U pgbouncer psql -d pgbouncer -h 0.0.0.0 -p 6433 -U pgbouncer Kubernetes and OpenShift OpenShift: If custom configurations aren\u0026#8217;t being mounted, an emptydir volume is required to be mounted at /pgconf.\n Run the pgbouncer example:\n cd $CCPROOT/examples/kube/pgbouncer ./run.sh Once all containers have deployed and are ready for use, psql to the target databases through pgBouncer:\n psql -d userdb -h pgbouncer-primary -p 6432 -U testuser psql -d userdb -h pgbouncer-replica -p 6432 -U testuser To connect to the administration database within pgbouncer, connect using psql:\n psql -d pgbouncer -h pgbouncer-primary -p 6432 -U pgbouncer -c \"SHOW SERVERS\" psql -d pgbouncer -h pgbouncer-replica -p 6432 -U pgbouncer -c \"SHOW SERVERS\" pgPool II An example is provided that will run a pgPool II container in conjunction with the primary-replica example provided above.\n You can execute both INSERT and SELECT statements after connecting to pgpool. 
The container will direct INSERT statements to the primary and SELECT statements will be sent round-robin to both the primary and replica.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n You can view the nodes that pgpool is configured for by running:\n psql -h pgpool -U testuser userdb -c 'show pool_nodes' To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Create the container as follows:\n cd $CCPROOT/examples/docker/pgpool ./run.sh The example is configured to allow the testuser to connect to the userdb database.\n psql -h localhost -U testuser -p 12003 userdb Kubernetes and OpenShift Run the following command to deploy the pgpool service:\n cd $CCPROOT/examples/kube/pgpool ./run.sh The example is configured to allow the testuser to connect to the userdb database.\n psql -h pgpool -U testuser userdb Database Single Primary This example starts a single PostgreSQL container and service, the most simple of examples.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n For all environments, the script additionally creates:\n A persistent volume claim\n A crunchy-postgres container named primary\n The database using predefined environment variables\n And specifically for the Kubernetes and OpenShift environments:\n A pod named primary\n A service named primary\n A PVC named primary-pgdata\n The database using predefined environment variables\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To create the example and run the container:\n cd $CCPROOT/examples/docker/primary ./run.sh Connect from your local host as follows:\n psql -h localhost -U testuser -W userdb Kubernetes and OpenShift To create the example:\n cd $CCPROOT/examples/kube/primary ./run.sh Connect from your local host as follows:\n psql -h primary -U postgres postgres Helm This example resides under the $CCPROOT/examples/helm directory. View the README to run this example using Helm here.\n PostgreSQL Deployment Starting in release 1.2.8, the PostgreSQL container can accept an environment variable named PGDATA_PATH_OVERRIDE. If set, the /pgdata/subdir path will use a subdirectory name of your choosing instead of the default which is the hostname of the container.\n This example shows how a Deployment of a PostgreSQL primary is supported. A pod is a deployment that uses a hostname generated by Kubernetes; because of this, a new hostname will be defined upon restart of the primary pod.\n For finding the /pgdata that pertains to the pod, you will need to specify a /pgdata/subdir name that never changes. 
This requirement is handled by the PGDATA_PATH_OVERRIDE environment variable.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n This example will create the following in your Kubernetes and OpenShift environments:\n primary-deployment service which uses a PVC to persist PostgreSQL data\n replica-deployment service, uses emptyDir persistence\n primary-deployment deployment of replica count 1 for the primary PostgreSQL database pod\n replica-deployment deployment of replica count 1 for the replica\n replica2-deployment deployment of replica count 1 for the 2nd replica\n ConfigMap to hold a custom postgresql.conf, setup.sql, and pg_hba.conf files\n Secrets for the primary user, superuser, and normal user to hold the passwords\n Volume mount for /backrestrepo and /pgwal\n The persisted data for the PostgreSQL primary is found under /pgdata/primary-deployment. If you delete the primary pod, the deployment will create another pod for the primary and will be able to start up immediately since it works out of the same /pgdata/primary-deployment data directory.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Kubernetes and OpenShift Start the example as follows:\n cd $CCPROOT/examples/kube/primary-deployment ./run.sh Replication This example starts a primary and a replica pod containing a PostgreSQL database.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n For the Docker environment, the script additionally creates:\n A docker volume using the local driver for the primary\n A docker volume using the local driver for the replica\n A container named primary binding to port 12007\n A container named replica binding to port 12008\n A mapping of the PostgreSQL port 5432 within the container to the localhost port 12000\n The database using predefined environment variables\n And specifically for the Kubernetes and OpenShift environments:\n emptyDir volumes for persistence\n A pod named pr-primary\n A pod named pr-replica\n A pod named pr-replica-2\n A service named pr-primary\n A service named pr-replica\n The database using predefined environment variables\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To create the example and run the container:\n cd $CCPROOT/examples/docker/primary-replica ./run.sh Connect from your local host as follows:\n psql -h localhost -p 12007 -U testuser -W userdb psql -h localhost -p 12008 -U testuser -W userdb Docker-Compose Running the example:\n cd $CCPROOT/examples/compose/primary-replica docker-compose up To deploy more than one replica, run the following:\n docker-compose up --scale db-replica=3 To connect to the created database containers, first identify the ports exposed on the containers:\n docker ps Next, using psql, connect to the service:\n psql -d userdb -h 0.0.0.0 -p \u0026lt;CONTAINER_PORT\u0026gt; -U testuser See PG_PASSWORD in docker-compose.yml for the user password.\n To tear down the example, run the following:\n docker-compose stop docker-compose rm Kubernetes and OpenShift Run the following command to deploy a primary and replica database cluster:\n cd $CCPROOT/examples/kube/primary-replica ./run.sh It takes about a minute for the replica to begin replicating with the primary. 
To test out replication, see if replication is underway with this command:\n psql -h pr-primary -U postgres postgres -c 'table pg_stat_replication' If you see a line returned from that query it means the primary is replicating to the replica. Try creating some data on the primary:\n psql -h pr-primary -U postgres postgres -c 'create table foo (id int)' psql -h pr-primary -U postgres postgres -c 'insert into foo values (1)' Then verify that the data is replicated to the replica:\n psql -h pr-replica -U postgres postgres -c 'table foo' primary-replica-dc\n If you wanted to experiment with scaling up the number of replicas, you can run the following example:\n cd $CCPROOT/examples/kube/primary-replica-dc ./run.sh You can verify that replication is working using the same commands as above.\n This example creates 2 replicas when it initially starts. To scale up the number of replicas and view what the deployment looks like before and after, run these commands:\n ${CCP_CLI} get deployment ${CCP_CLI} scale --current-replicas=2 --replicas=3 deployment/replica-dc ${CCP_CLI} get deployment ${CCP_CLI} get pod You can verify that you now have 3 replicas by running this query on the primary:\n psql -h primary-dc -U postgres postgres -c 'table pg_stat_replication' Helm This example resides under the $CCPROOT/examples/helm directory. View the README to run this example using Helm here.\n Synchronous Replication This example deploys a PostgreSQL cluster with a primary, a synchronous replica, and an asynchronous replica. The two replicas share the same service.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To run this example, run the following:\n cd $CCPROOT/examples/docker/sync ./run.sh You can test the replication status on the primary by using the following command and the password password:\n psql -h 127.0.0.1 -p 12010 -U postgres postgres -c 'table pg_stat_replication' You should see 2 rows; 1 for the asynchronous replica and 1 for the synchronous replica. The sync_state column shows values of async or sync.\n You can test replication to the replicas by first entering some data on the primary, and secondly querying the replicas for that data:\n psql -h 127.0.0.1 -p 12010 -U postgres postgres -c 'create table foo (id int)' psql -h 127.0.0.1 -p 12010 -U postgres postgres -c 'insert into foo values (1)' psql -h 127.0.0.1 -p 12011 -U postgres postgres -c 'table foo' psql -h 127.0.0.1 -p 12012 -U postgres postgres -c 'table foo' Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/sync ./run.sh Connect to the primarysync and replicasync databases as follows for both the Kubernetes and OpenShift environments:\n psql -h primarysync -U postgres postgres -c 'create table test (id int)' psql -h primarysync -U postgres postgres -c 'insert into test values (1)' psql -h primarysync -U postgres postgres -c 'table pg_stat_replication' psql -h replicasync -U postgres postgres -c 'select inet_server_addr(), * from test' psql -h replicasync -U postgres postgres -c 'select inet_server_addr(), * from test' psql -h replicasync -U postgres postgres -c 'select inet_server_addr(), * from test' This set of queries will show you the IP address of the PostgreSQL replica container. Note the changing IP address due to the round-robin service proxy being used for both replicas. The example queries also show that both replicas are replicating successfully from the primary.\n Statefulsets This example deploys a statefulset named statefulset. 
The statefulset is a new feature in Kubernetes as of version 1.5 and in OpenShift Origin as of version 3.5. Statefulsets have replaced PetSets going forward.\n Please view this Kubernetes description to better understand what a Statefulset is and how it works.\n This example creates 2 PostgreSQL containers to form the set. At startup, each container will examine its hostname to determine if it is the first container within the set of containers.\n The first container is determined by the hostname suffix assigned by Kubernetes to the pod. This is an ordinal value starting with 0. If a container sees that it has an ordinal value of 0, it will update the container labels to add a new label of:\n name=$PG_PRIMARY_HOST In this example, PG_PRIMARY_HOST is specified as statefulset-primary.\n By default, the containers specify a value of name=statefulset-replica.\n There are 2 services that end user applications will use to access the PostgreSQL cluster, one service (statefulset-primary) routes to the primary container and the other (statefulset-replica) to the replica containers.\n $ ${CCP_CLI} get service NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.96.0.1 \u0026lt;none\u0026gt; 443/TCP 22h statefulset None \u0026lt;none\u0026gt; 5432/TCP 1h statefulset-primary 10.97.168.138 \u0026lt;none\u0026gt; 5432/TCP 1h statefulset-replica 10.97.218.221 \u0026lt;none\u0026gt; 5432/TCP 1h To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Kubernetes and OpenShift Start the example as follows:\n cd $CCPROOT/examples/kube/statefulset ./run.sh You can access the primary database as follows:\n psql -h statefulset-primary -U postgres postgres You can access the replica databases as follows:\n psql -h statefulset-replica -U postgres postgres You can scale the number of containers using this command; this will essentially create an additional replica database.\n ${CCP_CLI} scale --replicas=3 statefulset statefulset Helm This example resides under the $CCPROOT/examples/helm directory. View the README to run this example using Helm here.\n Geospatial (PostGIS) An example is provided that will run a PostgreSQL with PostGIS pod and service in Kubernetes and OpenShift and a container in Docker.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n You can view the extensions that postgres-gis has enabled by running the following command and viewing the listed PostGIS packages:\n psql -h postgres-gis -U testuser userdb -c '\\dx' To validate that PostGIS is installed and which version is running, run the command:\n psql -h postgres-gis -U testuser userdb -c \"SELECT postgis_full_version();\" You should expect to see output similar to:\n postgis_full_version ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- POSTGIS=\"2.4.2 r16113\" PGSQL=\"100\" GEOS=\"3.5.0-CAPI-1.9.0 r4084\" PROJ=\"Rel. 
4.8.0, 6 March 2012\" GDAL=\"GDAL 1.11.4, released 2016/01/25\" LIBXML=\"2.9.1\" LIBJSON=\"0.11\" TOPOLOGY RASTER (1 row) As an exercise for invoking some of the basic PostGIS functionality for validation, try defining a 2D geometry point while giving inputs of longitude and latitude through this command.\n psql -h postgres-gis -U testuser userdb -c \"select ST_MakePoint(28.385200,-81.563900);\" You should expect to see output similar to:\n st_makepoint -------------------------------------------- 0101000000516B9A779C623C40B98D06F0166454C0 (1 row) To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Create the container as follows:\n cd $CCPROOT/examples/docker/postgres-gis ./run.sh Enter the following command to connect to the postgres-gis container that is mapped to your local port 12000:\n psql -h localhost -U testuser -p 12000 userdb Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/postgres-gis ./run.sh Custom Configuration You can use your own version of the SQL file setup.sql to customize the initialization of database data and objects when the container and database are created.\n This works by placing a file named setup.sql within the /pgconf mounted volume directory. Portions of the setup.sql file are required for the container to work; please see comments within the sample setup.sql file.\n If you mount a /pgconf volume, crunchy-postgres will look at that directory for postgresql.conf, pg_hba.conf, pg_ident.conf, SSL server/ca certificates and setup.sql. If it finds one of them it will use that file instead of the default files.\n Docker This example can be run as follows for the Docker environment:\n cd $CCPROOT/examples/docker/custom-config ./run.sh Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/custom-config ./run.sh SSL Authentication This example shows how you can configure PostgreSQL to use SSL for client authentication.\n The example requires SSL certificates and keys to be created. Included in the examples directory is a script to create self-signed certificates (server and client) for the example: $CCPROOT/examples/ssl-creator.sh.\n The example creates a client certificate for the user testuser. Furthermore, the server certificate is created for the server name custom-config-ssl.\n If as a client it\u0026#8217;s required to confirm the identity of the server, verify-full can be specified in the connection string. 
This will check if the server and the server certificate have the same name:\n psql \"postgresql://custom-config-ssl:5432/postgres?sslmode=verify-full\" -U testuser To connect via IP, sslmode can be changed to require.\n psql \"postgresql://IP_OF_PGSQL:5432/postgres?sslmode=require\" -U testuser This example can be run as follows for the Docker environment:\n cd $CCPROOT/examples/docker/custom-config-ssl ./run.sh And the example can be run in the following directory for the Kubernetes and OpenShift environments:\n cd $CCPROOT/examples/kube/custom-config-ssl ./run.sh A required step to make this example work is to define in your /etc/hosts file an entry that maps custom-config-ssl to the service IP address for the container.\n For instance, if your service has an address as follows:\n ${CCP_CLI} get service NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE custom-config-ssl 172.30.211.108 \u0026lt;none\u0026gt; 5432/TCP Then your /etc/hosts file needs an entry like this:\n 172.30.211.108 custom-config-ssl For production Kubernetes and OpenShift installations, it will likely be preferred for DNS names to resolve to the PostgreSQL service name and generate server certificates using the DNS names instead of the example name custom-config-ssl.\n In order to connect via certificate, environment variables must be set that point to the client certificates. Source the env.sh file to set environment variables for the example:\n source env.sh If as a client it\u0026#8217;s required to confirm the identity of the server, verify-full can be specified in the connection string. This will check if the server and the server certificate have the same name:\n psql \"postgresql://custom-config-ssl:5432/userdb?sslmode=verify-full\" -U testuser To connect via IP, sslmode can be changed to require.\n psql \"postgresql://IP_OF_PGSQL:5432/userdb?sslmode=require\" -U testuser You should see a connection that looks like the following:\n psql (10.5) SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) Type \"help\" for help. userdb=\u0026gt; Docker Swarm This example shows how to run a primary and replica database container on a Docker Swarm (v.1.12) cluster.\n First, set up a cluster. The Kubernetes libvirt coreos cluster example works well; see coreos-libvirt-cluster.\n Next, on each node, create the Swarm using these Swarm Install instructions.\n Include this command on the manager node:\n docker swarm init --advertise-addr 192.168.10.1 Then this command on all the worker nodes:\n docker swarm join \\ --token SWMTKN-1-65cn5wa1qv76l8l45uvlsbprogyhlprjpn27p1qxjwqmncn37o-015egopg4jhtbmlu04faon82u \\ 192.168.10.1.37 Before creating Swarm services, it is necessary to define an overlay network to be used by the services you will create. This can be done as follows:\n docker network create --driver overlay crunchynet We want to have the primary database always placed on a specific node. This is accomplished using node constraints as follows:\n docker node inspect kubernetes-node-1 | grep ID docker node update --label-add type=primary 18yrb7m650umx738rtevojpqy In the above example, the kubernetes-node-1 node with ID 18yrb7m650umx738rtevojpqy has a user-defined label of type=primary added to it. The primary service specifies this label as a constraint when created; this tells Swarm to place the service on that specific node. 
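For reference, this kind of placement is expressed through the --constraint flag when a Swarm service is created. The following is only an illustrative sketch (the service name, image, and other options here are assumptions; the run.sh script in the example below defines the real service):\n docker service create --name primary --network crunchynet --constraint 'node.labels.type == primary' crunchy-postgres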
The replica specifies a constraint of node.labels.type != primary to have the replica always placed on a node that is not hosting the primary service.\n Docker After you set up the Swarm cluster, you can then run this example as follows on the Swarm Manager Node:\n cd $CCPROOT/examples/docker/swarm-service ./run.sh You can then find the nodes that are running the primary and replica containers by:\n docker service ps primary docker service ps replica You can also scale up the number of replica containers.\n docker service scale replica=2 docker service ls Verify you have two replicas within PostgreSQL by viewing the pg_stat_replication table. The password is password by default when logged into the kubernetes-node-1 host:\n docker exec -it $(docker ps -q) psql -U postgres -c 'table pg_stat_replication' postgres You should see a row for each replica along with its replication status.\n Failover Watch This example shows how to run the crunchy-watch container to perform an automated failover. For the example to work, the host on which you are running needs to allow read-write access to /run/docker.sock. The crunchy-watch container runs as the postgres user, so adjust the file permissions of /run/docker.sock accordingly.\n The primary-replica example is required to be run before this example.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the example as follows:\n cd $CCPROOT/examples/docker/watch ./run.sh This will start the watch container which tests every few seconds whether the primary database is running, if not, it will trigger a failover using docker exec on the replica host.\n Test it out by stopping the primary:\n docker stop primary docker logs watch Look at the watch container logs to see it perform the failover.\n Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/watch ./run.sh Check out the log of the watch container as follows:\n ${CCP_CLI} log watch Then trigger a failover using this command:\n ${CCP_CLI} delete pod pr-primary Resume watching the watch container\u0026#8217;s log and verify that it detects the primary is not reachable and performs a failover on the replica.\n A final test is to see if the old replica is now a fully functioning primary by inserting some test data into it as follows:\n psql -h pr-primary -U postgres postgres -c 'create table failtest (id int)' The above command still works because the watch container has changed the labels of the replica to make it a primary, so the primary service will still work and route now to the new primary even though the pod is named replica.\n You can view the labels on a pod with this command:\n ${CCP_CLI} describe pod pr-replica | grep Label Metrics and Performance pgBadger pgbadger is a PostgreSQL tool that reads the log files from a specified database in order to produce a HTML report that shows various PostgreSQL statistics and graphs. This example runs the pgbadger HTTP server against a crunchy-postgres container and illustrates how to view the generated reports.\n The port utilized for this tool is port 14000 for Docker environments and port 10000 for Kubernetes and OpenShift environments.\n A requirement to build this container from source is golang. 
On RHEL 7.2, golang is found in the 'server optional' repository, which needs to be enabled in order to install this dependency.\n sudo subscription-manager repos --enable=rhel-7-server-optional-rpms The container creates a default database called userdb, a default user called testuser and a default password of password.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the example as follows:\n cd $CCPROOT/examples/docker/pgbadger ./run.sh After execution, the container will run and provide a simple HTTP endpoint you can browse to view the report. As you run queries against the database, you can invoke this URL to generate updated reports:\n curl -L http://127.0.0.1:14000/api/badgergenerate Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/pgbadger ./run.sh After execution, the container will run and provide a simple HTTP endpoint you can browse to view the report. As you run queries against the database, you can invoke this URL to generate updated reports:\n curl -L http://badger:10000/api/badgergenerate You can view the database container logs using these commands:\n ${CCP_CLI} logs pgbadger -c pgbadger ${CCP_CLI} logs pgbadger -c postgres Metrics Collection You can collect various PostgreSQL metrics from your database container by running a crunchy-collect container that points to your database container.\n This example starts up the following containers:\n Collect (crunchy-collect)\n Grafana (crunchy-grafana)\n PostgreSQL (crunchy-postgres)\n Prometheus (crunchy-prometheus)\n Every 5 seconds by default, Prometheus will scrape the Collect container for metrics. These metrics will then be visualized by Grafana.\n By default, Prometheus detects which environment it\u0026#8217;s running on (Docker, Kubernetes, or OpenShift Container Platform) and applies a default configuration. If this container is running on Kubernetes or OpenShift Container Platform, it will use the Kubernetes API to discover pods with the label \"crunchy-collect\": \"true\".\n The collect container must have this label to be discovered in these environments. Additionally, the collect container uses a special PostgreSQL role ccp_monitoring. This user is created by setting the PGMONITOR_PASSWORD environment variable on the PostgreSQL container.\n Discovering pods requires a cluster role service account. See the Kubernetes and OpenShift metrics JSON file for more details.\n For Docker environments the collect hostname must be specified as an environment variable.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To start this set of containers, run the following:\n cd $CCPROOT/examples/docker/metrics ./run.sh You will be able to access the Grafana and Prometheus services from the following web addresses:\n Grafana (http://0.0.0.0:3000)\n Prometheus (http://0.0.0.0:9090)\n The crunchy-postgres container is accessible on port 5432.\n Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/metrics ./run.sh It\u0026#8217;s required to use port-forward to access the Grafana dashboard. 
To start the port-forward, run the following command:\n ${CCP_CLI} port-forward metrics 3000:3000 ${CCP_CLI} port-forward metrics 9090:9090 The Grafana dashboard can then be accessed at http://127.0.0.1:3000\n The Prometheus dashboard can then be accessed at http://127.0.0.1:9090\n You can view the container logs using these commands:\n ${CCP_CLI} logs -c grafana metrics ${CCP_CLI} logs -c prometheus metrics ${CCP_CLI} logs -c collect primary ${CCP_CLI} logs -c postgres primary ${CCP_CLI} logs -c collect replica ${CCP_CLI} logs -c postgres replica pg_audit This example demonstrates how to enable pg_audit output. As of release 1.3, pg_audit is included in the crunchy-postgres container and is added to the PostgreSQL shared library list in postgresql.conf.\n Given the numerous ways pg_audit can be configured, the exact pg_audit configuration is left to the user to define. pg_audit allows you to configure auditing rules either in postgresql.conf or within your SQL script.\n For this test, we place pg_audit statements within a SQL script and verify that auditing is enabled and working. If you choose to configure pg_audit via a postgresql.conf file, then you will need to define your own custom file and mount it to override the default postgresql.conf file.\n Docker Run the following to create a database container:\n cd $CCPROOT/examples/docker/pgaudit ./run.sh This starts an instance of the pg_audit container (running crunchy-postgres) on port 12005 on localhost. The test script is then automatically executed.\n This test executes a SQL file which contains pg_audit configuration statements as well as some basic SQL commands. These SQL commands will cause pg_audit to create log messages in the pg_log log file created by the database container.\n Kubernetes and OpenShift Run the following:\n cd $CCPROOT/examples/kube/pgaudit ./run.sh The script will create the pg_audit pod (running the crunchy-postgres container) on the Kubernetes instance and then execute a SQL file which contains pg_audit configuration statements as well as some basic SQL commands. These SQL commands will cause pg_audit to create log messages in the pg_log file created by the database container.\n "
+ "content": "Table of Contents Getting Started Example Conventions Administration Password Management Kubernetes Secrets pgAdmin4 pgAdmin4 with TLS Upgrade Cron Scheduler Vacuum systemd Backup and Restoration pg_dump pg_restore pgBackRest pgBackRest with SSHD pg_basebackup Point in Time Recovery (PITR) Connection Pooling pgBouncer pgPool II Database Single Primary PostgreSQL Deployment Replication Synchronous Replication Statefulsets Geospatial (PostGIS) Custom Configuration SSL Authentication Docker Swarm Failover Watch Metrics and Performance pgBadger Metrics Collection pg_audit Latest Release: 2.1.0 2018-08-13\n Getting Started The examples located in the kube directory work on both Kubernetes and OpenShift. Ensure the CCP_CLI environment variable is set to the correct binary for your environment.\n Set the environment variable in .bashrc to ensure the examples will work in your environment.\n # Kubernetes export CCP_CLI=kubectl # OpenShift export CCP_CLI=oc Here are some useful resources for finding the right commands to troubleshoot \u0026amp; modify containers in the various environments shown in this guide:\n Docker Cheat Sheet\n Kubectl Cheat Sheet\n OpenShift Cheat Sheet\n Helm Cheat Sheet\n Example Conventions The examples provided in Container Suite are simple examples that are meant to demonstrate key Container Suite features. These examples can be used to build more production level deployments as dictated by user requirements specific to their operating environments.\n The examples generally follow these conventions:\n There is a run.sh script that you will execute to start the example.\n There is a cleanup.sh script that you will execute to shutdown and cleanup the example.\n Each example will create resources such as Secrets, ConfigMaps, Services, and PersistentVolumeClaims, all which follow a naming convention of \u0026lt;example name\u0026gt;-\u0026lt;optional description suffix\u0026gt;. For example an example called primary might have a PersistentVolumeClaim called primary-pgconf to describe the purpose of that particular PVC.\n The folder names for each example give a clue as to which Container Suite feature it demonstrates. For instance, the examples/kube/pgaudit example demonstrates how to enable the pg_audit capability of the crunchy-postgres container.\n Administration Password Management The passwords used for the PostgreSQL user accounts are generated by the OpenShift process command. To inspect what value is supplied, you can inspect the primary pod as follows:\n ${CCP_CLI} get pod pr-primary -o json | grep -C 1 'PG_USER\\|PG_PASSWORD\\|PG_DATABASE' This will give you the environment variable values for the database created by default in addition to the username and password of the standard user.\n PG_USER\n PG_PASSWORD\n PG_DATABASE\n Kubernetes Secrets You can use Kubernetes Secrets to set and maintain your database credentials. Secrets requires you base64 encode your user and password values as follows:\n echo -n 'myuserid' | base64 You will paste these values into your JSON secrets files for values.\n This example allows you to set the PostgreSQL passwords using Kubernetes Secrets.\n The secret uses a base64 encoded string to represent the values to be read by the container during initialization. The encoded password value is password. Run the example as follows:\n cd $CCPROOT/examples/kube/secret ./run.sh The secrets are mounted in the /pguser, /pgprimary, and /pgroot volumes within the container and read during initialization. 
The container scripts create a PostgreSQL user with those values, and sets the passwords for the primary user and PostgreSQL superuser using the mounted secret volumes.\n When using secrets, you do NOT have to specify the following environment variables if you specify all three secrets volumes:\n PG_USER\n PG_PASSWORD\n PG_ROOT_PASSWORD\n PG_PRIMARY_USER\n PG_PRIMARY_PASSWORD\n You can test the container as follows. In all cases, the password is password:\n psql -h secret -U pguser1 postgres psql -h secret -U postgres postgres psql -h secret -U primaryuser postgres pgAdmin4 This example deploys the pgadmin4 v2 web user interface for PostgreSQL without TLS.\n After running the example, you should be able to browse to http://127.0.0.1:5050 and log into the web application using a user ID of admin@admin.com and password of password.\n If you are running this example using Kubernetes or OpenShift, replace 127.0.0.1:5050 with the \u0026lt;NODE_IP\u0026gt;:30000.\n To get the node IP, run the following:\n ${CCP_CLI} describe pod pgadmin4-http | grep Node: See the pgAdmin4 documentation for more details.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To run this example, run the following:\n cd $CCPROOT/examples/docker/pgadmin4-http ./run.sh Kubernetes and OpenShift Start the container as follows:\n cd $CCPROOT/examples/kube/pgadmin4-http ./run.sh An emptyDir with write access must be mounted to the /run/httpd directory in OpenShift.\n pgAdmin4 with TLS This example deploys the pgadmin4 v2 web user interface for PostgreSQL with TLS.\n After running the example, you should be able to browse to https://127.0.0.1:5050 and log into the web application using a user ID of admin@admin.com and password of password.\n If you are running this example using Kubernetes or OpenShift, replace 127.0.0.1:5050 with the \u0026lt;NODE_IP\u0026gt;:30000.\n To get the node IP, run the following:\n ${CCP_CLI} describe pod pgadmin4-https | grep Node: See the pgadmin4 documentation for more details.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To run this example, run the following:\n cd $CCPROOT/examples/docker/pgadmin4-https ./run.sh Kubernetes and OpenShift Start the container as follows:\n cd $CCPROOT/examples/kube/pgadmin4-https ./run.sh An emptyDir with write access must be mounted to the /run/httpd directory in OpenShift.\n Upgrade This example assumes you have run primary using a PG 9.5 or 9.6 image such as centos7-9.5.14-2.1.0 prior to running this upgrade.\n Starting in release 1.3.1, the upgrade container will let you perform a pg_upgrade either from a PostgreSQL version 9.5 database to 9.6 or from 9.6 to 10.\n Prior to running this example, make sure your CCP_IMAGE_TAG environment variable is using the next major version of PostgreSQL that you want to upgrade to. For example, if you\u0026#8217;re upgrading from 9.5 to 9.6, make sure the variable references a PG 9.6 image such as centos7-9.6.10-2.1.0.\n This will create the following in your Kubernetes environment:\n a Kubernetes Job running the crunchy-upgrade container\n a new data directory name upgrade found in the pgnewdata PVC\n Data checksums on the Crunchy PostgreSQL container were enabled by default in version 2.1.0. When trying to upgrade, it\u0026#8217;s required that both the old database and the new database have the same data checksums setting. 
Prior to upgrade, check if data_checksums were enabled on the database by running the following SQL: SHOW data_checksums\n Kubernetes and OpenShift Before running the example, ensure you edit upgrade.json and update the OLD_VERSION and NEW_VERSION parameters to the ones relevant to your situation.\n Start the upgrade as follows:\n cd $CCPROOT/examples/kube/upgrade ./run.sh If successful, the Job will end with a successful status. Verify the results of the Job by examining the Job\u0026#8217;s pod log:\n ${CCP_CLI} get pod -l job-name=upgrade ${CCP_CLI} logs -l job-name=upgrade You can verify the upgraded database by running the post-upgrade.sh script in the examples/kube/upgrade directory. This will create a PostgreSQL pod that mounts the upgraded volume.\n Cron Scheduler The crunchy-dba container implements a cron scheduler. The purpose of the crunchy-dba container is to offer a way to perform simple DBA tasks that occur on some form of schedule such as backup jobs or running a vacuum on a single PostgreSQL database container (such as the primary example).\n You can either run the crunchy-dba container as a single pod or include the container within a database pod.\n The crunchy-dba container makes use of a Service Account to perform the startup of scheduled jobs. The Kubernetes Job type is used to execute the scheduled jobs with a Restart policy of Never.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Kubernetes and OpenShift The script to schedule vacuum on a regular schedule is executed through the following commands:\n # Kubernetes cd $CCPROOT/examples/kube/dba ./run-kube-vac.sh # OpenShift cd $CCPROOT/examples/kube/dba ./run-ocp-vac.sh To run the script for scheduled backups, run the following in the same directory:\n # Kubernetes cd $CCPROOT/examples/kube/dba ./run-kube-backup.sh # OpenShift cd $CCPROOT/examples/kube/dba ./run-ocp-backup.sh Individual parameters for both can be modified within their respective JSON files; please see the Container Specifications document for a full list of what can be modified.\n Vacuum You can perform a PostgreSQL vacuum command by running the crunchy-vacuum container. You specify a database to vacuum using environment variables. By default, vacuum is executed against the primary example container.\n The crunchy-vacuum container image exists to allow a DBA a way to run a job either individually or scheduled to perform a variety of vacuum operations.\n This example performs a vacuum on a single table in the primary PostgreSQL database. The crunchy-vacuum image is executed with the PostgreSQL connection parameters to the single-primary PostgreSQL container. The type of vacuum performed is dictated by the environment variables passed into the job; these are defined with further detail here.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the example as follows:\n cd $CCPROOT/examples/docker/vacuum ./run.sh Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/vacuum/ ./run.sh Verify the job is completed:\n ${CCP_CLI} get job systemd The crunchy-pg.service is an example of a systemd unit file that starts and stops a container named crunchy-pg that has already been created.\n The example scripts are located in the following directory:\n $CCPROOT/examples/systemd/ There are two scripts within the directory.\n test-start.sh This script is called by the systemd start execution. 
The trick with this script is that it blocks forever after starting the docker crunchy-pg container. The blocking in the script keeps systemd happy and thinking that this is a normal daemon.\n test-stop.sh This script stops the test-start.sh script and also stops the crunchy-pg Docker container.\n Backup and Restoration pg_dump The script assumes you are going to back up the primary example and that container is running.\n This example assumes you have configured a storage filesystem as described in the Storage Configuration document.\n A successful backup will perform pg_dump/pg_dumpall on the primary and store the resulting files in the mounted volume under a sub-directory named \u0026lt;HOSTNAME\u0026gt;-backups, followed by a unique backup directory based upon a date and timestamp - allowing any number of backups to be kept.\n For more information on how to configure this container, please see the Container Specifications document.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the backup with this command:\n cd $CCPROOT/examples/docker/pgdump ./run.sh Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/pgdump ./run.sh The Kubernetes Job type executes a pod and then the pod exits. You can view the Job status using this command:\n ${CCP_CLI} get job The pgdump.json file within that directory specifies options that control the behavior of the pgdump job. Examples of this include whether to run pg_dump vs pg_dumpall and advanced options for specific backup use cases.\n pg_restore The script assumes you are going to restore to the primary example, that the container is running, and that a backup has been created using the pgdump example.\n This example assumes you have configured a storage filesystem as described in the Storage Configuration document.\n Successful use of the crunchy-pgrestore container will run a job to restore files generated by pg_dump/pg_dumpall to a container via psql/pg_restore; the container will then terminate successfully and signal job completion.\n For more information on how to configure this container, please see the Container Specifications document.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the restore with this command:\n cd $CCPROOT/examples/docker/pgrestore ./run.sh Kubernetes and OpenShift By default, the pgrestore container will automatically restore from the most recent backup. If you want to restore to a specific backup, edit the pgrestore.json file and update the PGRESTORE_BACKUP_TIMESTAMP setting to specify the backup path you want to restore with. For example:\n \"name\":\"PGRESTORE_BACKUP_TIMESTAMP\", \"value\":\"2018-03-27-14-35-33\" Running the example:\n cd $CCPROOT/examples/kube/pgrestore ./run.sh The Kubernetes Job type executes a pod and then the pod exits. You can view the Job status using this command:\n ${CCP_CLI} get job The pgrestore.json file within that directory specifies options that control the behavior of the pgrestore job.\n pgBackRest pgbackrest is a utility that performs a backup, restore, and archive function for a PostgreSQL database. pgbackrest is written and maintained by David Steele, and more information can be found on the official website.\n Backups are currently performed by manually executing pgbackrest commands against the desired pod. 
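For example, once the backup example below is running, a manual backup can be taken by executing pgbackrest inside the pod (a sketch; the pod name backrest and the stanza name db match the Kubernetes backup example that follows):\n ${CCP_CLI} exec -it backrest -- pgbackrest --stanza=db backup --type=diff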
Restores can now be performed via the crunchy-backrest-restore container, which offers FULL or DELTA restore capability.\n pgbackrest is configured using a pgbackrest.conf file that is mounted into the crunchy-postgres container at /pgconf.\n If you place a pgbackrest.conf file within this mounted directory, it will trigger the use of pgbackrest within the PostgreSQL container as the archive_command and will turn on the archive_mode to begin archival. It is still required to define the ARCHIVE_TIMEOUT environment variable within your container configuration as it is set to a disabled value of 0 by default.\n The following changes will be made to the container\u0026#8217;s postgresql.conf file:\n ARCHIVE_MODE=on ARCHIVE_TIMEOUT=60 ARCHIVE_COMMAND='pgbackrest --stanza=db archive-push %p' If you are using a crunchy-postgres image older than 1.7.1, archive_command must specify where the pgbackrest.conf file is located:\n ARCHIVE_COMMAND='pgbackrest --config=/pgconf/pgbackrest.conf --stanza=db archive-push %p' This requires you to use a pgbackrest stanza name of db within the pgbackrest.conf file you mount.\n When set, WAL files generated by the database will be written out to the /backrestrepo/HOSTNAME-backups mount point.\n Additionally, the Crunchy Postgres container can templatize pgbackrest.conf files by searching for the HOSTNAME values in a mounted pgbackrest.conf file.\n For example, db-path=/pgdata/HOSTNAME will render to db-path=/pgdata/primary if the container\u0026#8217;s hostname is primary. HOSTNAME will be replaced with the value of PGDATA_PATH_OVERRIDE when working with deployments/replicasets.\n The templating example above works for db-path settings; however, repo-path should follow the convention repo-path=/backrestrepo/HOSTNAME-backups in cases where volumes are being mounted to a single mount point (such as hostPath or NFS). Without the additional -backups, the backups will populate in the pgdata directory.\n Kubernetes and OpenShift Backup Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/backup ./run.sh This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n PostgreSQL pod with pgBackRest configured\n PostgreSQL service\n PVC for the PGDATA directory\n PVC for the BackRest Backups and Archives directory\n Examine the /backrestrepo location to view the archive directory and ensure WAL archiving is working.\n You can create a backup using backrest using this command within the container:\n ${CCP_CLI} exec -it backrest /bin/bash pgbackrest --stanza=db backup --type=full Async Archiving pgBackRest supports asynchronous archiving to pull and push Write Ahead Logs. Asynchronous operation is more efficient because it can reuse connections and take advantage of parallelism. 
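The asynchronous behavior comes from a handful of pgbackrest.conf settings; a minimal sketch of the relevant options is shown here (the values are illustrative only - the configMap shipped with this example carries the real file):\n [global]\n archive-async=y\n spool-path=/pgdata/HOSTNAME-spool\n process-max=2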
For more information on async archiving, see the pgBackRest official documentation.\n This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n PostgreSQL pod with pgBackRest configured and archiving asynchronously.\n PostgreSQL service\n PVC for the PGDATA directory\n PVC for the BackRest Backups and Archives directory\n Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/async-archiving ./run.sh Examine the /backrestrepo/HOSTNAME-backups location to view the archive directory and ensure WAL archiving is working.\n Examine the /pgdata/HOSTNAME-spool location to view the transient directory used for async archiving.\n You can create a backup using backrest using this command within the container:\n ${CCP_CLI} exec -it backrest-async-archive /bin/bash pgbackrest --stanza=db backup A spooling directory is automatically created in both /pgdata and /pgwal. It is advised to configure pgBackRest to use the spooling location closest to the Write Ahead Log.\n If the PostgreSQL container was created using the XLOGDIR variable, the /pgwal/HOSTNAME-spool directory should be configured in pgbackrest.conf as such: spool-path=/pgwal/HOSTNAME-spool. If WAL resides on PGDATA, use: spool-path=/pgdata/HOSTNAME-spool\n Restore There are three options to choose from when performing a restore:\n Delta - only restore missing files from PGDATA\n Full - restore all files, pgdata must be empty\n Point in Time Recovery (PITR) - delta restore to a certain point in time\n PITR This example uses the backrest/backup example. It should be left running and a pgBackRest backup has been created.\n Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/pitr ./run.sh This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n Backrest-Restore pod with pgBackRest configured for PITR restore\n pgBackRest will restore the pgdata volume mounted to the restore container to the point in time specified by the PITR_TARGET environment variable. To get a compliant timestamp, PostgreSQL can be queried using the following SQL:\n psql -U postgres -Atc 'select current_timestamp' After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/kube/backrest/pitr ./post-restore.sh Full This example uses the backrest/backup example. It does not need to be running but a pgBackRest backup is required.\n Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/full ./run.sh This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n Backrest-Restore pod with pgBackRest configured for full restore\n New PVC for the PGDATA directory (full restores require PGDATA to be empty)\n pgBackRest will restore all files to the pgdata volume mounted to the restore container.\n After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/kube/backrest/full ./post-restore.sh Delta This example uses the backrest/backup example. 
It does not need to be running but a pgBackRest backup is required.\n Start the example as follows:\n cd $CCPROOT/examples/kube/backrest/delta ./run.sh This will create the following in your Kubernetes environment:\n configMap containing pgbackrest.conf\n Backrest-Restore pod with pgBackRest configured for delta restore\n pgBackRest will restore missing files to the pgdata volume mounted to the restore container.\n After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/kube/backrest/delta ./post-restore.sh Docker Backup Start the example as follows:\n cd $CCPROOT/examples/docker/backrest/backup ./run.sh This will create the following in your Docker environment:\n PostgreSQL container with pgBackRest configured\n Volume for the PGDATA directory\n Volume for the pgbackrest.conf configuration\n Volume for the BackRest Backups and Archives directory\n Examine the /backrestrepo location to view the archive directory and ensure WAL archiving is working.\n You can create a backup using backrest using this command within the container:\n docker exec -it backrest /bin/bash pgbackrest --stanza=db backup --type=full Async Archiving This will create the following in your Docker environment:\n PostgreSQL container with pgBackRest configured\n Volume for the PGDATA directory\n Volume for the pgbackrest.conf configuration\n Volume for the BackRest Backups and Archives directory\n Start the example as follows:\n cd $CCPROOT/examples/docker/backrest/async-archiving ./run.sh Examine the /backrestrepo/HOSTNAME-backups location to view the archive directory and ensure WAL archiving is working.\n Examine the /pgdata/HOSTNAME-spool location to view the transient directory used for async archiving.\n You can create a backup using backrest using this command within the container:\n docker exec -it backrest /bin/bash pgbackrest --stanza=db backup A spooling directory is automatically created in both /pgdata and /pgwal. It is advised to configure pgBackRest to use the spooling location closest to the Write Ahead Log.\n If the PostgreSQL container was created using the XLOGDIR variable, the /pgwal/HOSTNAME-spool directory should be configured in pgbackrest.conf as such: spool-path=/pgwal/HOSTNAME-spool. If WAL resides on PGDATA, use: spool-path=/pgdata/HOSTNAME-spool\n Restore PITR This example uses the backrest/backup example. It should be left running and a pgBackRest backup has been created.\n Start the example as follows:\n cd $CCPROOT/examples/docker/backrest/pitr ./run.sh This will create the following in your Docker environment:\n Backrest-Restore container with pgBackRest configured for PITR restore\n pgBackRest will restore the pgdata volume mounted to the restore container to the point in time specified by the PITR_TARGET environment variable. To get a compliant timestamp, PostgreSQL can be queried using the following SQL:\n psql -U postgres -Atc 'select current_timestamp' After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/docker/backrest/pitr ./post-restore.sh Full This example uses the backrest/backup example. 
It does not need to be running but a pgBackRest backup is required.\n Start the example as follows:\n cd $CCPROOT/examples/docker/backrest/full ./run.sh This will create the following in your Docker environment:\n Backrest-Restore pod with pgBackRest configured for full restore\n New Volume for the PGDATA directory (full restores require PGDATA to be empty)\n pgBackRest will restore all files to the pgdata volume mounted to the restore container.\n After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/docker/backrest/full ./post-restore.sh Delta This example uses the backrest/backup example. It does not need to be running but a pgBackRest backup is required.\n Start the example as follows:\n cd $CCPROOT/examples/docker/backrest/delta ./run.sh This will create the following in your Docker environment:\n Backrest-Restore pod with pgBackRest configured for delta restore\n pgBackRest will restore missing files to the pgdata volume mounted to the restore container.\n After a successful restore, run the following to start the restored PostgreSQL container:\n cd $CCPROOT/examples/docker/backrest/delta ./post-restore.sh pgBackRest with SSHD The PostgreSQL and PostgreSQL GIS containers can enable an SSH daemon to allow developers to do DBA tasks on the database server without the need for exec privileges. An administrator who deploys the SSHD-enabled PostgreSQL database can specify the authorized public keys for access to the database server.\n In order to activate SSHD in the PostgreSQL containers, the following files need to be mounted to the PostgreSQL container:\n SSH Host keys mounted on the /sshd volume. Three keys are required:\n ssh_host_rsa_key\n ssh_host_ecdsa_key\n ssh_host_ed25519_key\n sshd_config mounted on the /pgconf volume\n authorized_keys mounted on the /pgconf volume\n SSHD can be enabled in the PostgreSQL containers by adding the following line:\n ENABLE_SSHD=true The authorized_keys file is mounted on the /pgconf directory. In order to support using this mount for authentication, the following must be set in sshd_config:\n AuthorizedKeysFile /pgconf/authorized_keys StrictModes no For OpenShift deployments, the following configuration needs to be set in sshd_config:\n UsePAM no Docker Start the example as follows:\n cd $CCPROOT/examples/docker/postgres-sshd ./run.sh This will create the following in your Docker environment:\n A volume named pgconf which contains the pgbackrest.conf, pg_hba.conf, postgresql.conf, sshd_config, authorized_keys files\n A volume named sshd containing the SSH Host keys\n postgres-sshd container with pgbackrest archive and sshd enabled. An initial stanza db will be created on initialization\n After running the example, SSH to the container using the forwarded port 2022:\n ssh -i ./keys/id_rsa -p 2022 postgres@0.0.0.0 Kubernetes / OpenShift Start the example as follows:\n cd $CCPROOT/examples/kube/postgres-sshd ./run.sh This will create the following in your Kubernetes environment:\n A configMap named pgconf which contains the pgbackrest.conf, pg_hba.conf, postgresql.conf, sshd_config, authorized_keys files\n A secret named sshd-secrets containing the SSH Host keys\n postgres-sshd pod with pgbackrest archive and sshd enabled. 
An initial stanza db will be created on initialization\n postgres-sshd service with port 2022 for SSH\n After running the example, SSH to the service using the postgres-sshd service available in Kubernetes:\n ssh -i ./keys/id_rsa -p 2022 postgres@postgres-sshd Using pgBackrest via SSH If a pgbackrest.conf file is located on the /pgconf volume and archiving is enabled, it\u0026#8217;s possible to run backups using the pgBackrest utility.\n With the SSHD service running, the following command will issue a pgBackrest backup.\n ssh -i ./keys/id_rsa -p 2022 postgres@postgres-sshd pgbackrest --stanza=db backup To list all the available pgBackrest backups, run the following:\n ssh -i ./keys/id_rsa -p 2022 postgres@postgres-sshd pgbackrest info pg_basebackup The script assumes you are going to back up the primary container created in the first example, so you need to ensure that container is running. This example assumes you have configured storage as described in the Storage Configuration documentation. Things to point out with this example include its use of persistent volumes and volume claims to store the backup data files.\n A successful backup will perform pg_basebackup on the primary container and store the backup in the $CCP_STORAGE_PATH volume under a directory named primary-backups. Each backup will be stored in a subdirectory with a timestamp as the name, allowing any number of backups to be kept.\n The backup script will do the following:\n Start up a backup container named backup\n Run pg_basebackup on the container named primary\n Store the backup in the /tmp/backups/primary-backups directory\n Exit after the backup\n When you are ready to restore from the backup, the restore example runs a PostgreSQL container using the backup location. Upon initialization, the container will use rsync to copy the backup data to this new container and then launch PostgreSQL using the original backed-up data.\n The restore script will do the following:\n Start up a container named restore\n Copy the backup files from the previous backup example into /pgdata\n Start up the container using the backup files\n Map the PostgreSQL port of 5432 in the container to your local host port of 12001\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the backup with this command:\n cd $CCPROOT/examples/docker/backup ./run.sh When you\u0026#8217;re ready to restore, a restore example is provided.\n It\u0026#8217;s required to specify a backup path for this example. To get the correct path check the backup job logs or a timestamp:\n docker logs backup-vpk9l | grep BACKUP_PATH Wed May 9 20:32:00 UTC 2018 INFO: BACKUP_PATH is set to /pgdata/primary-backups/2018-05-09-20-32-00. BACKUP_PATH can also be discovered by looking at the backup mount directly (if access to the storage is available to the user).\n An example of BACKUP_PATH is as follows:\n \"name\": \"BACKUP_PATH\", \"value\": \"primary-backups/2018-05-09-20-32-00\" When you are ready to restore from the backup created, run the following example:\n cd $CCPROOT/examples/docker/restore ./run.sh Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/backup ./run.sh The Kubernetes Job type executes a pod and then the pod exits. You can view the Job status using this command:\n ${CCP_CLI} get job When you\u0026#8217;re ready to restore, a restore example is provided.\n It\u0026#8217;s required to specify a backup path for this example. 
To get the correct path check the backup job logs or a timestamp:\n kubectl logs backup-vpk9l | grep BACKUP_PATH Wed May 9 20:32:00 UTC 2018 INFO: BACKUP_PATH is set to /pgdata/primary-backups/2018-05-09-20-32-00. BACKUP_PATH can also be discovered by looking at the backup mount directly (if access to the storage is available to the user).\n An example of BACKUP_PATH defined as a variable within the JSON script is as follows:\n \"name\": \"BACKUP_PATH\", \"value\": \"primary-backups/2018-05-09-20-32-00\" Running the example:\n cd $CCPROOT/examples/kube/restore ./run.sh Test the restored database as follows:\n psql -h restore -U postgres postgres Point in Time Recovery (PITR) PITR (point-in-time-recovery) is a feature that allows for recreating a database from backup and log files at a certain point in time. This is done using a write ahead log (WAL) which is kept in the pg_wal directory within PGDATA. Changes made to the database files over time are recorded in these log files, which allows it to be used for disaster recovery purposes.\n When using PITR as a backup method, in order to restore from the last checkpoint in the event of a database or system failure, it is only necessary to save these log files plus a full backup. This provides an additional advantage in that it is not necessary to keep multiple full backups on hand, which consume space and time to create. This is because point in time recovery allows you to \"replay\" the log files and recover your database to any point since the last full backup.\n More detailed information about Write Ahead Log (WAL) archiving can be found here.\n By default in the crunchy-postgres container, WAL logging is not enabled. To enable WAL logging outside of this example, set the following environment variables when starting the crunchy-postgres container:\n ARCHIVE_MODE=on ARCHIVE_TIMEOUT=60 These variables set the same name settings within the postgresql.conf file that is used by the database. When set, WAL files generated by the database will be written out to the /pgwal mount point.\n A full backup is required to do a PITR. crunchy-backup currently performs this role within the example, running a pg_basebackup on the database. This is a requirement for PITR. After a backup is performed, code is added into crunchy-postgres which will also check to see if you want to do a PITR.\n There are three volume mounts used with the PITR example.\n /recover - When specified within a crunchy-postgres container, PITR is activated during container startup.\n /backup - This is used to find the base backup you want to recover from.\n /pgwal - This volume is used to write out new WAL files from the newly restored database container.\n Some environment variables used to manipulate the point in time recovery logic:\n The RECOVERY_TARGET_NAME environment variable is used to tell the PITR logic what the name of the target is.\n RECOVERY_TARGET_TIME is also an optional environment variable that restores using a known time stamp.\n If you don\u0026#8217;t specify either of these environment variables, then the PITR logic will assume you want to restore using all the WAL files or essentially the last known recovery point.\n The RECOVERY_TARGET_INCLUSIVE environment variable is also available to let you control the setting of the recovery.conf setting recovery_target_inclusive. If you do not set this environment variable the default is true.\n Once you recover a database using PITR, it will be in read-only mode. 
To make the database resume as a writable database, run the following SQL command:\n postgres=# select pg_wal_replay_resume(); If you\u0026#8217;re running the PITR example for PostgreSQL versions 9.5 or 9.6, please note that starting in PostgreSQL version 10, the pg_xlog directory was renamed to pg_wal. Additionally, all usages of the function pg_xlog_replay_resume were changed to pg_wal_replay_resume.\n It takes about 1 minute for the database to become ready for use after initially starting.\n WAL segment files are written to the /tmp directory. Leaving the example running for a long time could fill up your /tmp directory.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Create a database container as follows:\n cd $CCPROOT/examples/docker/pitr ./run-pitr.sh Next, we will create a base backup of that database using this:\n ./run-backup-pitr.sh After creating the base backup of the database, WAL segment files are created every 60 seconds that contain any database changes. These segments are stored in the /tmp/pitr/pitr/pg_wal directory.\n Next, create some recovery targets within the database by running the SQL commands against the pitr database as follows:\n ./run-sql.sh This will create recovery targets named beforechanges, afterchanges, and nomorechanges. It will create a table, pitrtest, between the beforechanges and afterchanges targets. It will also run a SQL CHECKPOINT to flush out the changes to WAL segments. These labels can be used to mark the points in the recovery process that will be referenced when creating the restored database.\n Next, now that we have a base backup and a set of WAL files containing our database changes, we can shut down the pitr database to simulate a database failure. Do this by running the following:\n docker stop pitr Next, let\u0026#8217;s edit the restore script to use the base backup files created in the step above. You can view the backup path name under the /tmp/backups/pitr-backups/ directory. You will see another directory inside of this path with a name similar to 2018-03-21-21-03-29. Copy and paste that value into the run-restore-pitr.sh script in the BACKUP environment variable.\n After that, run the script.\n vi ./run-restore-pitr.sh ./run-restore-pitr.sh The WAL segments are read and applied when restoring from the database backup. At this point, you should be able to verify that the database was restored to the point before creating the test table:\n psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'table pitrtest' This SQL command should show that the pitrtest table does not exist at this recovery time. The output should be similar to:\n ERROR: relation \"pitrtest\" does not exist PostgreSQL allows you to pause the recovery process if the target name or time is specified. This pause would allow a DBA a chance to review the recovery time/name and see if this is what they want or expect. If so, the DBA can run the following command to resume and complete the recovery:\n psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'select pg_wal_replay_resume()' Until you run the statement above, the database will be left in read-only mode.\n Next, run the script to restore the database to the afterchanges restore point. 
Update the RECOVERY_TARGET_NAME to afterchanges:\n vi ./run-restore-pitr.sh ./run-restore-pitr.sh After this restore, you should be able to see the test table:\n psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'table pitrtest' psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'select pg_wal_replay_resume()' Lastly, start a recovery using all of the WAL files. This will get the restored database as current as possible. To do so, edit the script to remove the RECOVERY_TARGET_NAME environment setting completely:\n ./run-restore-pitr.sh sleep 30 psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'table pitrtest' psql -h 127.0.0.1 -p 12001 -U postgres postgres -c 'create table foo (id int)' At this point, you should be able to create new data in the restored database and the test table should be present. When you recover the entire WAL history, resuming the recovery is not necessary to enable writes.\n Kubernetes and OpenShift Start by running the example database container:\n cd $CCPROOT/examples/kube/pitr ./run-pitr.sh This step will create a database container, pitr. This container is configured to continuously write WAL segment files to a mounted volume (/pgwal).\n After you start the database, you will create a base backup using this command:\n ./run-backup-pitr.sh This will create a backup and write the backup files to a persistent volume (/pgbackup).\n Next, create some recovery targets within the database by running the SQL commands against the pitr database as follows:\n ./run-sql.sh This will create recovery targets named beforechanges, afterchanges, and nomorechanges. It will create a table, pitrtest, between the beforechanges and afterchanges targets. It will also run a SQL CHECKPOINT to flush out the changes to WAL segments.\n Next, now that we have a base backup and a set of WAL files containing our database changes, we can shut down the pitr database to simulate a database failure. Do this by running the following:\n ${CCP_CLI} delete pod pitr Next, we will create 3 different restored database containers based upon the base backup and the saved WAL files.\n First, get the BACKUP_PATH created by the backup-pitr example by viewing the pod\u0026#8217;s logs:\n ${CCP_CLI} logs backup-pitr-8sfkh | grep PATH Thu May 10 18:07:58 UTC 2018 INFO: BACKUP_PATH is set to /pgdata/pitr-backups/2018-05-10-18-07-58. Edit the restore-pitr.json file and change the BACKUP_PATH environment variable using the path discovered above (note: /pgdata/ is not required and should be excluded from the variable):\n { \"name\": \"BACKUP_PATH\", \"value\": \"pitr-backups/2018-05-10-18-07-58\" } Next, we restore prior to the beforechanges recovery target. This recovery point is before the pitrtest table is created.\n Edit the restore-pitr.json file, and edit the environment variable to indicate we want to use the beforechanges recovery point:\n { \"name\": \"RECOVERY_TARGET_NAME\", \"value\": \"beforechanges\" } Then run the following to create the restored database container:\n ./run-restore-pitr.sh After the database has been restored, you should be able to perform a test to see if the recovery worked as expected:\n psql -h restore-pitr -U postgres postgres -c 'table pitrtest' psql -h restore-pitr -U postgres postgres -c 'create table foo (id int)' The output of these commands should show that the pitrtest table is not present. 
It should also show that you cannot create a new table because the database is paused in read-only mode.\n To make the database resume as a writable database, run the following SQL command:\n select pg_wal_replay_resume(); It should then be possible to write to the database:\n psql -h restore-pitr -U postgres postgres -c 'create table foo (id int)' You can also test that, if afterchanges is specified, the pitrtest table is present but the database is still in recovery mode.\n Lastly, you can test a full recovery using all of the WAL files by removing the RECOVERY_TARGET_NAME environment variable completely.\n The storage portions of this example can all be found under $CCP_STORAGE_PATH.\n Connection Pooling pgBouncer Crunchy pgBouncer is a lightweight connection pooler for PostgreSQL databases.\n These examples create the following containers:\n pgBouncer Primary\n pgBouncer Replica\n PostgreSQL Primary\n PostgreSQL Replica\n In Kubernetes and OpenShift, this example will also create:\n pgBouncer Primary Service\n pgBouncer Replica Service\n Primary Service\n Replica Service\n PostgreSQL Secrets\n pgBouncer Secrets\n To clean up the objects created by this example, run the following in the pgbouncer example directory:\n ./cleanup.sh For more information on pgBouncer, see the official website.\n Docker Run the pgbouncer example:\n cd $CCPROOT/examples/docker/pgbouncer ./run.sh Once all containers have deployed and are ready for use, psql to the target databases through pgBouncer:\n psql -d userdb -h 0.0.0.0 -p 6432 -U testuser psql -d userdb -h 0.0.0.0 -p 6433 -U testuser To connect to the administration database within pgbouncer, connect using psql:\n psql -d pgbouncer -h 0.0.0.0 -p 6432 -U pgbouncer psql -d pgbouncer -h 0.0.0.0 -p 6433 -U pgbouncer Kubernetes and OpenShift OpenShift: If custom configurations aren\u0026#8217;t being mounted, an emptyDir volume is required to be mounted at /pgconf.\n Run the pgbouncer example:\n cd $CCPROOT/examples/kube/pgbouncer ./run.sh Once all containers have deployed and are ready for use, psql to the target databases through pgBouncer:\n psql -d userdb -h pgbouncer-primary -p 6432 -U testuser psql -d userdb -h pgbouncer-replica -p 6432 -U testuser To connect to the administration database within pgbouncer, connect using psql:\n psql -d pgbouncer -h pgbouncer-primary -p 6432 -U pgbouncer -c \"SHOW SERVERS\" psql -d pgbouncer -h pgbouncer-replica -p 6432 -U pgbouncer -c \"SHOW SERVERS\" pgPool II An example is provided that will run a pgPool II container in conjunction with the primary-replica example provided above.\n You can execute both INSERT and SELECT statements after connecting to pgpool. 
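For example, once connected you might run statements like the following (a hedged illustration using the default userdb database and testuser credentials described below; the pooltest table name is arbitrary, and the host and port should match your environment, e.g. -h localhost -p 12003 for the Docker example):\n psql -h pgpool -U testuser userdb -c 'create table pooltest (id int)' psql -h pgpool -U testuser userdb -c 'insert into pooltest values (1)' psql -h pgpool -U testuser userdb -c 'select * from pooltest' 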
The container directs INSERT statements to the primary, while SELECT statements are sent round-robin to both the primary and the replica.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n You can view the nodes that pgpool is configured for by running:\n psql -h pgpool -U testuser userdb -c 'show pool_nodes' To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Create the container as follows:\n cd $CCPROOT/examples/docker/pgpool ./run.sh The example is configured to allow the testuser to connect to the userdb database.\n psql -h localhost -U testuser -p 12003 userdb Kubernetes and OpenShift Run the following command to deploy the pgpool service:\n cd $CCPROOT/examples/kube/pgpool ./run.sh The example is configured to allow the testuser to connect to the userdb database.\n psql -h pgpool -U testuser userdb Database Single Primary This example starts a single PostgreSQL container and service, the simplest of the examples.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n For all environments, the script additionally creates:\n A persistent volume claim\n A crunchy-postgres container named primary\n The database using predefined environment variables\n And specifically for the Kubernetes and OpenShift environments:\n A pod named primary\n A service named primary\n A PVC named primary-pgdata\n The database using predefined environment variables\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To create the example and run the container:\n cd $CCPROOT/examples/docker/primary ./run.sh Connect from your local host as follows:\n psql -h localhost -U testuser -W userdb Kubernetes and OpenShift To create the example:\n cd $CCPROOT/examples/kube/primary ./run.sh Connect from your local host as follows:\n psql -h primary -U postgres postgres Helm This example resides under the $CCPROOT/examples/helm directory. View the README to run this example using Helm here.\n PostgreSQL Deployment Starting in release 1.2.8, the PostgreSQL container can accept an environment variable named PGDATA_PATH_OVERRIDE. If set, the /pgdata/subdir path will use a subdirectory name of your choosing instead of the default, which is the hostname of the container.\n This example shows how a Deployment of a PostgreSQL primary is supported. A pod created by a Deployment uses a hostname generated by Kubernetes; because of this, a new hostname will be assigned upon restart of the primary pod.\n For finding the /pgdata that pertains to the pod, you will need to specify a /pgdata/subdir name that never changes. 
This requirement is handled by the PGDATA_PATH_OVERRIDE environment variable.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n This example will create the following in your Kubernetes and OpenShift environments:\n primary-deployment service which uses a PVC to persist PostgreSQL data\n replica-deployment service, uses emptyDir persistence\n primary-deployment deployment of replica count 1 for the primary PostgreSQL database pod\n replica-deployment deployment of replica count 1 for the replica\n replica2-deployment deployment of replica count 1 for the 2nd replica\n ConfigMap to hold a custom postgresql.conf, setup.sql, and pg_hba.conf files\n Secrets for the primary user, superuser, and normal user to hold the passwords\n Volume mount for /backrestrepo and /pgwal\n The persisted data for the PostgreSQL primary is found under /pgdata/primary-deployment. If you delete the primary pod, the deployment will create another pod for the primary and will be able to start up immediately since it works out of the same /pgdata/primary-deployment data directory.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Kubernetes and OpenShift Start the example as follows:\n cd $CCPROOT/examples/kube/primary-deployment ./run.sh Replication This example starts a primary and a replica pod containing a PostgreSQL database.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n For the Docker environment, the script additionally creates:\n A docker volume using the local driver for the primary\n A docker volume using the local driver for the replica\n A container named primary binding to port 12007\n A container named replica binding to port 12008\n A mapping of the PostgreSQL port 5432 within the container to the localhost port 12000\n The database using predefined environment variables\n And specifically for the Kubernetes and OpenShift environments:\n emptyDir volumes for persistence\n A pod named pr-primary\n A pod named pr-replica\n A pod named pr-replica-2\n A service named pr-primary\n A service named pr-replica\n The database using predefined environment variables\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To create the example and run the container:\n cd $CCPROOT/examples/docker/primary-replica ./run.sh Connect from your local host as follows:\n psql -h localhost -p 12007 -U testuser -W userdb psql -h localhost -p 12008 -U testuser -W userdb Docker-Compose Running the example:\n cd $CCPROOT/examples/compose/primary-replica docker-compose up To deploy more than one replica, run the following:\n docker-compose up --scale db-replica=3 To connect to the created database containers, first identify the ports exposed on the containers:\n docker ps Next, using psql, connect to the service:\n psql -d userdb -h 0.0.0.0 -p \u0026lt;CONTAINER_PORT\u0026gt; -U testuser See PG_PASSWORD in docker-compose.yml for the user password.\n To tear down the example, run the following:\n docker-compose stop docker-compose rm Kubernetes and OpenShift Run the following command to deploy a primary and replica database cluster:\n cd $CCPROOT/examples/kube/primary-replica ./run.sh It takes about a minute for the replica to begin replicating with the primary. 
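While you wait, one way to watch the pods reach the Running state is with the watch flag (shown here as a convenience only; it is not part of the example scripts):\n ${CCP_CLI} get pod --watch 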
To test out replication, see if replication is underway with this command:\n psql -h pr-primary -U postgres postgres -c 'table pg_stat_replication' If you see a line returned from that query it means the primary is replicating to the replica. Try creating some data on the primary:\n psql -h pr-primary -U postgres postgres -c 'create table foo (id int)' psql -h pr-primary -U postgres postgres -c 'insert into foo values (1)' Then verify that the data is replicated to the replica:\n psql -h pr-replica -U postgres postgres -c 'table foo' primary-replica-dc\n If you wanted to experiment with scaling up the number of replicas, you can run the following example:\n cd $CCPROOT/examples/kube/primary-replica-dc ./run.sh You can verify that replication is working using the same commands as above.\n This example creates 2 replicas when it initially starts. To scale up the number of replicas and view what the deployment looks like before and after, run these commands:\n ${CCP_CLI} get deployment ${CCP_CLI} scale --current-replicas=2 --replicas=3 deployment/replica-dc ${CCP_CLI} get deployment ${CCP_CLI} get pod You can verify that you now have 3 replicas by running this query on the primary:\n psql -h primary-dc -U postgres postgres -c 'table pg_stat_replication' Helm This example resides under the $CCPROOT/examples/helm directory. View the README to run this example using Helm here.\n Synchronous Replication This example deploys a PostgreSQL cluster with a primary, a synchronous replica, and an asynchronous replica. The two replicas share the same service.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To run this example, run the following:\n cd $CCPROOT/examples/docker/sync ./run.sh You can test the replication status on the primary by using the following command and the password password:\n psql -h 127.0.0.1 -p 12010 -U postgres postgres -c 'table pg_stat_replication' You should see 2 rows; 1 for the asynchronous replica and 1 for the synchronous replica. The sync_state column shows values of async or sync.\n You can test replication to the replicas by first entering some data on the primary, and secondly querying the replicas for that data:\n psql -h 127.0.0.1 -p 12010 -U postgres postgres -c 'create table foo (id int)' psql -h 127.0.0.1 -p 12010 -U postgres postgres -c 'insert into foo values (1)' psql -h 127.0.0.1 -p 12011 -U postgres postgres -c 'table foo' psql -h 127.0.0.1 -p 12012 -U postgres postgres -c 'table foo' Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/sync ./run.sh Connect to the primarysync and replicasync databases as follows for both the Kubernetes and OpenShift environments:\n psql -h primarysync -U postgres postgres -c 'create table test (id int)' psql -h primarysync -U postgres postgres -c 'insert into test values (1)' psql -h primarysync -U postgres postgres -c 'table pg_stat_replication' psql -h replicasync -U postgres postgres -c 'select inet_server_addr(), * from test' psql -h replicasync -U postgres postgres -c 'select inet_server_addr(), * from test' psql -h replicasync -U postgres postgres -c 'select inet_server_addr(), * from test' This set of queries will show you the IP address of the PostgreSQL replica container. Note the changing IP address due to the round-robin service proxy being used for both replicas. The example queries also show that both replicas are replicating successfully from the primary.\n Statefulsets This example deploys a statefulset named statefulset. 
The statefulset is a new feature in Kubernetes as of version 1.5 and in OpenShift Origin as of version 3.5. Statefulsets have replaced PetSets going forward.\n Please view this Kubernetes description to better understand what a Statefulset is and how it works.\n This example creates 2 PostgreSQL containers to form the set. At startup, each container will examine its hostname to determine if it is the first container within the set of containers.\n The first container is determined by the hostname suffix assigned by Kubernetes to the pod. This is an ordinal value starting with 0. If a container sees that it has an ordinal value of 0, it will update the container labels to add a new label of:\n name=$PG_PRIMARY_HOST In this example, PG_PRIMARY_HOST is specified as statefulset-primary.\n By default, the containers specify a value of name=statefulset-replica.\n There are 2 services that end user applications will use to access the PostgreSQL cluster, one service (statefulset-primary) routes to the primary container and the other (statefulset-replica) to the replica containers.\n $ ${CCP_CLI} get service NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.96.0.1 \u0026lt;none\u0026gt; 443/TCP 22h statefulset None \u0026lt;none\u0026gt; 5432/TCP 1h statefulset-primary 10.97.168.138 \u0026lt;none\u0026gt; 5432/TCP 1h statefulset-replica 10.97.218.221 \u0026lt;none\u0026gt; 5432/TCP 1h To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Kubernetes and OpenShift Start the example as follows:\n cd $CCPROOT/examples/kube/statefulset ./run.sh You can access the primary database as follows:\n psql -h statefulset-primary -U postgres postgres You can access the replica databases as follows:\n psql -h statefulset-replica -U postgres postgres You can scale the number of containers using this command; this will essentially create an additional replica database.\n ${CCP_CLI} scale --replicas=3 statefulset statefulset Helm This example resides under the $CCPROOT/examples/helm directory. View the README to run this example using Helm here.\n Geospatial (PostGIS) An example is provided that will run a PostgreSQL with PostGIS pod and service in Kubernetes and OpenShift and a container in Docker.\n The container creates a default database called userdb, a default user called testuser and a default password of password.\n You can view the extensions that postgres-gis has enabled by running the following command and viewing the listed PostGIS packages:\n psql -h postgres-gis -U testuser userdb -c '\\dx' To validate that PostGIS is installed and which version is running, run the command:\n psql -h postgres-gis -U testuser userdb -c \"SELECT postgis_full_version();\" You should expect to see output similar to:\n postgis_full_version ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- POSTGIS=\"2.4.2 r16113\" PGSQL=\"100\" GEOS=\"3.5.0-CAPI-1.9.0 r4084\" PROJ=\"Rel. 
4.8.0, 6 March 2012\" GDAL=\"GDAL 1.11.4, released 2016/01/25\" LIBXML=\"2.9.1\" LIBJSON=\"0.11\" TOPOLOGY RASTER (1 row) As an exercise for invoking some of the basic PostGIS functionality for validation, try defining a 2D geometry point while giving inputs of longitude and latitude through this command.\n psql -h postgres-gis -U testuser userdb -c \"select ST_MakePoint(28.385200,-81.563900);\" You should expect to see output similar to:\n st_makepoint -------------------------------------------- 0101000000516B9A779C623C40B98D06F0166454C0 (1 row) To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Create the container as follows:\n cd $CCPROOT/examples/docker/postgres-gis ./run.sh Enter the following command to connect to the postgres-gis container that is mapped to your local port 12000:\n psql -h localhost -U testuser -p 12000 userdb Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/postgres-gis ./run.sh Custom Configuration You can use your own version of the SQL file setup.sql to customize the initialization of database data and objects when the container and database are created.\n This works by placing a file named setup.sql within the /pgconf mounted volume directory. Portions of the setup.sql file are required for the container to work; please see comments within the sample setup.sql file.\n If you mount a /pgconf volume, crunchy-postgres will look at that directory for postgresql.conf, pg_hba.conf, pg_ident.conf, SSL server/ca certificates and setup.sql. If it finds one of them it will use that file instead of the default files.\n Docker This example can be run as follows for the Docker environment:\n cd $CCPROOT/examples/docker/custom-config ./run.sh Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/custom-config ./run.sh SSL Authentication This example shows how you can configure PostgreSQL to use SSL for client authentication.\n The example requires SSL certificates and keys to be created. Included in the examples directory is a script to create self-signed certificates (server and client) for the example: $CCPROOT/examples/ssl-creator.sh.\n The example creates a client certificate for the user testuser. Furthermore, the server certificate is created for the server name custom-config-ssl.\n If as a client it\u0026#8217;s required to confirm the identity of the server, verify-full can be specified in the connection string. 
This will check if the server and the server certificate have the same name:\n psql postgresql://custom-config-ssl:5432/postgres?sslmode=verify-full -U testuser To connect via IP, sslmode can be changed to require.\n psql postgresql://IP_OF_PGSQL:5432/postgres?sslmode=require -U testuser This example can be run as follows for the Docker environment:\n cd $CCPROOT/examples/docker/custom-config-ssl ./run.sh And the example can be run in the following directory for the Kubernetes and OpenShift environments:\n cd $CCPROOT/examples/kube/custom-config-ssl ./run.sh A required step to make this example work is to define in your /etc/hosts file an entry that maps custom-config-ssl to the service IP address for the container.\n For instance, if your service has an address as follows:\n ${CCP_CLI} get service NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE custom-config-ssl 172.30.211.108 \u0026lt;none\u0026gt; 5432/TCP Then your /etc/hosts file needs an entry like this:\n 172.30.211.108 custom-config-ssl For production Kubernetes and OpenShift installations, it will likely be preferred for DNS names to resolve to the PostgreSQL service name and to generate server certificates using those DNS names instead of the example name custom-config-ssl.\n In order to connect via certificate, environment variables must be set that point to the client certificates. Source the env.sh file to set environment variables for the example:\n source env.sh If as a client it\u0026#8217;s required to confirm the identity of the server, verify-full can be specified in the connection string. This will check if the server and the server certificate have the same name:\n psql postgresql://custom-config-ssl:5432/userdb?sslmode=verify-full -U testuser To connect via IP, sslmode can be changed to require.\n psql postgresql://IP_OF_PGSQL:5432/userdb?sslmode=require -U testuser You should see a connection that looks like the following:\n psql (10.5) SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) Type \"help\" for help. userdb=\u0026gt; Docker Swarm This example shows how to run a primary and replica database container on a Docker Swarm (v.1.12) cluster.\n First, set up a cluster. The Kubernetes libvirt coreos cluster example works well; see coreos-libvirt-cluster.\n Next, on each node, create the Swarm using these Swarm Install instructions.\n Include this command on the manager node:\n docker swarm init --advertise-addr 192.168.10.1 Then this command on all the worker nodes:\n docker swarm join \\ --token SWMTKN-1-65cn5wa1qv76l8l45uvlsbprogyhlprjpn27p1qxjwqmncn37o-015egopg4jhtbmlu04faon82u \\ 192.168.10.1.37 Before creating Swarm services, it is necessary to define an overlay network to be used by the services you will create. This can be done as follows:\n docker network create --driver overlay crunchynet We want to have the primary database always placed on a specific node. This is accomplished using node constraints as follows:\n docker node inspect kubernetes-node-1 | grep ID docker node update --label-add type=primary 18yrb7m650umx738rtevojpqy In the above example, the kubernetes-node-1 node with ID 18yrb7m650umx738rtevojpqy has a user-defined label of type=primary added to it. The primary service specifies primary as a constraint when created; this tells Swarm to place the service on that specific node. 
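For illustration only, a service pinned to that labeled node would be created with a placement constraint along these lines (the actual service definitions live in the example\u0026#8217;s run.sh and may differ in image and options):\n docker service create --name primary --network crunchynet --constraint 'node.labels.type == primary' crunchydata/crunchy-postgres 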
The replica specifies a constraint of node.labels.type != primary to have the replica always placed on a node that is not hosting the primary service.\n Docker After you set up the Swarm cluster, you can then run this example as follows on the Swarm Manager Node:\n cd $CCPROOT/examples/docker/swarm-service ./run.sh You can then find the nodes that are running the primary and replica containers by running:\n docker service ps primary docker service ps replica You can also scale up the number of replica containers.\n docker service scale replica=2 docker service ls Verify you have two replicas within PostgreSQL by viewing the pg_stat_replication table. The password is password by default when logged into the kubernetes-node-1 host:\n docker exec -it $(docker ps -q) psql -U postgres -c 'table pg_stat_replication' postgres You should see a row for each replica along with its replication status.\n Failover Watch This example shows how to run the crunchy-watch container to perform an automated failover. For the example to work, the host on which you are running needs to allow read-write access to /run/docker.sock. The crunchy-watch container runs as the postgres user, so adjust the file permissions of /run/docker.sock accordingly.\n The primary-replica example must be run before this example.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the example as follows:\n cd $CCPROOT/examples/docker/watch ./run.sh This will start the watch container, which tests every few seconds whether the primary database is running; if not, it will trigger a failover using docker exec on the replica host.\n Test it out by stopping the primary:\n docker stop primary docker logs watch Look at the watch container logs to see it perform the failover.\n Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/watch ./run.sh Check out the log of the watch container as follows:\n ${CCP_CLI} logs watch Then trigger a failover using this command:\n ${CCP_CLI} delete pod pr-primary Resume watching the watch container\u0026#8217;s log and verify that it detects the primary is not reachable and performs a failover on the replica.\n A final test is to see if the old replica is now a fully functioning primary by inserting some test data into it as follows:\n psql -h pr-primary -U postgres postgres -c 'create table failtest (id int)' The above command still works because the watch container has changed the labels of the replica to make it a primary, so the primary service will still work and now route to the new primary even though the pod is named replica.\n You can view the labels on a pod with this command:\n ${CCP_CLI} describe pod pr-replica | grep Label Metrics and Performance pgBadger pgbadger is a PostgreSQL tool that reads the log files from a specified database in order to produce an HTML report that shows various PostgreSQL statistics and graphs. This example runs the pgbadger HTTP server against a crunchy-postgres container and illustrates how to view the generated reports.\n The port utilized for this tool is port 14000 for Docker environments and port 10000 for Kubernetes and OpenShift environments.\n A requirement to build this container from source is golang. 
On RHEL 7.2, golang is found in the 'server optional' repository which needs to be enabled in order to install this dependency.\n sudo subscription-manager repos --enable=rhel-7-server-optional-rpms The container creates a default database called userdb, a default user called testuser and a default password of password.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker Run the example as follows:\n cd $CCPROOT/examples/docker/pgbadger ./run.sh After execution, the container will run and provide a simple HTTP URL you can browse to view the report. As you run queries against the database, you can invoke this URL to generate updated reports:\n curl -L http://127.0.0.1:14000/api/badgergenerate Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/pgbadger ./run.sh After execution, the container will run and provide a simple HTTP URL you can browse to view the report. As you run queries against the database, you can invoke this URL to generate updated reports:\n curl -L http://pgbadger:10000/api/badgergenerate You can view the database container logs using these commands:\n ${CCP_CLI} logs pgbadger -c pgbadger ${CCP_CLI} logs pgbadger -c postgres Metrics Collection You can collect various PostgreSQL metrics from your database container by running a crunchy-collect container that points to your database container.\n This example starts up the following containers:\n Collect (crunchy-collect)\n Grafana (crunchy-grafana)\n PostgreSQL (crunchy-postgres)\n Prometheus (crunchy-prometheus)\n Every 5 seconds by default, Prometheus will scrape the Collect container for metrics. These metrics will then be visualized by Grafana.\n By default, Prometheus detects which environment it is running on (Docker, Kubernetes, or OpenShift Container Platform) and applies a default configuration. If this container is running on Kubernetes or OpenShift Container Platform, it will use the Kubernetes API to discover pods with the label \"crunchy-collect\": \"true\".\n The collect container must have this label to be discovered in these environments. Additionally, the collect container uses a special PostgreSQL role ccp_monitoring. This user is created by setting the PGMONITOR_PASSWORD environment variable on the PostgreSQL container.\n Discovering pods requires a cluster role service account. See the Kubernetes and OpenShift metrics JSON file for more details.\n For Docker environments the collect hostname must be specified as an environment variable.\n To shutdown the instance and remove the container for each example, run the following:\n ./cleanup.sh Docker To start this set of containers, run the following:\n cd $CCPROOT/examples/docker/metrics ./run.sh You will be able to access the Grafana and Prometheus services from the following web addresses:\n Grafana (http://0.0.0.0:3000)\n Prometheus (http://0.0.0.0:9090)\n The crunchy-postgres container is accessible on port 5432.\n Kubernetes and OpenShift Running the example:\n cd $CCPROOT/examples/kube/metrics ./run.sh It\u0026#8217;s required to use port-forward to access the Grafana dashboard. 
To start the port-forward, run the following command:\n ${CCP_CLI} port-forward metrics 3000:3000 ${CCP_CLI} port-forward metrics 9090:9090 The Grafana dashboard can then be accessed from http://127.0.0.1:3000\n The Prometheus dashboard can then be accessed from http://127.0.0.1:9090\n You can view the container logs using these commands:\n ${CCP_CLI} logs -c grafana metrics ${CCP_CLI} logs -c prometheus metrics ${CCP_CLI} logs -c collect primary ${CCP_CLI} logs -c postgres primary ${CCP_CLI} logs -c collect replica ${CCP_CLI} logs -c postgres replica pg_audit This example demonstrates how to enable pg_audit output. As of release 1.3, pg_audit is included in the crunchy-postgres container and is added to the PostgreSQL shared library list in postgresql.conf.\n Given the numerous ways pg_audit can be configured, the exact pg_audit configuration is left to the user to define. pg_audit allows you to configure auditing rules either in postgresql.conf or within your SQL script.\n For this test, we place pg_audit statements within a SQL script and verify that auditing is enabled and working. If you choose to configure pg_audit via a postgresql.conf file, then you will need to define your own custom file and mount it to override the default postgresql.conf file.\n Docker Run the following to create a database container:\n cd $CCPROOT/examples/docker/pgaudit ./run.sh This starts an instance of the pg_audit container (running crunchy-postgres) on port 12005 on localhost. The test script is then automatically executed.\n This test executes a SQL file which contains pg_audit configuration statements as well as some basic SQL commands. These SQL commands will cause pg_audit to create log messages in the pg_log log file created by the database container.\n Kubernetes and OpenShift Run the following:\n cd $CCPROOT/examples/kube/pgaudit ./run.sh The script will create the pg_audit pod (running the crunchy-postgres container) on the Kubernetes instance and then execute a SQL file which contains pg_audit configuration statements as well as some basic SQL commands. These SQL commands will cause pg_audit to create log messages in the pg_log file created by the database container.\n "
},
{
"uri": "https://crunchydata.github.io/crunchy-containers/getting-started/openshift-templates/single-primary/",
@@ -242,7 +242,7 @@
"title": "Crunchy Data Container Suite",
"tags": [],
"description": "",
- "content": " Latest Release: 2.1.0 2018-08-09\n General Please view the official Crunchy Data Container Suite documentation here. If you are interested in contributing or making an update to the documentation, please view the Contributing Guidelines.\n Warning: The master branch is considered unstable. Please consult the tagged release currently deployed in your environment.\n What is the Container Suite? The Crunchy Container Suite provides Docker containers that enable rapid deployment of PostgreSQL, including administration and monitoring tools. Multiple styles of deploying PostgreSQL clusters are supported.\n Requirements The containers will execute in the following environments:\n Docker 1.12 and above\n Openshift 3.4 and above\n Kubernetes 1.5 and above\n Containers The project includes the following containers:\n crunchy-postgres - executes Postgres\n crunchy-postgres-gis - executes Postgres plus the PostGIS extensions\n crunchy-backup - performs a full database backup\n crunchy-pgpool - executes pgpool\n crunchy-pgbadger - executes pgbadger\n crunchy-watch - performs a form of automated failover\n crunchy-collect - collects Postgres metrics\n crunchy-prometheus -stores Postgres metrics\n crunchy-grafana - graphs Postgres metrics\n crunchy-pgbouncer - pgbouncer connection pooler and simple form of failover\n crunchy-pgadmin4 - pgadmin4 web application\n crunchy-dba - implements a cron scheduler to perform simple DBA tasks\n crunchy-upgrade - allows you to perform a major postgres upgrade using pg_upgrade\n crunchy-backrest-restore - allows you to perform a pgbackrest restore\n crunchy-sim - executes queries over a specified interval range for Postgres traffic simulation purposes\n crunchy-pgdump - provides a means of performing either a pg_dump or pg_dumpall on a Postgres database\n crunchy-pgrestore - provides a means of performing a restore of a dump from pg_dump or pg_dumpall via psql or pg_restore to a Postgres container database\n For Kubernetes users of these containers, there is an associated project worth taking a look at that uses the containers found in this repo and provides a higher level automation. That project is the postgres-operator.\n Further descriptions of each of these containers and environment variables that can be used to tune them can be found in the Container Specifications document.\n Getting Started Complete build and install documentation is found here. The provided Dockerfiles build the containers on a Centos 7 base image and use the community PostgreSQL RPMs.\n Crunchy provides a commercially supported version of these containers built upon RHEL 7 and the Crunchy supported PostgreSQL. Contact Crunchy for more details at http://www.crunchydata.com.\n Usage Various examples are provided in the Getting Started documentation for running in Docker, Kubernetes, and OpenShift environments.\n You will need to set up your environment as per the Installation documentation in order to execute the examples.\n "
+ "content": " Latest Release: 2.1.0 2018-08-13\n General Please view the official Crunchy Data Container Suite documentation here. If you are interested in contributing or making an update to the documentation, please view the Contributing Guidelines.\n Warning: The master branch is considered unstable. Please consult the tagged release currently deployed in your environment.\n What is the Container Suite? The Crunchy Container Suite provides Docker containers that enable rapid deployment of PostgreSQL, including administration and monitoring tools. Multiple styles of deploying PostgreSQL clusters are supported.\n Requirements The containers will execute in the following environments:\n Docker 1.12 and above\n Openshift 3.4 and above\n Kubernetes 1.5 and above\n Containers The project includes the following containers:\n crunchy-postgres - executes Postgres\n crunchy-postgres-gis - executes Postgres plus the PostGIS extensions\n crunchy-backup - performs a full database backup\n crunchy-pgpool - executes pgpool\n crunchy-pgbadger - executes pgbadger\n crunchy-watch - performs a form of automated failover\n crunchy-collect - collects Postgres metrics\n crunchy-prometheus -stores Postgres metrics\n crunchy-grafana - graphs Postgres metrics\n crunchy-pgbouncer - pgbouncer connection pooler and simple form of failover\n crunchy-pgadmin4 - pgadmin4 web application\n crunchy-dba - implements a cron scheduler to perform simple DBA tasks\n crunchy-upgrade - allows you to perform a major postgres upgrade using pg_upgrade\n crunchy-backrest-restore - allows you to perform a pgbackrest restore\n crunchy-sim - executes queries over a specified interval range for Postgres traffic simulation purposes\n crunchy-pgdump - provides a means of performing either a pg_dump or pg_dumpall on a Postgres database\n crunchy-pgrestore - provides a means of performing a restore of a dump from pg_dump or pg_dumpall via psql or pg_restore to a Postgres container database\n For Kubernetes users of these containers, there is an associated project worth taking a look at that uses the containers found in this repo and provides a higher level automation. That project is the postgres-operator.\n Further descriptions of each of these containers and environment variables that can be used to tune them can be found in the Container Specifications document.\n Getting Started Complete build and install documentation is found here. The provided Dockerfiles build the containers on a Centos 7 base image and use the community PostgreSQL RPMs.\n Crunchy provides a commercially supported version of these containers built upon RHEL 7 and the Crunchy supported PostgreSQL. Contact Crunchy for more details at http://www.crunchydata.com.\n Usage Various examples are provided in the Getting Started documentation for running in Docker, Kubernetes, and OpenShift environments.\n You will need to set up your environment as per the Installation documentation in order to execute the examples.\n "
},
{
"uri": "https://crunchydata.github.io/crunchy-containers/categories/",
diff --git a/docs/index.xml b/docs/index.xml
index 94aafb85b..e5ed834bd 100644
--- a/docs/index.xml
+++ b/docs/index.xml
@@ -28,7 +28,7 @@ Features The following features are supported by the crunchy-postgres container:
Tue, 08 May 2018 08:52:09 -0700https://crunchydata.github.io/crunchy-containers/installation/environment-setup/
- Table of Contents Requirements Project Environment Installing Requirements CentOS 7 RHEL 7 PostgreSQL Docker OpenShift Kubernetes Helm Creating a Demo Namespace Kubernetes OpenShift Next Steps Latest Release: 2.1.0 2018-08-09
+ Table of Contents Requirements Project Environment Installing Requirements CentOS 7 RHEL 7 PostgreSQL Docker OpenShift Kubernetes Helm Creating a Demo Namespace Kubernetes OpenShift Next Steps Latest Release: 2.1.0 2018-08-13
Requirements The Crunchy Container Suite can run on different environments including:
Docker 1.12+
OpenShift Container Platform 3.6+
@@ -51,7 +51,7 @@ Features The following features are supported by the crunchy-postgres container:
Tue, 08 May 2018 08:52:16 -0700https://crunchydata.github.io/crunchy-containers/installation/build-the-containers/
- Table of Contents Build the Containers Next Steps Latest Release: 2.1.0 2018-08-09
+ Table of Contents Build the Containers Next Steps Latest Release: 2.1.0 2018-08-13
Build the Containers At this point, you have a decision to make - either download prebuilt containers from Dockerhub, or build the containers on your local host.
To download the prebuilt containers, make sure you can login to Dockerhub, and then run the following:
docker login cd $CCPROOT ./bin/pull-from-dockerhub.sh Or if you’d rather build the containers from source, perform a container build as follows:
@@ -75,7 +75,7 @@ Features The following features are supported by the crunchy-backup container:
Tue, 08 May 2018 08:52:46 -0700https://crunchydata.github.io/crunchy-containers/installation/storage-configuration/
- Table of Contents Available Storage Types HostPath NFS Dynamic Storage Next Steps Latest Release: 2.1.0 2018-08-09
+ Table of Contents Available Storage Types HostPath NFS Dynamic Storage Next Steps Latest Release: 2.1.0 2018-08-13
Available Storage Types The Container Suite is tested on 3 different storage backends:
hostPath (single node testing)
NFS (single and multi-node testing)
diff --git a/docs/installation/build-the-containers/index.html b/docs/installation/build-the-containers/index.html
index 82451fda1..223a510d8 100644
--- a/docs/installation/build-the-containers/index.html
+++ b/docs/installation/build-the-containers/index.html
@@ -5,7 +5,7 @@
-
+
Build the Containers :: Crunchy Data Container Suite Documentation
@@ -405,7 +405,7 @@
Build the Containers
-
Latest Release: 2.1.0 2018-08-09
+
Latest Release: 2.1.0 2018-08-13
Build the Containers
diff --git a/docs/installation/environment-setup/index.html b/docs/installation/environment-setup/index.html
index 15719b587..de54dd280 100644
--- a/docs/installation/environment-setup/index.html
+++ b/docs/installation/environment-setup/index.html
@@ -5,7 +5,7 @@
-
+
Environment Setup :: Crunchy Data Container Suite Documentation
@@ -102,6 +102,71 @@
diff --git a/docs/installation/index.html b/docs/installation/index.html
index fa02aa7a9..e1135f90c 100644
--- a/docs/installation/index.html
+++ b/docs/installation/index.html
@@ -5,7 +5,7 @@
-
+
Installation :: Crunchy Data Container Suite Documentation
@@ -394,7 +394,7 @@
Installation
-
Latest Release: 2.1.0 2018-08-09
+
Latest Release: 2.1.0 2018-08-13
Installation
diff --git a/docs/installation/index.xml b/docs/installation/index.xml
index e02273a9f..5ce157bb0 100644
--- a/docs/installation/index.xml
+++ b/docs/installation/index.xml
@@ -17,7 +17,7 @@
Tue, 08 May 2018 08:52:09 -0700https://crunchydata.github.io/crunchy-containers/installation/environment-setup/
- Table of Contents Requirements Project Environment Installing Requirements CentOS 7 RHEL 7 PostgreSQL Docker OpenShift Kubernetes Helm Creating a Demo Namespace Kubernetes OpenShift Next Steps Latest Release: 2.1.0 2018-08-09
+ Table of Contents Requirements Project Environment Installing Requirements CentOS 7 RHEL 7 PostgreSQL Docker OpenShift Kubernetes Helm Creating a Demo Namespace Kubernetes OpenShift Next Steps Latest Release: 2.1.0 2018-08-13
Requirements The Crunchy Container Suite can run on different environments including:
Docker 1.12+
OpenShift Container Platform 3.6+
@@ -31,7 +31,7 @@
Tue, 08 May 2018 08:52:16 -0700https://crunchydata.github.io/crunchy-containers/installation/build-the-containers/
- Table of Contents Build the Containers Next Steps Latest Release: 2.1.0 2018-08-09
+ Table of Contents Build the Containers Next Steps Latest Release: 2.1.0 2018-08-13
Build the Containers At this point, you have a decision to make - either download prebuilt containers from Dockerhub, or build the containers on your local host.
To download the prebuilt containers, make sure you can login to Dockerhub, and then run the following:
docker login cd $CCPROOT ./bin/pull-from-dockerhub.sh Or if you’d rather build the containers from source, perform a container build as follows:
@@ -43,7 +43,7 @@
Tue, 08 May 2018 08:52:46 -0700https://crunchydata.github.io/crunchy-containers/installation/storage-configuration/
- Table of Contents Available Storage Types HostPath NFS Dynamic Storage Next Steps Latest Release: 2.1.0 2018-08-09
+ Table of Contents Available Storage Types HostPath NFS Dynamic Storage Next Steps Latest Release: 2.1.0 2018-08-13
Available Storage Types The Container Suite is tested on 3 different storage backends:
hostPath (single node testing)
NFS (single and multi-node testing)
diff --git a/docs/installation/storage-configuration/index.html b/docs/installation/storage-configuration/index.html
index f77653f9b..7c2b25208 100644
--- a/docs/installation/storage-configuration/index.html
+++ b/docs/installation/storage-configuration/index.html
@@ -5,7 +5,7 @@
-
+
Storage Configuration :: Crunchy Data Container Suite Documentation
@@ -408,7 +408,7 @@
Storage Configuration
-
Latest Release: 2.1.0 2018-08-09
+
Latest Release: 2.1.0 2018-08-13
Available Storage Types
diff --git a/docs/tags/index.html b/docs/tags/index.html
index a6c9473a6..0ce429e68 100644
--- a/docs/tags/index.html
+++ b/docs/tags/index.html
@@ -5,7 +5,7 @@
-
+
Tags :: Crunchy Data Container Suite Documentation
diff --git a/docs/theme-original/style.css b/docs/theme-original/style.css
index 72fe05c94..ed9380cec 100644
--- a/docs/theme-original/style.css
+++ b/docs/theme-original/style.css
@@ -292,7 +292,8 @@ h4 {
margin: 1.5rem 0 0.75rem 0; }
h5 {
- font-size: 0.5rem;
+ font-size: 1rem;
+ color: #515151 !important;
line-height: 110% !important;
margin: 1rem 0 0.2rem 0; }
diff --git a/generate-docs.sh b/generate-docs.sh
index 71b593f6b..559b7eee2 100755
--- a/generate-docs.sh
+++ b/generate-docs.sh
@@ -13,13 +13,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-# if [[ $(git status -s) ]]
-# then
-# echo "The working directory is dirty. Please commit any pending changes."
-# exit 1;
-# fi
+if [[ $(git status -s) ]]
+then
+ echo "The working directory is dirty. Please commit any pending changes."
+ exit 1;
+fi
echo "Moving to the Hugo subdirectory..."
+
# Navigate to directory containing Hugo files
cd ${CCPROOT?}/hugo/
@@ -29,6 +30,6 @@ hugo
# Add all changes and commit, push to GitHub
echo "Updating GitHub repository..."
-# git add --all && git commit -m "Publishing documentation"
+git add --all && git commit -m "Publishing documentation"
echo "Next Steps: Push your commits to your working repository and submit a pull request."
diff --git a/hugo/content/getting-started/kubernetes-and-openshift/_index.adoc b/hugo/content/getting-started/kubernetes-and-openshift/_index.adoc
index f89304d2a..4e0d79181 100644
--- a/hugo/content/getting-started/kubernetes-and-openshift/_index.adoc
+++ b/hugo/content/getting-started/kubernetes-and-openshift/_index.adoc
@@ -229,6 +229,11 @@ were enabled on the database by running the following SQL: `SHOW data_checksums`
==== Kubernetes and OpenShift
+{{% notice tip %}}
+Before running the example, ensure you edit `upgrade.json` and update the `OLD_VERSION`
+and `NEW_VERSION` parameters to the ones relevant to your situation.
+{{% /notice %}}
+
Start the upgrade as follows:
....
cd $CCPROOT/examples/kube/upgrade
@@ -238,8 +243,8 @@ cd $CCPROOT/examples/kube/upgrade
If successful, the Job will end with a **successful** status. Verify
the results of the Job by examining the Job's pod log:
....
-${CCP_CLI} get pod -a -l job-name=upgrade-job
-${CCP_CLI} logs -l job-name=upgrade-job
+${CCP_CLI} get pod -l job-name=upgrade
+${CCP_CLI} logs -l job-name=upgrade
....
You can verify the upgraded database by running the `post-upgrade.sh` script in the
@@ -573,7 +578,7 @@ used for async archiving.
You can create a backup using backrest using this command within the container:
....
-${CCP_CLI} exec -it backrest /bin/bash
+${CCP_CLI} exec -it backrest-async-archive /bin/bash
pgbackrest --stanza=db backup
....