Background
Currently, our CI runs a suite of integration tests that exercises all user journeys. These integration tests give us very high confidence that changes to the application code won't break production.
However, CI spins up the application via docker-compose, whereas production runs it via Kubernetes. This difference means we're not protected against errors in the Helm charts, which has led to problems in the past (e.g. see #257). We now want to eliminate the difference between the CI and production environments and, with it, this source of errors.
Technologies: Bash, Kubernetes, Helm.
Task
During CI, install and start a slim Kubernetes environment such as minikube, k3s, or kind (further research is required to determine the best option). The CI scripts that set up the Kubernetes environment can be added to .travis.sh and called in the CI install step.
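As an illustration only, here is a minimal Bash sketch of what this step could look like if kind were chosen; the pinned versions are assumptions, not decisions:

```bash
#!/usr/bin/env bash
# Sketch: install kind and kubectl, then start a throwaway cluster for CI.
# The versions below are examples; pin whichever releases we settle on.
set -euo pipefail

KIND_VERSION="v0.11.1"
KUBECTL_VERSION="v1.21.2"

# Install the kind binary.
curl -Lo ./kind "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64"
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Install kubectl so later steps can talk to the cluster.
curl -Lo ./kubectl "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

# Create the cluster and wait until the control plane is ready.
kind create cluster --name ci --wait 120s
kubectl cluster-info --context kind-ci
```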
Add CI scripts to publish the Docker images built for the current CI run to a local Docker registry that the Kubernetes environment can access (e.g. see the blog post for minikube). The registry and tag attached to the images can be configured via the DOCKER_REPO and BUILD_TAG environment variables, and the images can be published via the release make recipe.
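A rough sketch of this step, assuming a registry run as a plain Docker container on the CI host and the existing release recipe; note that making the registry reachable from inside the cluster needs extra wiring that depends on the chosen Kubernetes environment:

```bash
#!/usr/bin/env bash
# Sketch: run a local registry and push the CI images to it via `make release`.
set -euo pipefail

# Start a local Docker registry on the CI host (port 5000 is an assumption).
# Cluster-to-registry connectivity needs environment-specific configuration.
docker run -d --name ci-registry -p 5000:5000 registry:2

# Point the existing release recipe at the local registry.
# DOCKER_REPO / BUILD_TAG are the variables the recipe already reads;
# TRAVIS_COMMIT is Travis CI's built-in commit SHA variable.
export DOCKER_REPO="localhost:5000"
export BUILD_TAG="${TRAVIS_COMMIT:-dev}"

# Build and push the images for this CI run.
make release
```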
Add CI scripts that use the Helm charts to deploy the images from the local registry to the Kubernetes cluster via helm install. This will be simplified by Port helm charts to v3 (#283), which removes the need to set up Tiller in the Kubernetes cluster.
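Assuming Helm v3 and a chart that exposes image repository and tag values (the chart path and value names below are hypothetical placeholders), the deploy step might look like:

```bash
#!/usr/bin/env bash
# Sketch: deploy the freshly built images with Helm v3 (no Tiller needed).
set -euo pipefail

# Chart path and value names are placeholders; adjust to our actual chart.
helm install ci-release ./helm/app \
  --set image.repository="${DOCKER_REPO}/app" \
  --set image.tag="${BUILD_TAG}" \
  --wait --timeout 5m

# Surface the pod state in the CI log for debugging failed rollouts.
kubectl get pods
```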
Parameterize the integration tests such that all hostnames (e.g. nginx) can be provided via an environment variable, as the hostnames will differ between runs under docker-compose and under Kubernetes.
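For example, the test entrypoint could read each hostname from an environment variable and fall back to the docker-compose service name; the variable name NGINX_HOST and the health-check path are illustrations only:

```bash
# In the integration test entrypoint: default to the docker-compose
# service name, but let Kubernetes runs override it.
NGINX_HOST="${NGINX_HOST:-nginx}"

# Example usage (endpoint path is a placeholder).
curl --fail "http://${NGINX_HOST}/health"
```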
Add CI scripts to run the integration tests against the code deployed on the Kubernetes cluster. This can be achieved either by exposing the nginx service outside the cluster via an ingress, or by using kubectl run to execute the tests in a pod inside the cluster.
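A sketch of the in-cluster variant, assuming the integration tests are baked into one of our images; the image name and test command are placeholders:

```bash
#!/usr/bin/env bash
# Sketch: run the integration tests in a one-shot pod inside the cluster.
set -euo pipefail

# --rm + --restart=Never creates a one-shot pod whose exit code
# propagates to this CI step. Assumes the nginx service and the
# test pod share a namespace, so the bare service name resolves.
kubectl run integration-tests \
  --rm -i --restart=Never \
  --image="${DOCKER_REPO}/tests:${BUILD_TAG}" \
  --env="NGINX_HOST=nginx" \
  -- ./run_integration_tests.sh
```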