Merge pull request #33 from BioImage-Archive/dev
Public release
ctr26 authored Sep 3, 2024
2 parents 2200b94 + 825d9bf commit b4fd149
Showing 52 changed files with 127 additions and 1,695 deletions.
9 changes: 5 additions & 4 deletions Makefile
@@ -2,7 +2,8 @@

BASEDIR = $(shell pwd)

VARIANTS := embassy denbi minikube
# VARIANTS := embassy denbi minikube
VARIANTS := embassy minikube
ENVIRONMENTS := prod dev local

.PHONY: all $(VARIANTS) $(ENVIRONMENTS) $(VARIANTS:%=%.prod) $(VARIANTS:%=%.dev) htpassword
@@ -21,15 +22,15 @@ $(ENVIRONMENTS): %:

$(VARIANTS:%=%.prod): %.prod:
@echo "Deploying $* to prod..."
helmsman --apply --debug --group "prod" -f helmsman.yaml -f helmsman/prod.yaml -e $*.prod.env
helmsman --apply --debug --group "prod" -f helmsman.yaml -f helmsman/prod.yaml -e $*.prod.env --subst-env-values

$(VARIANTS:%=%.dev): %.dev:
@echo "Deploying $* to dev..."
helmsman --apply --debug --group "dev" -f helmsman.yaml -f helmsman/dev.yaml -e $*.dev.env
helmsman --apply --debug --group "dev" -f helmsman.yaml -f helmsman/dev.yaml -e $*.dev.env --subst-env-values

$(VARIANTS:%=%.local): %.local:
@echo "Deploying $* to local..."
helmsman --apply --debug --group "prod" -f helmsman.yaml -f helmsman/local.yaml -e $*.local.env
helmsman --apply --debug --group "prod" -f helmsman.yaml -f helmsman/local.yaml -e $*.local.env --subst-env-values

htpassword:
docker run --rm -ti xmartlabs/htpasswd ${CI_REGISTRY_USER} ${CI_REGISTRY_PASSWORD} > htpasswd_file
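For orientation, each `<variant>.<env>` target above wraps a single helmsman invocation; the newly added `--subst-env-values` flag extends helmsman's environment-variable substitution into the referenced values files, which is presumably how the `${AAI_CLIENT_ID}`-style placeholders in `daskhub/values.yaml` further down in this commit get resolved at deploy time. A minimal sketch of what `make embassy.dev` now runs, assuming an `embassy.dev.env` file exists per the Makefile's `$*.dev.env` pattern:

```bash
# Sketch only -- equivalent to running `make embassy.dev` with the updated target.
# embassy.dev.env must define every variable referenced in the state/values files
# (see the example .env in the README below).
helmsman --apply --debug --group "dev" \
  -f helmsman.yaml -f helmsman/dev.yaml \
  -e embassy.dev.env --subst-env-values
```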
98 changes: 75 additions & 23 deletions README.md
@@ -1,38 +1,90 @@
<!-- # Bio-Image Archive Kubernetes analysis platform -->
# Relevant Links and Notebooks

<!-- - [`setup.md`](docs/setup.md) If you are setting this up from scratch follow these instructions to setup GitLab CI/CD.
- [`deployment.md`](docs/deployment.md) If you wish to change the deployment read this.
-->
- **IDR Screen Data**: [IDR Screen 2451](https://idr.openmicroscopy.org/webclient/?show=screen-2451)
- **IC50 Notebook for SARS-CoV-2 Data**: [![Binder](https://binder.bioimagearchive.org/badge_logo.svg)](https://binder.bioimagearchive.org/v2/gh/IDR/idr0094-ellinger-sarscov2/master?urlpath=notebooks%2Fnotebooks%2Fidr0094-ic50.ipynb%3FscreenId%3D2602)
- **S-BSST522 Image Gallery**: [EBI Image Gallery](https://www.ebi.ac.uk/bioimage-archive/galleries/S-BSST522.html)
- **BIA Explorer Visualisation Notebook**: [![Binder](https://binder.bioimagearchive.org/badge_logo.svg)](https://binder.bioimagearchive.org/v2/gh/BioImage-Archive/bia-explorer/HEAD?labpath=BIA-explorer-visualisation-notebook.ipynb)
- **BIA Explorer GitHub Repository**: [GitHub - BIA Explorer](https://github.com/BioImage-Archive/bia-explorer)
- **BIA Training Repository**: [GitHub - BIA Training](https://github.com/BioImage-Archive/bia-training)
- **BIA BMZ Integration**: [GitHub - BIA BMZ Integration](https://github.com/BioImage-Archive/bia-bmz-integration)
- **BioImage Archive Notebooks**: [GitHub - BioImage Archive Notebooks](https://github.com/BioImage-Archive/bioimagearchive_notebooks)

## Quick start

Install minikube and helmfile
# BIA Binder Deployment

https://kubernetes.io/docs/tasks/tools/install-minikube/
https://github.com/roboll/helmfile#installation
This repository contains the materials for deploying the BIA Binder, including the codebase and relevant supplementary files. The repository is currently aimed at deploying to Embassy Cloud and has historically also worked with DeNBI Cloud.

Add the secrets to your local environment
## Usage

source secrets.env
### Installation

<!-- Create a namespace, role, account and additional token for the Gitlab runner -->
1. **Clone the repository**:
```bash
git clone https://github.com/BioImage-Archive/bia-binder.git
cd bia-binder
```

<!--
https://kubernetes.io/docs/admin/service-accounts-admin/
2. **Install the necessary tools** (a combined installation sketch follows below):
   - **Install Helm**: follow the instructions on the [Helm installation page](https://helm.sh/docs/intro/install/).
   - **Install Helmsman**: follow the instructions on the [Helmsman GitHub page](https://github.com/Praqma/helmsman) to install Helmsman on your system.
   - **Install the `helm-diff` plugin**:

kubectl apply -f ./k8s-clusterrole/
helm plugin install https://github.com/databus23/helm-diff

This will give the GitLab runner almost full administrative access to the cluster.
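A combined installation sketch for the tools above, assuming a Linux amd64 host. The Helm convenience script is the official one, but the Helmsman version and release-asset name are assumptions, so check the Helmsman releases page for the current tarball:

```bash
# Helm via the official convenience script.
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Helmsman from a release tarball (version/asset name assumed -- adjust to the
# latest release listed on https://github.com/Praqma/helmsman/releases).
curl -fsSL -o /tmp/helmsman.tar.gz \
  https://github.com/Praqma/helmsman/releases/download/v3.17.0/helmsman_3.17.0_linux_amd64.tar.gz
sudo tar -xzf /tmp/helmsman.tar.gz -C /usr/local/bin helmsman

# helm-diff plugin, as above.
helm plugin install https://github.com/databus23/helm-diff
```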
### Deployment

bash gitlab-ci/install-helm.sh
helmfile -e minikube sync
To deploy the application to different environments, you can use the Makefile. It supports `prod`, `dev`, and `local` environments for the `embassy`, `denbi`, and `minikube` variants, although `denbi` is currently commented out of `VARIANTS` in the Makefile, so only `embassy` and `minikube` are active by default.

For prod
1. **Deploy to production**:
```bash
make embassy.prod
make denbi.prod
make minikube.prod
```

helmfile -e default sync -->
2. **Deploy to development**:
```bash
make embassy.dev
make denbi.dev
make minikube.dev
```

## TODO
3. **Deploy to local environment**:
```bash
make embassy.local
make denbi.local
make minikube.local
```

- Continuous integration
- Elixir login portal
- Speed up image pull
4. **Generate htpassword file**:
```bash
make htpassword
```
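The `htpassword` target (see the Makefile diff above) runs the `xmartlabs/htpasswd` image with `CI_REGISTRY_USER` and `CI_REGISTRY_PASSWORD` and writes the result to `htpasswd_file`, so those two variables must be set first. A sketch with placeholder credentials:

```bash
# Placeholder credentials -- use the real registry user/password
# (these also appear in the example .env below).
export CI_REGISTRY_USER=registry-user
export CI_REGISTRY_PASSWORD=registry-password
make htpassword    # writes htpasswd_file in the repository root
```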

### Example `.env` File

Create a `.env` file in the root directory of the repository with the following structure:

```env
# Generic .env file example
# CI/CD Configuration
CI_REGISTRY_IMAGE="bioimagearchive/binder-"
CI_REGISTRY_URL="https://registry.binder.bioimagearchive.org"
HOST_NAME="binder.bioimagearchive.org"
KUBECONFIG="kube/embassy.config"
CI_REGISTRY_USER=ctr26
CI_REGISTRY_PASSWORD=4c16f763-****-****-****-80ec19d2f840
# Secret keys (values obfuscated for security)
SECRET_ELIXIR_CLIENTID=********-****-****-****-************
SECRET_ELIXIR_CLIENTSECRET=********-****-****-****-************
SECRET_HUB_PASSWORD=1e54cdb91f9c78d3f93a577b90567771ad076424ab24470192cc0776a3d45bd5
SECRET_JUPYTERHUB_PROXY_TOKEN=1e54cdb91f9c78d3f93a577b90567771ad076424ab24470192cc0776a3d45bd5
SECRET_BINDERHUB_SERVICE=569d94fdabd71bf88452db33782e4fa1881241764226e3038aedf6ec2cce1aa1
SECRET_GITHUBREPOPROVIDER=ghp_************-**********
SECRET_PROMETHEUS_AUTH_HTPASSWD=
# AAI Configuration
AAI_CLIENT_ID="cfac1c45-ab70-4596-b5b6-d1d92cf8ce3b"
AAI_CLIENT_SECRET="c064871e-****-****-****-f82b1ad6e2c6"
```
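Note that the Makefile passes a per-target env file to helmsman (`-e $*.prod.env`, `-e $*.dev.env`, `-e $*.local.env`), so in practice the template above is copied into one file per variant/environment pair. A sketch of that layout (the `example.env` name is hypothetical):

```bash
# Hypothetical file names following the Makefile's `-e $*.<env>.env` pattern.
cp example.env embassy.dev.env     # fill in real values for the dev deployment
cp example.env embassy.prod.env    # and for prod
make embassy.dev                   # helmsman loads embassy.dev.env via -e
```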
2 changes: 1 addition & 1 deletion binderhub/frontend-custom/template/page.html
@@ -39,7 +39,7 @@
<div class="container">
<div class="row text-center">
<h3>questions?<br/>read about <a href="https://mmb.irbbarcelona.org/biobb/workflows">the workflows</a>, see the <a href="https://github.com/jupyterhub/binderhub">BinderHub code</a>, ask for help with <a href="mailto:[email protected]">Binder for the BioImageArchive</a> or even
<a href="https://github.com/bioimagearchive/k8s-jupyterhub"> deploy your own</a>
<a href="https://github.com/BioImage-Archive/bia-binder"> deploy your own</a>
</h3>
</div>
</div>
12 changes: 0 additions & 12 deletions binderhub/persistentVolumes.yaml

This file was deleted.

36 changes: 0 additions & 36 deletions binderhub/persistentVolumes/biostudies.yaml

This file was deleted.

82 changes: 6 additions & 76 deletions binderhub/values.yaml
@@ -29,6 +29,9 @@ config:
<div style="text-align: center;">This is a public Beta and liable to downtime</div>
jupyterhub:
scheduling:
userScheduler:
enabled: false
nameOverride: binder
fullnameOverride: binder
proxy:
@@ -63,79 +66,6 @@
baseUrl: /jupyter
networkPolicy:
enabled: true
# fuseConfig: |
# from kubernetes import client
# def modify_pod_hook(spawner, pod):
# pod.spec.containers[0].security_context = client.V1SecurityContext(
# privileged=True,
# capabilities=client.V1Capabilities(
# add=['SYS_ADMIN']
# )
# )
# return pod
# c.KubeSpawner.modify_pod_hook = modify_pod_hook
# 00-add-dask-gateway-values: |
# # 1. Sets `DASK_GATEWAY__PROXY_ADDRESS` in the singleuser environment.
# # 2. Adds the URL for the Dask Gateway JupyterHub service.
# import os
# # These are set by jupyterhub.
# release_name = os.environ["HELM_RELEASE_NAME"]
# release_namespace = os.environ["POD_NAMESPACE"]
# if "PROXY_HTTP_SERVICE_HOST" in os.environ:
# # https is enabled, we want to use the internal http service.
# gateway_address = "http://{}:{}/services/dask-gateway/".format(
# os.environ["PROXY_HTTP_SERVICE_HOST"],
# os.environ["PROXY_HTTP_SERVICE_PORT"],
# )
# print("Setting DASK_GATEWAY__ADDRESS {} from HTTP service".format(gateway_address))
# else:
# gateway_address = "http://proxy-public/services/dask-gateway"
# print("Setting DASK_GATEWAY__ADDRESS {}".format(gateway_address))
# # Internal address to connect to the Dask Gateway.
# c.KubeSpawner.environment.setdefault("DASK_GATEWAY__ADDRESS", gateway_address)
# # Internal address for the Dask Gateway proxy.
# c.KubeSpawner.environment.setdefault("DASK_GATEWAY__PROXY_ADDRESS", "gateway://traefik-daskgateway-dask-gateway.daskgateway:80")
# # Relative address for the dashboard link.
# c.KubeSpawner.environment.setdefault("DASK_GATEWAY__PUBLIC_ADDRESS", "/services/dask-gateway/")
# # Use JupyterHub to authenticate with Dask Gateway.
# c.KubeSpawner.environment.setdefault("DASK_GATEWAY__AUTH__TYPE", "jupyterhub")
# # Adds Dask Gateway as a JupyterHub service to make the gateway available at
# # {HUB_URL}/services/dask-gateway
# service_url = "http://traefik-{}-dask-gateway.{}".format(release_name, release_namespace)
# for service in c.JupyterHub.services:
# if service["name"] == "dask-gateway":
# if not service.get("url", None):
# print("Adding dask-gateway service URL")
# service.setdefault("url", service_url)
# break
# else:
# print("dask-gateway service not found. Did you set jupyterhub.hub.services.dask-gateway.apiToken?")


# storage:
# extraVolumes:
# - name: biostudies-nfs-binder
# persistentVolumeClaim:
# claimName: biostudies-nfs-pvc-binder
# extraVolumeMounts:
# - name: biostudies-nfs-binder
# mountPath: /home/jovyan/biostudies
# readOnly: true
# type: none
# extraVolumes:
# - name: nfs-bs-ftp
# persistentVolumeClaim:
# claimName: nfs-bs-ftp-pvc
# - name: nfs-volume
# nfs:
# # URL for the NFS server
# server: 10.108.211.244 # Change this!
# path: /
# extraVolumeMounts:
# - name: nfs-bs-ftp-pv-binderhub
# mountPath: /home/jovyan/biostudies
# readOnly: true


initContainers:
- name: git-clone-templates
@@ -146,7 +76,7 @@ initContainers:
- --branch=master
- --depth=1
- --
- https://github.com/bioimagearchive/k8s-jupyterhub
- https://github.com/BioImage-Archive/bia-binder
- /etc/binderhub/custom
securityContext:
runAsUser: 0
@@ -163,8 +93,8 @@ extraVolumeMounts:
imageBuilderType: pink
# Podman in Kubernetes
pink:
hostStorageDir: /var/lib/pink/storage/prod
hostSocketDir: /var/run/pink/prod
hostStorageDir: /var/lib/pink/storage/production
hostSocketDir: /var/run/pink/production

ingress:
enabled: true
41 changes: 0 additions & 41 deletions consul/values.yaml

This file was deleted.

18 changes: 18 additions & 0 deletions daskhub/values.yaml
@@ -30,6 +30,24 @@ jupyterhub:
# JupyterHub configuration goes here.
# See https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/master/jupyterhub/values.yaml
hub:
config:
GenericOAuthenticator:
# client_id: cfac1c45-ab70-4596-b5b6-d1d92cf8ce3b
# client_secret: c064871e-2b7b-4b64-a6cb-0c0b3821d262e1002124-2cb5-4d80-ab64-f82b1ad6e2c6
client_id: ${AAI_CLIENT_ID}
client_secret: ${AAI_CLIENT_SECRET}
# client_id: cfac1c45-ab70-4596-b5b6-d1d92cf8ce3b
oauth_callback_url: https://login.binder.bioimagearchive.org/hub/oauth_callback
authorize_url: https://login.elixir-czech.org/oidc/authorize
token_url: https://login.elixir-czech.org/oidc/token
userdata_url: https://login.elixir-czech.org/oidc/userinfo
scope:
- openid
- email
username_key: email
JupyterHub:
admin_access: true
authenticator_class: generic-oauth
baseUrl: /
# config:
# GenericOAuthenticator:
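The `${AAI_CLIENT_ID}` / `${AAI_CLIENT_SECRET}` placeholders above are left unexpanded in git and are resolved at deploy time through helmsman's `--subst-env-values` flag added to the Makefile in this commit. A minimal sketch with placeholder values:

```bash
# Placeholder values -- the real ones come from the Elixir AAI client
# registration and normally live in the <variant>.<env>.env file passed to
# helmsman with -e (see the example .env in the README).
export AAI_CLIENT_ID="<elixir-aai-client-id>"
export AAI_CLIENT_SECRET="<elixir-aai-client-secret>"
make embassy.dev    # --subst-env-values fills the placeholders in daskhub/values.yaml
```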