docs: add initial GPU explanation #18

Open · wants to merge 2 commits into main
4 changes: 4 additions & 0 deletions .custom_wordlist.txt
@@ -14,3 +14,7 @@ PyPI
GPUs
integrations
autonomizing
GRES
PCIe
Nvidia
RESource
8 changes: 8 additions & 0 deletions explanation/gpus/driver.md
@@ -0,0 +1,8 @@
(driver)=
# Driver auto-install

Charmed HPC installs GPU drivers when the `slurmd` charm is deployed on a compute node equipped with a supported Nvidia GPU. Driver detection is performed via the API of [`ubuntu-drivers-common`](https://documentation.ubuntu.com/server/how-to/graphics/install-nvidia-drivers/#the-recommended-way-ubuntu-drivers-tool), a package that examines the node's hardware, determines the appropriate third-party drivers, and recommends a set of driver packages to install from the Ubuntu repositories.
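
The sketch below shows how a driver recommendation can be queried through that API. It is illustrative only: `system_driver_packages()` is part of the package's `UbuntuDrivers.detect` module, but exactly how the `slurmd` charm invokes the API is an assumption here.

```python
# Illustrative use of the ubuntu-drivers-common Python API (assumed usage,
# not necessarily how the slurmd charm calls it).
from UbuntuDrivers import detect

# Maps driver package names to metadata about the matched hardware.
packages = detect.system_driver_packages()
for name, info in packages.items():
    if info.get("recommended"):
        print(f"Recommended driver package: {name}")
```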

## Libraries used

- [`ubuntu-drivers-common`](https://github.com/canonical/ubuntu-drivers-common), from GitHub.
16 changes: 16 additions & 0 deletions explanation/gpus/index.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,16 @@
(gpus)=
# GPUs

A Graphics Processing Unit (GPU) is a specialized hardware resource that was originally designed to accelerate computer graphics calculations but has since found expanded use in general-purpose computing across a number of fields. GPU-enabled workloads are supported on a Charmed HPC cluster, with the necessary driver and workload manager configuration handled automatically by the charms.

- {ref}`driver`
- {ref}`slurmconf`

```{toctree}
:titlesonly:
:maxdepth: 1
:hidden:

Driver auto-install <driver>
Slurm enlistment <slurmconf>
```
31 changes: 31 additions & 0 deletions explanation/gpus/slurmconf.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,31 @@
(slurmconf)=
# Slurm enlistment

To allow cluster users to submit jobs requesting GPUs, detected GPUs are automatically added to the [Generic RESource (GRES) Slurm configuration](https://slurm.schedmd.com/gres.html). GRES is the Slurm feature that enables scheduling of arbitrary generic resources, including GPUs.
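
Once enlisted, a GPU can be requested at submission time with Slurm's standard `--gres` flag. The GPU type and script name below are illustrative:

```
$ sbatch --gres=gpu:tesla_t4:1 job.sh
```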

## Device details

GPU details are gathered by [`pynvml`](https://pypi.org/project/nvidia-ml-py/), the official Python bindings for the Nvidia Management Library (NVML), which enable GPU counts, associated device files, and model names to be queried from the driver. For compatibility with Slurm configuration files, retrieved model names are converted to lowercase and white space is replaced with underscores: “Tesla T4” becomes `tesla_t4`, for example.
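
A minimal sketch of this lookup, using the standard NVML bindings (the exact calls made by the charm are an assumption):

```python
# Query GPU model names and device files via nvidia-ml-py (pynvml).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        # Normalize for Slurm: lowercase, whitespace -> underscores.
        gres_type = "_".join(name.lower().split())
        minor = pynvml.nvmlDeviceGetMinorNumber(handle)
        print(gres_type, f"/dev/nvidia{minor}")
finally:
    pynvml.nvmlShutdown()
```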

## Slurm configuration

Each GPU-equipped node is added to the `gres.conf` configuration file as its own `NodeName` entry, following the format defined in the [Slurm `gres.conf` documentation](https://slurm.schedmd.com/gres.conf.html). Individual `NodeName` entries are used, rather than a single entry per generic resource, to better support heterogeneous environments, such as a cluster where the same model of GPU does not map to the same device file on every compute node.
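
For instance, a hypothetical pair of nodes carrying the same GPU model on different device files would each receive their own line:

```
NodeName=node-1 Name=gpu Type=tesla_t4 File=/dev/nvidia0
NodeName=node-2 Name=gpu Type=tesla_t4 File=/dev/nvidia1
```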

In `slurm.conf`, the configuration for each GPU-equipped node carries a comma-separated list in its `Gres=` element, giving the name, type, and count of each GPU on the node.
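
Where a node carries more than one GPU model, the `Gres=` list holds one entry per type; the node name and counts below are hypothetical:

```
NodeName=node-3 Gres=gpu:tesla_t4:2,gpu:nvidia_a100_80gb_pcie:1
```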

For example, a Microsoft Azure `Standard_NC24ads_A100_v4` node, equipped with an Nvidia A100 PCIe GPU, is given a node configuration in `slurm.conf` of:

```
NodeName=juju-e33208-1 CPUs=24 Boards=1 SocketsPerBoard=1 CoresPerSocket=24 ThreadsPerCore=1 RealMemory=221446 Gres=gpu:nvidia_a100_80gb_pcie:1 MemSpecLimit=1024
```

and corresponding `gres.conf` line:

```
NodeName=juju-e33208-1 Name=gpu Type=nvidia_a100_80gb_pcie File=/dev/nvidia0
```

## Libraries used

- [`pynvml / nvidia-ml-py`](https://pypi.org/project/nvidia-ml-py/), from PyPI.

6 changes: 3 additions & 3 deletions explanation/index.md
@@ -2,13 +2,13 @@
 # Explanation
 
 - {ref}`cryptography`
-
-🚧 Under construction 🚧
+- {ref}`GPUs`
 
 ```{toctree}
 :titlesonly:
 :maxdepth: 1
 :hidden:
 cryptography/index
-```
+gpus/index
+```