(driver)=
# Driver auto-install

Charmed HPC installs GPU drivers when the slurmd charm is deployed on a compute node equipped with a supported NVIDIA GPU. Driver detection is performed via the API of [ubuntu-drivers-common](https://documentation.ubuntu.com/server/how-to/graphics/install-nvidia-drivers/#the-recommended-way-ubuntu-drivers-tool), a package that examines the node's hardware, determines the appropriate third-party drivers, and recommends a set of driver packages to install from the Ubuntu repositories.
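
As a rough sketch of what this detection looks like, the snippet below queries the Python API of ubuntu-drivers-common for recommended driver packages. The exact entry point used by the slurmd charm is an assumption here; `system_driver_packages()` is one of several detection functions the library exposes.

```python
# Minimal sketch: driver detection via ubuntu-drivers-common's Python
# API (the UbuntuDrivers.detect module). The exact call made by the
# slurmd charm is assumed, not confirmed.
from UbuntuDrivers import detect

# Maps candidate driver package names to metadata about the matched
# hardware, including whether each package is the recommended choice.
packages = detect.system_driver_packages()

for name, info in packages.items():
    if info.get("recommended"):
        print(f"recommended driver package: {name}")
```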

## Libraries used

- [`ubuntu-drivers-common`](https://github.com/canonical/ubuntu-drivers-common), from GitHub.

(gpus)=
# GPUs

A Graphics Processing Unit (GPU) is a specialized hardware resource that was originally designed to accelerate computer graphics calculations but now sees expanded use in general-purpose computing across a number of fields. GPU-enabled workloads are supported on a Charmed HPC cluster, with the necessary driver and workload manager configuration handled automatically by the charms.

- {ref}`driver`
- {ref}`slurmconf`

```{toctree}
:titlesonly:
:maxdepth: 1
:hidden:

Driver auto-install <driver>
Slurm enlistment <slurmconf>
```

(slurmconf)=
# Slurm enlistment

To allow cluster users to submit jobs requesting GPUs, detected GPUs are automatically added to the [Generic RESource (GRES) Slurm configuration](https://slurm.schedmd.com/gres.html). GRES is the Slurm feature that enables scheduling of arbitrary generic resources, including GPUs.
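
Once a GPU is enlisted this way, users can request it at submission time with Slurm's `--gres` flag. For example, a job asking for one of the A100 GPUs configured below could be submitted with (the script name is hypothetical):

```
sbatch --gres=gpu:nvidia_a100_80gb_pcie:1 job.sh
```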

## Device details

GPU details are gathered by [pynvml](https://pypi.org/project/nvidia-ml-py/), the official Python bindings for the NVIDIA Management Library (NVML), which allow GPU counts, associated device files, and model names to be queried from the drivers. For compatibility with Slurm configuration files, retrieved model names are converted to lowercase and whitespace is replaced with underscores: "Tesla T4" becomes "tesla_t4", for example.
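
A minimal sketch of this query, using NVML calls that exist in the nvidia-ml-py bindings (the charm's actual implementation may differ):

```python
# Sketch: enumerate GPUs and derive Slurm-friendly GRES names with
# pynvml (nvidia-ml-py). Assumes an NVIDIA driver is already loaded.
import pynvml

pynvml.nvmlInit()
try:
    for index in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        # "Tesla T4" -> "tesla_t4", used as the GRES type name.
        gres_type = name.lower().replace(" ", "_")
        # The minor number maps the device to /dev/nvidia<N>.
        minor = pynvml.nvmlDeviceGetMinorNumber(handle)
        print(f"gpu:{gres_type} -> /dev/nvidia{minor}")
finally:
    pynvml.nvmlShutdown()
```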

## Slurm configuration

Each GPU-equipped node is added to the gres.conf configuration file as its own NodeName entry, following the format defined in the [Slurm gres.conf documentation](https://slurm.schedmd.com/gres.conf.html). Individual NodeName entries are used rather than one entry per GRES resource to better support heterogeneous environments, such as a cluster where the same model of GPU does not consistently appear as the same device file across compute nodes.
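
For instance, in a hypothetical two-node cluster where the same GPU model maps to different device files, per-node entries keep each mapping explicit (node names invented for illustration):

```
NodeName=node-1 Name=gpu Type=tesla_t4 File=/dev/nvidia0
NodeName=node-2 Name=gpu Type=tesla_t4 File=/dev/nvidia1
```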

In slurm.conf, the configuration for GPU-equipped nodes carries a comma-separated list in its "Gres=" element, giving the name, type, and count of each GPU on the node.
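
A node with more than one kind of GPU would list each as a separate `name:type:count` entry; a hypothetical node with two T4s and one A100 would carry:

```
Gres=gpu:tesla_t4:2,gpu:nvidia_a100_80gb_pcie:1
```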

As a concrete example, a Microsoft Azure Standard_NC24ads_A100_v4 node, equipped with an NVIDIA A100 PCIe GPU, is given a node configuration in slurm.conf of:

```
NodeName=juju-e33208-1 CPUs=24 Boards=1 SocketsPerBoard=1 CoresPerSocket=24 ThreadsPerCore=1 RealMemory=221446 Gres=gpu:nvidia_a100_80gb_pcie:1 MemSpecLimit=1024
```

and the corresponding gres.conf line:

```
NodeName=juju-e33208-1 Name=gpu Type=nvidia_a100_80gb_pcie File=/dev/nvidia0
```

## Libraries used

- [`pynvml / nvidia-ml-py`](https://pypi.org/project/nvidia-ml-py/), from PyPI.