
Commit

Merge pull request #2 from pangeo-data/update_spider_doc
add details on dask jobqueue
tinaok authored Jun 28, 2024
2 parents 9004e81 + 30cabaa commit d475cfe
Showing 3 changed files with 46 additions and 5 deletions.
51 changes: 46 additions & 5 deletions docs/pangeo/dask_spider.ipynb
@@ -2785,16 +2785,16 @@
"\n",
"For this workshop, according to the Pangeo EOSC deployment, you learned how to use Dask Gateway to manage Dask clusters over Kubernetes, allowing to run our data analysis in parallel e.g. distribute tasks across several workers.\n",
"\n",
"Lets now try set up your Dask cluster using HPC infrastructure with Dask jobqueue. \n",
"As Dask jobqueue is configured by default on this ifnrastructure thanks to <a href=\"JupyterDaskOnSLURM \">https://github.com/RS-DAT/JupyterDaskOnSLURM/blob/main/user-guide.md#container-wrapper-for-spider-system</a> we just installed in the last section, you just need copy the SLURMCluster configuration cell below and execute it to connect the Dask jobqueue SLURMCluster. "
"Lets now try set up your Dask cluster using HPC infrastructure with [Dask jobqueue](https://jobqueue.dask.org). \n",
"As Dask jobqueue is configured by default on this ifnrastructure thanks to [JupyterDaskOnSLURM](https://github.com/RS-DAT/JupyterDaskOnSLURM/blob/main/user-guide.md#container-wrapper-for-spider-system) we just installed in the last section, you just need drag & drop the SLURMCluster configuration cell, and execute it to connect the Dask jobqueue SLURMCluster. "
]
},
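For reference, a SLURMCluster configuration cell typically looks like the minimal sketch below. The values shown (cores, memory, walltime) are placeholders rather than the Spider-specific settings; on this deployment you should use the pre-configured cell shipped with JupyterDaskOnSLURM, which takes its defaults from the Dask jobqueue configuration file.

```python
# Minimal SLURMCluster sketch -- placeholder values, for illustration only;
# in practice, use the deployment's own configuration cell.
from dask_jobqueue import SLURMCluster
from dask.distributed import Client

cluster = SLURMCluster(
    cores=2,              # CPU cores per worker (one worker per SLURM job)
    memory="8GiB",        # memory per worker
    walltime="01:00:00",  # maximum lifetime of each worker job
)
client = Client(cluster)  # connect this notebook to the cluster
client
```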
{
"cell_type": "markdown",
"id": "830b67ad-3a82-4dd7-8ab6-3015d12c3240",
"id": "63ef6918-baa8-42db-8cc7-a0b3eb4d71d1",
"metadata": {},
"source": [
"Make sure you use the right port (taken from the left panel), and click scale, to have several workers. "
"![Slurmcluster](slurmcluster.png)"
]
},
{
@@ -2908,11 +2908,38 @@
"client"
]
},
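If you are unsure which port the Dask dashboard is served on, the connected client can tell you; a small sketch, assuming the `client` object created above:

```python
# Print the dashboard address; the port in this URL is the one to use
# in the Dask lab-extension panel on the left.
print(client.dashboard_link)
```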
{
"cell_type": "markdown",
"id": "830b67ad-3a82-4dd7-8ab6-3015d12c3240",
"metadata": {},
"source": [
"Make sure you use the right port (taken from the left panel), and click scale, to have several workers. "
]
},
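Scaling can also be done from the notebook instead of the Scale button; a sketch (the worker counts below are arbitrary examples):

```python
# Request a fixed number of workers (each worker is one SLURM job) ...
cluster.scale(4)

# ... or let Dask grow and shrink the cluster with the workload.
cluster.adapt(minimum=1, maximum=8)
```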
{
"cell_type": "markdown",
"id": "dad4af05-a14c-4fb8-9a46-06dc2f2bc351",
"metadata": {},
"source": [
"![scale_daskjobqueue](scale_daskjobqueue.png)"
]
},
{
"cell_type": "markdown",
"id": "aa88e469-a632-46ee-bbbc-997d8f4d9d3d",
"metadata": {},
"source": [
"Adding a worker corresoinds to submiting a job using Slurm to start a node running a worker. "
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "8631b6bc-adf9-446c-b115-4c090aacbe62",
"metadata": {
"jupyter": {
"source_hidden": true
},
"tags": []
},
"outputs": [
@@ -2933,6 +2960,20 @@
"!squeue -u $USER"
]
},
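To see exactly what dask-jobqueue submits to Slurm for each worker, you can print the generated batch script; a sketch, assuming the `cluster` object created above:

```python
# The output is an ordinary SLURM batch script (#SBATCH directives plus the
# command that starts a Dask worker) -- one such job is submitted per worker.
print(cluster.job_script())
```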
{
"cell_type": "markdown",
"id": "95355523-23fe-4715-b250-5a420d801128",
"metadata": {},
"source": [
"<div class=\"alert alert-warning\">\n",
" <i class=\"fa-check-circle fa\" style=\"font-size: 22px;color:#666;\"></i> <b>Exercise</b>\n",
" <br>\n",
" <ul>\n",
" <li> The size of job you submitted is defined in your `~/.config/dask/config.yml`. Try updating it, to see how your resource (threads, memory size ...) changing after restarting your jupyter lab! (Observe well the dask dashboard!) </li> \n",
" </ul>\n",
"</div>"
]
},
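To inspect which job size your configuration file currently defines without leaving the notebook, one possibility (a sketch, assuming the standard `jobqueue.slurm` configuration section is used) is:

```python
import dask
import dask_jobqueue  # importing this loads the jobqueue defaults into dask.config

# Show the SLURM job settings (cores, memory, walltime, ...) picked up from
# your configuration files, e.g. ~/.config/dask/config.yml.
print(dask.config.get("jobqueue.slurm"))
```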
{
"cell_type": "markdown",
"id": "d1f4818f-2b6b-44c1-af77-97daf8a1c2a1",
@@ -13334,7 +13375,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.7"
}
},
"nbformat": 4,
Binary file added docs/pangeo/scale_daskjobqueue.png
Binary file added docs/pangeo/slurmcluster.png
