
Deep Learning Template with PyTorch, MLflow, Black, Flake8 and isort

Code style: black

Installation

Clone this repository and install the dependencies with the following commands:

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Recommended editor

It is recommended to use Visual Studio Code with the following extensions:

Start your Deep Learning project

You can create a new directory or Jupyter Notebook in the notebooks directory and start developing your Deep Learning experiments. If there are common modules that you want to reuse across experiments, create a new directory in the project root and turn it into a Python package. For example, to create a module named libs, create a directory named libs in the project root and add an __init__.py file to it. You can then import the module in your notebooks with the following code:

import libs
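As a concrete illustration, libs could hold shared helpers used across notebooks. A minimal sketch, assuming a hypothetical file libs/metrics.py (the file name and function are examples, not part of this template):

# libs/metrics.py (hypothetical helper module)
import torch

def accuracy(logits: torch.Tensor, targets: torch.Tensor) -> float:
    # Fraction of samples whose argmax prediction matches the target label.
    preds = logits.argmax(dim=1)
    return (preds == targets).float().mean().item()

A notebook could then import it with from libs.metrics import accuracy.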

Note: this feature only works with Visual Studio Code. For more information, see the note below.

Note for Jupyter Notebook

It is recommended to use Jupyter Notebook with Visual Studio Code. When using VS Code to open a Jupyter Notebook, the notebook file root of each notebook in this repository is set to the project root. So, you can import modules from the project root directory with the following code (assuming there is a module named libs):

import libs

Note for MLflow

Local MLflow Tracking Server

This repository uses MLflow to track experiments. You can run the following command to start a local MLflow tracking server:

mlflow ui

For example usage of MLflow, see the notebooks in the ./notebooks/mlflow/ directory.
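As a rough sketch of what such a run could look like (the experiment name, parameter, and metric values below are illustrative, not taken from this template), logging to the local tracking server works roughly like this:

import mlflow

# Illustrative experiment name; use one that matches your project.
mlflow.set_experiment("demo-experiment")

with mlflow.start_run():
    # Placeholder hyperparameter and metric values.
    mlflow.log_param("learning_rate", 1e-3)
    for step in range(3):
        mlflow.log_metric("train_loss", 1.0 / (step + 1), step=step)

The run then shows up in the MLflow UI (by default at http://127.0.0.1:5000).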

Remote MLflow Tracking Server

When you are using a remote (separately hosted) MLflow tracking server, you can create a .env file in the project root directory and set the following environment variables:

MLFLOW_TRACKING_URI="https://mlflow.example.com/"
MLFLOW_TRACKING_TOKEN="<your token here>"
AWS_ACCESS_KEY_ID="<your key here>"
AWS_SECRET_ACCESS_KEY="<secret>"
MLFLOW_S3_ENDPOINT_URL="https://s3.example.com"

To load the environment variables, add the following code to the top of your Python script or notebook:

from dotenv import load_dotenv

load_dotenv()
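With the variables loaded, MLflow reads MLFLOW_TRACKING_URI (and the S3 credentials for artifact storage) from the environment, so runs are logged to the remote server without further configuration. A minimal sketch, assuming the .env file above is in place (the experiment name and metric are illustrative):

from dotenv import load_dotenv
import mlflow

load_dotenv()  # populates MLFLOW_TRACKING_URI, AWS_* and related variables from .env

mlflow.set_experiment("remote-demo")  # illustrative experiment name
with mlflow.start_run():
    mlflow.log_metric("val_accuracy", 0.0)  # placeholder value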
