feat: readme revamp #1108

Merged Dec 19, 2024 (40 commits)

Changes from 24 commits
ba16478  feat: initial readme revamp (sash-a, Oct 22, 2024)
98247f5  chore: some readme fixes (sash-a, Oct 22, 2024)
af6accb  fix: convert images to png (sash-a, Oct 22, 2024)
2304c34  chore: changes around performance plots in readme (sash-a, Oct 22, 2024)
4038101  chore: add more benchmark images (sash-a, Oct 23, 2024)
cb91cfe  chore: update images (sash-a, Oct 23, 2024)
e81eb46  chore: rename legend (sash-a, Oct 23, 2024)
d1b9363  chore: add legend to readme (sash-a, Oct 23, 2024)
3655c73  chore: move install + getting started near the top (sash-a, Oct 23, 2024)
0565817  feat: update system readmes (sash-a, Oct 23, 2024)
a040a69  Merge branch 'develop' into feat/readme-revamp (sash-a, Oct 28, 2024)
427a36c  chore: a note on ISAC (sash-a, Nov 1, 2024)
fec0d6f  chore: update branch naming convention doc (sash-a, Nov 1, 2024)
86b145a  chore: some readme updates (sash-a, Nov 1, 2024)
4fc0ba8  feat: switch detailed install instruction to uv (sash-a, Nov 3, 2024)
0258bb8  Merge branch 'develop' into feat/readme-revamp (RuanJohn, Nov 13, 2024)
4a664b3  docs: python badge (RuanJohn, Nov 13, 2024)
8df6c59  wip: system level docs (RuanJohn, Nov 14, 2024)
5c9c531  feat: readme badges (sash-a, Nov 21, 2024)
d9a4733  chore: add speed plot and move tables out of collapsible (sash-a, Nov 21, 2024)
ac40e79  Merge branch 'develop' into feat/readme-revamp (sash-a, Nov 21, 2024)
61443e2  Merge branch 'develop' into feat/readme-revamp (OmaymaMahjoub, Dec 11, 2024)
8092b16  fix: run pre commits (OmaymaMahjoub, Dec 11, 2024)
ab8600f  fix: tests budge link (OmaymaMahjoub, Dec 11, 2024)
1c967a7  feat: system level configs (RuanJohn, Dec 12, 2024)
9496128  chore: github math render fix (RuanJohn, Dec 12, 2024)
1d39e95  chore: github math render fix and linting (RuanJohn, Dec 12, 2024)
f5dd9cf  docs: add links to system readmes, papers and hydra (RuanJohn, Dec 13, 2024)
482f5c3  docs: qlearning paper links (RuanJohn, Dec 13, 2024)
27437b8  docs: reword sebulba section to be distribution architectures (RuanJohn, Dec 13, 2024)
99b4ed6  docs: change reference to sable paper (RuanJohn, Dec 13, 2024)
8312345  docs: clarify distribution architectures that are support for differe… (RuanJohn, Dec 13, 2024)
58b9341  docs: general spelling mistake fixes and relative links to docs and f… (RuanJohn, Dec 13, 2024)
f890869  docs: typo fixes (RuanJohn, Dec 13, 2024)
cf9d2b0  docs: replace absolute website links with relative links (RuanJohn, Dec 13, 2024)
76d7eb0  docs: sable diagram caption (RuanJohn, Dec 13, 2024)
8b9b66f  docs: sable caption math (RuanJohn, Dec 13, 2024)
8bbf732  docs: another sable diagram caption fix (RuanJohn, Dec 13, 2024)
24a4aed  docs: sable diagram math render (RuanJohn, Dec 13, 2024)
e0ddaa4  docs: add environment code and paper links (RuanJohn, Dec 18, 2024)
233 changes: 92 additions & 141 deletions README.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/CONTRIBUTING.md
@@ -39,7 +39,7 @@ pre-commit run --all-files

## Naming Conventions
### Branch Names
-We name our feature and bugfix branches as follows - `feature/[BRANCH-NAME]`, `bugfix/[BRANCH-NAME]` or `maintenance/[BRANCH-NAME]`. Please ensure `[BRANCH-NAME]` is hyphen delimited.
+We name our feature and bugfix branches as follows - `feat/[BRANCH-NAME]`, `fix/[BRANCH-NAME]`. Please ensure `[BRANCH-NAME]` is hyphen delimited.
### Commit Messages
We follow the conventional commits [standard](https://www.conventionalcommits.org/en/v1.0.0/).

20 changes: 11 additions & 9 deletions docs/DETAILED_INSTALL.md
@@ -1,12 +1,11 @@
# Detailed installation guide

### Conda virtual environment
-We recommend using `conda` for package management. These instructions should allow you to install and run mava.
+We recommend using [uv](https://docs.astral.sh/uv/) for package management. These instructions should allow you to install and run mava.

-1. Create and activate a virtual environment
+1. Install `uv`
```bash
-conda create -n mava python=3.12
-conda activate mava
+curl -LsSf https://astral.sh/uv/install.sh | sh
```

2. Clone mava
@@ -15,19 +14,22 @@ git clone https://github.com/instadeepai/Mava.git
cd mava
```

-3. Install the dependencies
+3. Create and activate a virtual environment and install requirements
```bash
-pip install -e .
+uv venv -p=3.12
+source .venv/bin/activate
+uv pip install -e .
```

-4. Install jax on your accelerator. The example below is for an NVIDIA GPU, please the [official install guide](https://github.com/google/jax#installation) for other accelerators
+4. Install jax on your accelerator. The example below is for an NVIDIA GPU; please see the [official install guide](https://github.com/google/jax#installation) for other accelerators.
+Note that the Jax version we use will change over time, so please check the [requirements.txt](https://github.com/instadeepai/Mava/blob/develop/requirements/requirements.txt) for our latest tested Jax version.
```bash
-pip install "jax[cuda12]==0.4.30"
+uv pip install "jax[cuda12]==0.4.30"
```

5. Run a system!
```bash
-python mava/systems/ppo/ff_ippo.py env=rware
+python mava/systems/ppo/anakin/ff_ippo.py env=rware
```

### Docker
Binary file added docs/images/benchmark_results/connector.png
Binary file added docs/images/benchmark_results/lbf.png
Binary file added docs/images/benchmark_results/legend.jpg
Binary file added docs/images/benchmark_results/mabrax.png
Binary file added docs/images/benchmark_results/mpe.png
Binary file added docs/images/benchmark_results/rware.png
Binary file added docs/images/benchmark_results/smax.png
Binary file removed docs/images/lbf_results/legend_rec_mappo.png
Binary file removed docs/images/rware_results/ff_ippo/small-4ag.png
Binary file removed docs/images/rware_results/ff_ippo/tiny-2ag.png
Binary file removed docs/images/rware_results/ff_ippo/tiny-4ag.png
Binary file removed docs/images/rware_results/ff_mappo/small-4ag.png
Binary file removed docs/images/rware_results/ff_mappo/tiny-2ag.png
Binary file removed docs/images/rware_results/ff_mappo/tiny-4ag.png
Binary file removed docs/images/rware_results/rec_ippo/small-4ag.png
Binary file removed docs/images/rware_results/rec_ippo/tiny-2ag.png
Binary file removed docs/images/rware_results/rec_ippo/tiny-4ag.png
Binary file removed docs/images/rware_results/rec_mappo/tiny-2ag.png
Binary file removed docs/images/rware_results/rec_mappo/tiny-4ag.png
Binary file removed docs/images/smax_results/10m_vs_11m.png
Binary file removed docs/images/smax_results/27m_vs_30m.png
Binary file removed docs/images/smax_results/2s3z.png
Binary file removed docs/images/smax_results/3s5z.png
Binary file removed docs/images/smax_results/3s5z_vs_3s6z.png
Binary file removed docs/images/smax_results/3s_vs_5z.png
Binary file removed docs/images/smax_results/5m_vs_6m.png
Binary file removed docs/images/smax_results/6h_vs_8z.png
Binary file removed docs/images/smax_results/legend.png
Binary file removed docs/images/speed_results/mava_sps_results.png
Binary file added docs/images/speed_results/speed.png
74 changes: 0 additions & 74 deletions docs/jumanji_rware_comparison.md

This file was deleted.

43 changes: 0 additions & 43 deletions docs/smax_benchmark.md

This file was deleted.

13 changes: 13 additions & 0 deletions mava/systems/mat/README.md
@@ -0,0 +1,13 @@
# Proximal Policy Optimization

We provide the following four multi-agent extensions to [PPO](https://arxiv.org/pdf/1707.06347).
* [ff-IPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/ff_ippo.py)
* [ff-MAPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/ff_mappo.py)
* [rec-IPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/rec_ippo.py)
* [rec-MAPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/rec_mappo.py)

In all cases, IPPO denotes an implementation following the independent learners MARL paradigm, while MAPPO follows the centralised training with decentralised execution (CTDE) paradigm by using a centralised critic during training. The `ff` and `rec` prefixes in the system names indicate whether the policies are MLPs or include a [GRU](https://arxiv.org/pdf/1406.1078) memory module to help learning under partial observability in the environment.

## Relevant papers:
* [Single agent Proximal Policy Optimization Algorithms](https://arxiv.org/pdf/1707.06347)
* [The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games](https://arxiv.org/pdf/2103.01955)
13 changes: 13 additions & 0 deletions mava/systems/ppo/README.md
@@ -0,0 +1,13 @@
# Proximal Policy Optimization

We provide 4 implementations of multi-agent PPO.
* [ff-IPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/ff_ippo.py): feed forward independent PPO
* [ff-MAPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/ff_mappo.py): feed forward multi-agent PPO
* [rec-IPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/rec_ippo.py): recurrent independent PPO
* [rec-MAPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/rec_mappo.py): recurrent multi-agent PPO

Independent PPO uses independent learners, while multi-agent PPO follows the CTDE (centralised training with decentralised execution) paradigm with a centralized critic during training.

## Relevant papers:
* [Single agent Proximal Policy Optimization Algorithms](https://arxiv.org/pdf/1707.06347)
* [The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games](https://arxiv.org/pdf/2103.01955)
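The independent vs multi-agent (CTDE) distinction above comes down to what the critic conditions on. As a rough, self-contained numpy sketch of that input difference only (not Mava's actual code; all names and shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, obs_dim = 3, 4
obs = rng.normal(size=(n_agents, obs_dim))  # one observation per agent

# IPPO-style: each agent's critic values only its own local observation.
w_local = rng.normal(size=obs_dim)
v_independent = obs @ w_local  # shape (n_agents,): one value per agent

# MAPPO-style (CTDE): during training, a centralized critic values the
# joint observation, i.e. the concatenation of all agents' observations.
w_joint = rng.normal(size=n_agents * obs_dim)
v_centralized = obs.reshape(-1) @ w_joint  # a single joint value
```

At execution time both variants act from local observations only; the centralized critic is used solely to compute training targets and advantages.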
10 changes: 10 additions & 0 deletions mava/systems/q_learning/README.md
@@ -0,0 +1,10 @@
# Q Learning

We provide 2 Q-Learning based systems:
* [rec-IQL](https://github.com/instadeepai/Mava/tree/feat/develop/mava/systems/q_learning/anakin/rec_iql.py): a multi-agent recurrent DQN implementation with double DQN.
* [rec-QMIX](https://github.com/instadeepai/Mava/tree/feat/develop/mava/systems/q_learning/anakin/rec_qmix.py): an implementation of QMIX.

## Relevant papers:
* [Single agent DQN](https://arxiv.org/pdf/1312.5602)
* [Multiagent Cooperation and Competition with Deep Reinforcement Learning](https://arxiv.org/pdf/1511.08779)
* [QMIX](https://arxiv.org/pdf/1803.11485)
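QMIX's central idea is mixing per-agent Q-values into a team value monotonically, so that the joint greedy action agrees with each agent's own greedy action. A minimal one-layer numpy sketch of that constraint (the real QMIX mixer is a state-conditioned two-layer hypernetwork; this is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents = 3
q_agents = rng.normal(size=n_agents)  # each agent's chosen-action Q-value

# QMIX constrains mixing weights to be non-negative (here via abs) so that
# dQ_tot/dQ_i >= 0, i.e. the mix is monotonic in every agent's Q-value.
w = np.abs(rng.normal(size=n_agents))
b = rng.normal()

def q_tot(q: np.ndarray) -> float:
    """Monotonic mix of individual Q-values into a team Q-value."""
    return float(w @ q + b)

# Raising any single agent's Q-value can never decrease the team value.
q_boosted = q_agents.copy()
q_boosted[1] += 1.0
```

Monotonicity is what lets each agent maximise its own Q-value independently at execution time while still maximising the team value.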
13 changes: 13 additions & 0 deletions mava/systems/sable/README.md
@@ -0,0 +1,13 @@
# Proximal Policy Optimization

We provide 4 implementations of multi-agent PPO.
* [ff-IPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/ff_ippo.py): feed forward independent PPO
* [ff-MAPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/ff_mappo.py): feed forward multi-agent PPO
* [rec-IPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/rec_ippo.py): recurrent independent PPO
* [rec-MAPPO](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/ppo/anakin/rec_mappo.py): recurrent multi-agent PPO

Independent PPO uses independent learners, while multi-agent PPO follows the CTDE (centralised training with decentralised execution) paradigm with a centralized critic during training.

## Relevant papers:
* [Single agent Proximal Policy Optimization Algorithms](https://arxiv.org/pdf/1707.06347)
* [The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games](https://arxiv.org/pdf/2103.01955)
14 changes: 14 additions & 0 deletions mava/systems/sac/README.md
@@ -0,0 +1,14 @@
# Soft Actor Critic

We provide 3 implementations of multi-agent SAC.
* [ff-ISAC](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/sac/anakin/ff_isac.py): feed forward independent SAC
* [ff-MASAC](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/sac/anakin/ff_masac.py): feed forward multi-agent SAC
* [ff-HASAC](https://github.com/instadeepai/Mava/blob/feat/develop/mava/systems/sac/anakin/ff_hasac.py): feed forward heterogeneous-agent SAC

Independent SAC uses independent learners, multi-agent SAC follows the CTDE paradigm with a centralized critic, and HASAC performs heterogeneous-agent, sequential updates.
Note: independent SAC is included for completeness; however, we find that it does not perform well.

## Relevant papers
* [Single agent Soft Actor Critic](https://arxiv.org/pdf/1801.01290)
* [MADDPG](https://arxiv.org/pdf/1706.02275)
* [HASAC](https://arxiv.org/pdf/2306.10715)
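The sequential-update idea behind HASAC can be illustrated on a toy problem. This is a hypothetical quadratic objective, not Mava's or HASAC's actual update rule: when agents update one after another, each step reacts to teammates' fresh parameters rather than stale ones.

```python
import numpy as np

# Toy shared objective f(x) = 0.5 * (sum of all agents' parameters)^2,
# so every agent's gradient is simply sum(x). Purely illustrative.
def grad(x: np.ndarray) -> float:
    return x.sum()

rng = np.random.default_rng(0)
n_agents, lr = 3, 0.5
x0 = rng.normal(size=n_agents)

# Simultaneous updates: every agent steps from the same stale parameters.
simultaneous = x0 - lr * grad(x0)

# Sequential (HASAC-style) updates: agent i steps only after agents
# 0..i-1 have updated, so its gradient sees their new parameters.
sequential = x0.copy()
for i in range(n_agents):
    sequential[i] -= lr * grad(sequential)
```

On this toy objective the simultaneous step overshoots (all three agents correct the same error at once, flipping the sign of the joint sum), while the sequential pass shrinks it at every step, which is one intuition for why sequential updates can be more stable.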