
feat: readme revamp #1108

Merged
40 commits merged on Dec 19, 2024

Commits
ba16478
feat: initial readme revamp
sash-a Oct 22, 2024
98247f5
chore: some readme fixes
sash-a Oct 22, 2024
af6accb
fix: convert images to png
sash-a Oct 22, 2024
2304c34
chore: changes around performance plots in readme
sash-a Oct 22, 2024
4038101
chore: add more benchmark images
sash-a Oct 23, 2024
cb91cfe
chore: update images
sash-a Oct 23, 2024
e81eb46
chore: rename legend
sash-a Oct 23, 2024
d1b9363
chore: add legend to readme
sash-a Oct 23, 2024
3655c73
chore: move install + getting started near the top
sash-a Oct 23, 2024
0565817
feat: update system readmes
sash-a Oct 23, 2024
a040a69
Merge branch 'develop' into feat/readme-revamp
sash-a Oct 28, 2024
427a36c
chore: a note on ISAC
sash-a Nov 1, 2024
fec0d6f
chore: update branch naming convention doc
sash-a Nov 1, 2024
86b145a
chore: some readme updates
sash-a Nov 1, 2024
4fc0ba8
feat: switch detailed install instruction to uv
sash-a Nov 3, 2024
0258bb8
Merge branch 'develop' into feat/readme-revamp
RuanJohn Nov 13, 2024
4a664b3
docs: python badge
RuanJohn Nov 13, 2024
8df6c59
wip: system level docs
RuanJohn Nov 14, 2024
5c9c531
feat: readme badges
sash-a Nov 21, 2024
d9a4733
chore: add speed plot and move tables out of collapsible
sash-a Nov 21, 2024
ac40e79
Merge branch 'develop' into feat/readme-revamp
sash-a Nov 21, 2024
61443e2
Merge branch 'develop' into feat/readme-revamp
OmaymaMahjoub Dec 11, 2024
8092b16
fix: run pre commits
OmaymaMahjoub Dec 11, 2024
ab8600f
fix: tests budge link
OmaymaMahjoub Dec 11, 2024
1c967a7
feat: system level configs
RuanJohn Dec 12, 2024
9496128
chore: github math render fix
RuanJohn Dec 12, 2024
1d39e95
chore: github math render fix and linting
RuanJohn Dec 12, 2024
f5dd9cf
docs: add links to system readmes, papers and hydra
RuanJohn Dec 13, 2024
482f5c3
docs: qlearning paper links
RuanJohn Dec 13, 2024
27437b8
docs: reword sebulba section to be distribution architectures
RuanJohn Dec 13, 2024
99b4ed6
docs: change reference to sable paper
RuanJohn Dec 13, 2024
8312345
docs: clarify distribution architectures that are support for differe…
RuanJohn Dec 13, 2024
58b9341
docs: general spelling mistake fixes and relative links to docs and f…
RuanJohn Dec 13, 2024
f890869
docs: typo fixes
RuanJohn Dec 13, 2024
cf9d2b0
docs: replace absolute website links with relative links
RuanJohn Dec 13, 2024
76d7eb0
docs: sable diagram caption
RuanJohn Dec 13, 2024
8b9b66f
docs: sable caption math
RuanJohn Dec 13, 2024
8bbf732
docs: another sable diagram caption fix
RuanJohn Dec 13, 2024
24a4aed
docs: sable diagram math render
RuanJohn Dec 13, 2024
e0ddaa4
docs: add environment code and paper links
RuanJohn Dec 18, 2024
237 changes: 94 additions & 143 deletions README.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/CONTRIBUTING.md
@@ -39,7 +39,7 @@ pre-commit run --all-files

## Naming Conventions
### Branch Names
We name our feature and bugfix branches as follows - `feature/[BRANCH-NAME]`, `bugfix/[BRANCH-NAME]` or `maintenance/[BRANCH-NAME]`. Please ensure `[BRANCH-NAME]` is hyphen delimited.
We name our feature and bugfix branches as follows - `feat/[BRANCH-NAME]`, `fix/[BRANCH-NAME]`. Please ensure `[BRANCH-NAME]` is hyphen delimited.
### Commit Messages
We follow the conventional commits [standard](https://www.conventionalcommits.org/en/v1.0.0/).

22 changes: 12 additions & 10 deletions docs/DETAILED_INSTALL.md
@@ -1,12 +1,11 @@
# Detailed installation guide

### Conda virtual environment
We recommend using `conda` for package management. These instructions should allow you to install and run mava.
We recommend using [uv](https://docs.astral.sh/uv/) for package management. These instructions should allow you to install and run Mava.

1. Create and activate a virtual environment
1. Install `uv`
```bash
conda create -n mava python=3.12
conda activate mava
curl -LsSf https://astral.sh/uv/install.sh | sh
```

2. Clone mava
@@ -15,19 +14,22 @@ git clone https://github.com/instadeepai/Mava.git
cd mava
```

3. Install the dependencies
3. Create and activate a virtual environment and install requirements
```bash
pip install -e .
uv venv -p=3.12
source .venv/bin/activate
uv pip install -e .
```

4. Install jax on your accelerator. The example below is for an NVIDIA GPU; please see the [official install guide](https://github.com/google/jax#installation) for other accelerators
4. Install jax on your accelerator. The example below is for an NVIDIA GPU; please see the [official install guide](https://github.com/google/jax#installation) for other accelerators.
Note that the Jax version we use will change over time; please check the [requirements.txt](../requirements/requirements.txt) for our latest tested Jax version.
```bash
pip install "jax[cuda12]==0.4.30"
uv pip install "jax[cuda12]==0.4.30"
```

5. Run a system!
```bash
python mava/systems/ppo/ff_ippo.py env=rware
python mava/systems/ppo/anakin/ff_ippo.py env=rware
```
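
The `env=rware` argument above is a [Hydra](https://hydra.cc/) command-line override. As a rough, hypothetical sketch of how such an entry point consumes overrides (the config path and group names below are illustrative, not necessarily Mava's exact layout):
```python
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path="configs", config_name="default", version_base="1.2")
def main(cfg: DictConfig) -> None:
    # `env=rware` on the command line selects the `rware` entry of the `env` config group
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
```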

### Docker
@@ -50,4 +52,4 @@ If you are having trouble with dependencies we recommend using our docker image

For example, `make run example=mava/systems/ppo/ff_ippo.py`.

Alternatively, run bash inside a docker container with mava installed by running `make bash`, and from there systems can be run as follows: `python dir/to/system.py`.
Alternatively, run bash inside a docker container with Mava installed by running `make bash`, and from there systems can be run as follows: `python dir/to/system.py`.
Binary file added docs/images/algo_images/sable-arch.png
Binary file added docs/images/benchmark_results/connector.png
Binary file added docs/images/benchmark_results/lbf.png
Binary file added docs/images/benchmark_results/legend.jpg
Binary file added docs/images/benchmark_results/mabrax.png
Binary file added docs/images/benchmark_results/mpe.png
Binary file added docs/images/benchmark_results/rware.png
Binary file added docs/images/benchmark_results/smax.png
Binary file removed docs/images/lbf_results/legend_rec_mappo.png
Binary file removed docs/images/rware_results/ff_ippo/small-4ag.png
Binary file removed docs/images/rware_results/ff_ippo/tiny-2ag.png
Binary file removed docs/images/rware_results/ff_ippo/tiny-4ag.png
Binary file removed docs/images/rware_results/ff_mappo/small-4ag.png
Binary file removed docs/images/rware_results/ff_mappo/tiny-2ag.png
Binary file removed docs/images/rware_results/ff_mappo/tiny-4ag.png
Binary file removed docs/images/rware_results/rec_ippo/small-4ag.png
Binary file removed docs/images/rware_results/rec_ippo/tiny-2ag.png
Binary file removed docs/images/rware_results/rec_ippo/tiny-4ag.png
Binary file removed docs/images/rware_results/rec_mappo/tiny-2ag.png
Binary file removed docs/images/rware_results/rec_mappo/tiny-4ag.png
Binary file removed docs/images/smax_results/10m_vs_11m.png
Binary file removed docs/images/smax_results/27m_vs_30m.png
Binary file removed docs/images/smax_results/2s3z.png
Binary file removed docs/images/smax_results/3s5z.png
Binary file removed docs/images/smax_results/3s5z_vs_3s6z.png
Binary file removed docs/images/smax_results/3s_vs_5z.png
Binary file removed docs/images/smax_results/5m_vs_6m.png
Binary file removed docs/images/smax_results/6h_vs_8z.png
Binary file removed docs/images/smax_results/legend.png
Binary file removed docs/images/speed_results/mava_sps_results.png
Binary file added docs/images/speed_results/speed.png
74 changes: 0 additions & 74 deletions docs/jumanji_rware_comparison.md

This file was deleted.

43 changes: 0 additions & 43 deletions docs/smax_benchmark.md

This file was deleted.

6 changes: 6 additions & 0 deletions mava/systems/mat/README.md
@@ -0,0 +1,6 @@
# Multi-agent Transformer

We provide an implementation of the Multi-agent Transformer (MAT) algorithm in JAX. MAT casts cooperative multi-agent reinforcement learning as a sequence modelling problem in which agent observations and actions are treated as a sequence. At each timestep, the observations of all agents are encoded, and these encoded observations are then used for auto-regressive action selection.
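
As a rough illustration of this auto-regressive selection loop, here is a minimal sketch with hypothetical `encode_fn`/`decode_fn` networks (not the actual MAT implementation):

```python
import jax
import jax.numpy as jnp

def select_actions(encode_fn, decode_fn, obs, key):
    """obs: [num_agents, obs_dim]; encode_fn/decode_fn stand in for the MAT networks."""
    encoded = encode_fn(obs)           # encode all agents' observations for this timestep
    chosen = []                        # actions selected so far, in agent order
    for agent in range(obs.shape[0]):  # agents form the sequence dimension
        key, subkey = jax.random.split(key)
        logits = decode_fn(encoded, chosen, agent)  # condition on previously chosen actions
        chosen.append(jax.random.categorical(subkey, logits))
    return jnp.stack(chosen)
```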

## Relevant paper:
* [Multi-Agent Reinforcement Learning is a Sequence Modeling Problem](https://arxiv.org/pdf/2205.14953)
17 changes: 17 additions & 0 deletions mava/systems/ppo/README.md
@@ -0,0 +1,17 @@
# Proximal Policy Optimization

We provide the following four multi-agent extensions to [PPO](https://arxiv.org/pdf/1707.06347), all implemented using the Anakin architecture.

* [ff-IPPO](../../systems/ppo/anakin/ff_ippo.py)
* [ff-MAPPO](../../systems/ppo/anakin/ff_mappo.py)
* [rec-IPPO](../../systems/ppo/anakin/rec_ippo.py)
* [rec-MAPPO](../../systems/ppo/anakin/rec_mappo.py)

In all cases, IPPO denotes an implementation following the independent learners MARL paradigm, while MAPPO follows the centralised training with decentralised execution paradigm by using a centralised critic during training. The `ff` and `rec` prefixes in the system names indicate whether the policy networks are MLPs or use a [GRU](https://arxiv.org/pdf/1406.1078) memory module to help learning under partial observability in the environment.
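
As a minimal, illustrative sketch (not Mava's code) of the key difference in critic inputs between the independent and centralised variants:

```python
import jax.numpy as jnp

def ippo_critic_inputs(obs):
    """obs: [num_agents, obs_dim] -- each agent's critic sees only its own observation."""
    return obs

def mappo_critic_inputs(obs):
    """Centralised critic: every agent's value is computed from the joint observation."""
    joint = obs.reshape(-1)                    # concatenate all agents' observations
    return jnp.tile(joint, (obs.shape[0], 1))  # [num_agents, num_agents * obs_dim]
```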

In addition to the Anakin-based implementations, we also include a Sebulba-based implementation of [ff-IPPO](../../systems/ppo/sebulba/ff_ippo.py), which can be used with environments that are not written in JAX but adhere to the Gymnasium API.

## Relevant papers:
* [Proximal Policy Optimization Algorithms](https://arxiv.org/pdf/1707.06347)
* [The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games](https://arxiv.org/pdf/2103.01955)
* [Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?](https://arxiv.org/pdf/2011.09533)
14 changes: 14 additions & 0 deletions mava/systems/q_learning/README.md
@@ -0,0 +1,14 @@
# Q Learning

We provide two Q-Learning based systems that follow the independent learners and centralised training with decentralised execution paradigms:

* [rec-IQL](../../systems/q_learning/anakin/rec_iql.py)
* [rec-QMIX](../../systems/q_learning/anakin/rec_qmix.py)

`rec-IQL` is a multi-agent version of DQN that uses double DQN and a GRU memory module, while `rec-QMIX` is an implementation of QMIX in JAX that uses monotonic value function decomposition.
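
As a minimal sketch of the monotonic mixing idea (simplified and hypothetical, not Mava's implementation): per-agent utilities are combined with non-negative weights, typically produced by hypernetworks conditioned on the global state, so that the joint value is monotonic in each agent's value.

```python
import jax
import jax.numpy as jnp

def mix(agent_qs, w1, b1, w2, b2):
    """agent_qs: [num_agents]; w1: [num_agents, embed_dim]; w2: [embed_dim]."""
    w1, w2 = jnp.abs(w1), jnp.abs(w2)        # non-negative weights => monotonic joint Q
    hidden = jax.nn.elu(agent_qs @ w1 + b1)  # [embed_dim]
    return hidden @ w2 + b2                  # scalar joint Q-value
```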

## Relevant papers:
* [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/pdf/1312.5602)
* [Multiagent Cooperation and Competition with Deep Reinforcement Learning](https://arxiv.org/pdf/1511.08779)
* [QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning](https://arxiv.org/pdf/1803.11485)
24 changes: 24 additions & 0 deletions mava/systems/sable/README.md
@@ -0,0 +1,24 @@
# Sable

Sable is an algorithm developed by the research team at InstaDeep. It casts MARL as a sequence modelling problem and leverages the [advantage decomposition theorem](https://arxiv.org/pdf/2108.08612) through auto-regressive action selection for convergence guarantees. It can scale to thousands of agents by exploiting the memory efficiency of Retentive Networks.

We provide two Anakin-based implementations of Sable:
* [ff-sable](../../systems/sable/anakin/ff_sable.py)
* [rec-sable](../../systems/sable/anakin/rec_sable.py)

Here the `ff` prefix implies that the algorithm retains no memory over time and treats only the agents as the sequence dimension, while `rec` implies that the algorithm maintains memory over both agents and time for long-context memory in partially observable environments.
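
As a minimal sketch of this distinction (hypothetical `apply_fn`, not the actual Sable network):

```python
import jax.numpy as jnp

def act(apply_fn, params, obs, hidden, recurrent: bool):
    """obs: [num_agents, obs_dim]; hidden carries Sable's retention state."""
    if not recurrent:                    # ff: no memory across timesteps,
        hidden = jnp.zeros_like(hidden)  # agents remain the only sequence dimension
    actions, values, new_hidden = apply_fn(params, obs, hidden)
    return actions, values, new_hidden   # rec: new_hidden is carried to the next timestep
```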

For an overview of how the algorithm works, please see the diagram below. For a more detailed overview please see our associated [paper](https://arxiv.org/pdf/2410.01706).

<p align="center">
<a href="../../../docs/images/algo_images/sable-arch.png">
<img src="../../../docs/images/algo_images/sable-arch.png" alt="Sable Arch" width="80%"/>
</a>
</p>

*Sable architecture and execution.* The encoder receives all agent observations $o_t^1,\dots,o_t^N$ from the current timestep $t$ along with a hidden state $h\_{t-1}^{\text{enc}}$ representing past timesteps and produces encoded observations $\hat{o}\_t^1,\dots,\hat{o}\_t^N$, observation-values $v \left( \hat{o}\_t^1 \right),\dots,v \left( \hat{o}\_t^N \right) $, and a new hidden state $h_t^{\text{enc}}$.
The decoder performs recurrent retention over the current action $a_t^{m-1}$, followed by cross attention with the encoded observations, producing the next action $a_t^m$. The initial hidden states for recurrence over agents in the decoder at the current timestep are $( h\_{t-1}^{\text{dec}\_1},h\_{t-1}^{\text{dec}\_2})$, and by the end of the decoding process, it generates the updated hidden states $(h_t^{\text{dec}_1},h_t^{\text{dec}_2})$.

## Relevant papers:
* [Performant, Memory Efficient and Scalable Multi-Agent Reinforcement Learning](https://arxiv.org/pdf/2410.01706)
* [Retentive Network: A Successor to Transformer for Large Language Models](https://arxiv.org/pdf/2307.08621)
16 changes: 16 additions & 0 deletions mava/systems/sac/README.md
@@ -0,0 +1,16 @@
# Soft Actor-Critic

We provide the following three multi-agent extensions to the Soft Actor-Critic (SAC) algorithm.

* [ff-ISAC](../../systems/sac/anakin/ff_isac.py)
* [ff-MASAC](../../systems/sac/anakin/ff_masac.py)
* [ff-HASAC](../../systems/sac/anakin/ff_hasac.py)

`ISAC` is an implementation following the independent learners MARL paradigm, while `MASAC` follows the centralised training with decentralised execution paradigm by using a centralised critic during training. `HASAC` follows the heterogeneous-agent learning paradigm through sequential policy updates. The `ff` prefix to the algorithm names indicates that the algorithms use MLP-based policy networks.
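
As a minimal, illustrative sketch (hypothetical `update_fn`, not Mava's code) of the difference between simultaneous and sequential policy updates:

```python
import random

def simultaneous_update(policies, batch, update_fn):
    # ISAC/MASAC-style: every policy is updated against the same fixed set of other policies
    return [update_fn(policy, batch, policies) for policy in policies]

def sequential_update(policies, batch, update_fn):
    # HASAC-style: agents are updated one after another in a random order,
    # each update seeing the policies already updated in this pass
    policies = list(policies)
    for i in random.sample(range(len(policies)), len(policies)):
        policies[i] = update_fn(policies[i], batch, policies)
    return policies
```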

## Relevant papers
* [Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor](https://arxiv.org/pdf/1801.01290)
* [Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments](https://arxiv.org/pdf/1706.02275)
* [Robust Multi-Agent Control via Maximum Entropy Heterogeneous-Agent Reinforcement Learning](https://arxiv.org/pdf/2306.10715)