
1.3.0

@matteobettini matteobettini released this 25 Oct 15:55

Memory networks in BenchMARL

BenchMARL release paired with TorchRL 0.6.0

Highlights

RNNs in BenchMARL

We now support RNNs as models!

We have implemented GRU and LSTM and added them to the library.

These can be used in the policy or in the critic (both from local agent inputs and from global centralized inputs). They are also compatible with any parameter sharing choice.

We have benchmarked GRU on a multi-agent version of repeat_previous: it solves the task, while a plain MLP fails.

(Figure: W&B chart comparing GRU and MLP results)

In contrast to traditional RNN implementations, we do not do any time padding. Instead, we run a for loop over the time dimension of the sequence, reading the “is_init” flag to reset the hidden state at episode boundaries. This approach is slower but leads to better results.
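To make the idea concrete, here is a minimal toy sketch (not BenchMARL's actual GRU code) of unrolling a recurrent cell over time while resetting the hidden state wherever "is_init" is true, instead of padding sequences:

```python
# Toy sketch of unrolling a recurrent cell over a packed time sequence,
# resetting the hidden state at every step where "is_init" is True
# (an episode boundary), instead of padding sequences.

def unroll(inputs, is_init, cell, h0=0.0):
    """Run `cell` over the time sequence, resetting the hidden state
    at every step where is_init[t] is True."""
    h = h0
    outputs = []
    for x, init in zip(inputs, is_init):
        if init:          # a new episode starts here: forget the past
            h = h0
        h = cell(x, h)    # one recurrent step
        outputs.append(h)
    return outputs

# A trivial "cell" that accumulates inputs, standing in for a GRU/LSTM step
running_sum = lambda x, h: h + x

# Two episodes packed into one sequence: [1, 2, 3] and [10, 20]
out = unroll([1, 2, 3, 10, 20], [True, False, False, True, False], running_sum)
# out == [1, 3, 6, 10, 30]: the state was reset when the second episode began
```

The same loop structure applies when the cell is a real GRU or LSTM step and the state is a hidden tensor rather than a scalar.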

Oh, and as always, you can chain them with as many other models as you desire (CNN, GNN, ...).
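As an illustration, chaining a CNN into a GRU from the Hydra command line might look something like the following (the exact override keys are an assumption and may differ in your BenchMARL version; check the sequence-model docs):

```shell
# Hypothetical sketch: select a sequence of models (CNN -> GRU)
# via Hydra overrides when launching an experiment
python benchmarl/run.py algorithm=mappo task=vmas/balance \
    model=sequence "model.intermediate_sizes=[128]" \
    "model/layers@model.l1=cnn" \
    "model/layers@model.l2=gru"
```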

Simplified model reloading and evaluation

We have added some useful tools, described here.

In particular, we have added experiment.evaluate() and some useful command-line tools like benchmarl/evaluate.py and benchmarl/resume.py that just take the path to a checkpoint file.

You can now reload models from a hydra run without passing all the config again: the scripts will automatically find the configs you used in the hydra folders.
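For example (the checkpoint path below is just a placeholder for your own hydra output), evaluation and resumption reduce to a single command:

```shell
# Evaluate a checkpoint from a hydra run; the script locates the
# configs saved in the hydra output folder automatically
python benchmarl/evaluate.py outputs/<your_hydra_run>/checkpoint.pt

# Resume training from the same checkpoint
python benchmarl/resume.py outputs/<your_hydra_run>/checkpoint.pt
```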

Better logging of episode metrics

BenchMARL will now consider an episode done only when the global task done is set. Thus, it allows agents to finish early (as long as the global done is set using all()).

Here is an overview of how episode metrics are computed:

(Figure: episode_reward computation)

BenchMARL looks at the global done (which is always assumed to be set), and which can usually be computed using any or all over the single-agent dones.

In all cases the global done is what is used to compute the episode reward.
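The following toy sketch (not BenchMARL's internal code) illustrates this: the global done is derived with all() over the single-agent dones, and only the global done closes an episode and finalizes its reward:

```python
# Toy sketch: compute per-episode rewards from per-step agent dones and
# rewards. An agent being done early does NOT end the episode; only the
# global done (all() over agent dones) does.

def episode_rewards(agent_dones, agent_rewards):
    """agent_dones[t][i] / agent_rewards[t][i]: done flag / reward of
    agent i at step t. Returns each episode's reward, summed over
    agents and steps."""
    episodes, current = [], 0.0
    for dones, rewards in zip(agent_dones, agent_rewards):
        current += sum(rewards)   # accumulate all agents' rewards
        if all(dones):            # global done: the episode is over
            episodes.append(current)
            current = 0.0
    return episodes

dones = [[False, False], [True, False], [True, True], [False, False], [True, True]]
rews  = [[1.0, 1.0],     [0.0, 1.0],   [0.0, 1.0],   [2.0, 2.0],     [1.0, 1.0]]
# Agent 0 is done early at step 1, but the first episode only ends at
# step 2, when the global done is set.
print(episode_rewards(dones, rews))  # [4.0, 6.0]
```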

We log episode_reward min, mean, max over episodes at three different levels:

  • agent, (disabled by default, can be turned on manually)
  • group, averaged over agents in group
  • global, averaged over agents across all groups
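One possible reading of these three levels, as a toy sketch with made-up group names and episode rewards (the actual logger internals may aggregate differently):

```python
# Toy sketch of the three logging levels for episode_reward:
# per-agent stats (optional), per-group stats averaged over the group's
# agents, and a global stat averaged over agents across all groups.

def stats(values):
    return {"min": min(values), "mean": sum(values) / len(values), "max": max(values)}

# Hypothetical per-agent episode rewards over 3 episodes
per_agent = {
    "blue": {"blue_0": [1.0, 2.0, 3.0], "blue_1": [2.0, 2.0, 2.0]},
    "red":  {"red_0":  [0.0, 4.0, 2.0]},
}

# Group level: average over the group's agents per episode, then take
# min/mean/max over episodes
group_level = {}
for group, agents in per_agent.items():
    per_episode = [sum(vals) / len(vals) for vals in zip(*agents.values())]
    group_level[group] = stats(per_episode)

# Global level: average over all agents in all groups per episode,
# then take min/mean/max over episodes
all_agents = [vals for agents in per_agent.values() for vals in agents.values()]
global_level = stats([sum(vals) / len(vals) for vals in zip(*all_agents)])
```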

What's Changed

Full Changelog: 1.2.1...1.3.0