Dear authors,

Hi. Thank you for sharing your code.

Recently, I've become interested in MAGAIL and have been trying to reproduce your results.

First, I tried to train the expert policy as recommended in README.md with `python -m sandbox.mack.run_simple`, but it failed. I suspected the problem was with the action dimensions, so I represented all actions as multi-hot vectors (sketched below) and modified the relevant terms. After training with MACK, however, the agents do not seem to recover policies comparable to those learned with MADDPG.

So I wonder whether it would be possible to share the expert weight files, so that readers can simply generate expert trajectories.

Thanks.
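Edit: in case it clarifies what I mean by multi-hot actions, here is a minimal illustrative sketch. The helper name and the sub-action dimensions are assumptions for illustration only, not the exact changes I made inside sandbox.mack:

```python
import numpy as np

def to_multi_hot(action_indices, dims):
    """Concatenate one-hot encodings of each sub-action into a single
    fixed-length (multi-hot) vector, so every agent's action has a
    uniform dimensionality.

    action_indices: chosen index for each sub-action space
    dims:           size of each sub-action space
    """
    vec = np.zeros(sum(dims), dtype=np.float32)
    offset = 0
    for idx, dim in zip(action_indices, dims):
        vec[offset + idx] = 1.0
        offset += dim
    return vec

# Hypothetical example: an agent with a movement space of size 5 and a
# communication space of size 3 (as in some particle-env scenarios).
print(to_multi_hot([2, 1], dims=[5, 3]))  # -> [0. 0. 1. 0. 0. 0. 1. 0.]
```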
@wsjeon @Ericonaldo An update on the code: I found a new implementation of MAGAIL used for ermongroup's newer paper here; the paper is "Multi-Agent Adversarial Inverse Reinforcement Learning".

The results I get from running it are similar to those in the new paper, but I still cannot reach the results reported in the MAGAIL paper ("Multi-Agent Generative Adversarial Imitation Learning").

Has anyone reached the performance reported in that paper with MAGAIL?