gym.error.UnregisteredEnv: No registered env with id: RocketLander-v0 #2
Apparently, RocketLander-v0 is not part of gym.
Yeah, there are a couple of problems in this repo; I'm not sure everything needed to run the demo is included. You need to clone this repo for the lander: https://github.com/EmbersArc/gym. Also, the actual trained model isn't in this repo. You can find trained models in the original repo (https://github.com/EmbersArc/PPO). However, my computer was not able to run ppo.py with those models. I am currently retraining the network (very slow). I will comment again if the model works.
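The `UnregisteredEnv` error in the title comes from gym's id-to-entry-point registry: `gym.make()` only knows ids that were registered at import time, and stock gym never registers `RocketLander-v0`. A simplified pure-Python sketch of that lookup (the names here are stand-ins, not gym's actual internals):

```python
# Simplified sketch of gym's id -> entry-point registry
# (names are stand-ins, not gym's actual internals).
class UnregisteredEnv(Exception):
    """Raised when an environment id was never registered."""

_registry = {}

def register(env_id, entry_point):
    _registry[env_id] = entry_point

def make(env_id):
    if env_id not in _registry:
        raise UnregisteredEnv(f"No registered env with id: {env_id}")
    return _registry[env_id]()

class RocketLander:
    """Stand-in for the real Box2D environment from the EmbersArc fork."""

# The fork's __init__ files do the equivalent of this call; stock gym never does.
register("RocketLander-v0", RocketLander)
env = make("RocketLander-v0")
```

The EmbersArc fork registers `RocketLander-v0` in its init files, which is why installing that fork in place of stock gym makes the error go away.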
Sure. Please keep me posted on your progress.
I had to change a lot of things, and I still don't have the setup from the video.
@kris-at-ata that helped a lot. On this line (around line 117): # Decide and take an action
I had to change stochastic to False in order for the code to run. Any tips on how to run the code with stochastic = True?
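For context on what that flag usually means: a PPO policy outputs a distribution over actions, and `stochastic` decides whether to sample from it or act greedily. A minimal sketch for a discrete action space (the helper name and signature are hypothetical, not the repo's actual code):

```python
import random

# Hypothetical helper, not the repo's actual code: PPO outputs action
# probabilities, and the `stochastic` flag decides how an action is picked.
def select_action(action_probs, stochastic=True):
    if stochastic:
        # Sample from the policy distribution (typical during training).
        return random.choices(range(len(action_probs)), weights=action_probs)[0]
    # Deterministic: take the most probable action (typical for evaluation).
    return max(range(len(action_probs)), key=lambda i: action_probs[i])
```

If only the stochastic path crashes, the sampling step (rather than the policy network itself) is the likely place to look.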
I've finished running training for 2 million steps! The model is getting loaded from what I can see, but nothing is showing even though I have render set to True. Has anyone had any luck getting this to run?
As @ironjedi correctly pointed out, this environment is not part of gym. You can find it here: https://github.com/EmbersArc/gym. The necessary changes to the init files are already done in this repo. @llSourcell, while I appreciate you making this video, it feels like you're trying to include everything at the same time. In the end the result is incomplete and people won't be able to run it themselves. It looks like you didn't even run it yourself and just showed the GIF from reddit. It's somewhat misleading in my opinion and might be discouraging for those who actually try it out. You have plenty of videos on MDPs, reinforcement learning, etc. Why not focus on one thing at a time? It would give you more time to include the important links and information people need to get started.
@dfolz Are you running a discrete action space? Maybe my part of the implementation broke there at some point. Can you post an error message?
I agree with @EmbersArc that the included source code should at least run the demo shown. I have gym, Box2D, and pyglet installed. However, the rendering still doesn't work. Here is what I have so far when I try to run ppo.py:
Making new env: RocketLander-v0
I was able to run it. I set up gym according to @EmbersArc. Then I set
Alright, I'll try again. So you don't load the model? How can it run inference if no model is loaded?
@ironjedi In the code, it says that setting load_model = False randomly initializes the weights instead of using the trained model.
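A toy illustration of what that flag does (purely hypothetical names; the real ppo.py restores TensorFlow checkpoint weights, not a Python list):

```python
import random

# Toy illustration only: the real ppo.py restores TensorFlow checkpoint
# weights, not a Python list.
def init_weights(load_model, checkpoint=None):
    if load_model:
        if checkpoint is None:
            # The repo ships no trained model, so this is what load_model=True hits.
            raise FileNotFoundError("load_model=True but no checkpoint found")
        return checkpoint
    # load_model=False: start from small random weights instead.
    return [random.gauss(0.0, 0.1) for _ in range(4)]
```

So with load_model=False the agent runs, but its behavior is that of an untrained policy.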
Great, thx. Really hoping to get this to run.
@buddhashrestha There is no model included, so load_model=True will not work. @ironjedi It looks like you're on a Mac; this comment might help: google-deepmind/pysc2#2 (comment). @dfolz I don't know that implementation, but Copy-v0 is definitely the wrong environment name.
Also, please change renderthread.start() to renderthread.run() in ppo.py. If you followed @EmbersArc's instructions, it should work just fine.
That will probably make the rendering work (because it's in the main thread now) but training will stop.
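The difference matters because `Thread.run()` is just a plain method call: it executes the target in the calling thread and blocks until it returns, while `Thread.start()` spawns a new thread. A small demonstration:

```python
import threading

def whoami(results):
    # Record which thread actually executed the target.
    results["thread"] = threading.current_thread().name

# start(): the target runs in a NEW thread, in parallel with the caller.
r1 = {}
t1 = threading.Thread(target=whoami, args=(r1,), name="render-thread")
t1.start()
t1.join()

# run(): a plain method call -- the target runs in the CALLING thread
# and blocks it until the target returns.
r2 = {}
t2 = threading.Thread(target=whoami, args=(r2,), name="render-thread")
t2.run()
```

Some GL/pyglet backends only render reliably from the main thread, which is why run() can fix the blank window; the cost is that training can't proceed while the render loop occupies the main thread.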
What do you mean by that exactly? It should look like the GIF; the behavior is just different because the model is not trained.
Haha, that's the old one I did for the classic control environment. You'll need RocketLander-v0 instead of RocketLanderSimple-v0.
@EmbersArc Man, thanks a ton! Now I've got to figure out how to render it in parallel with the optimization. :) You already provided a link for that earlier (google-deepmind/pysc2#2 (comment)); I'll go through it now. Thanks again! :)
I'm unable to use 'RocketLander-v0' and keep getting the following error:
I've tried everything mentioned in the comments above and have a problem similar to @dfolz's: I too am getting the re-register id: Copy-v0 error.
@dfolz @aashray18521
@shubhch32
This should have the necessary changes on top of the latest version of gym: https://github.com/EmbersArc/gym/tree/addRocketLander
When I try to run it using python ppo.py, it gives me this error:
gym.error.UnregisteredEnv: No registered env with id: RocketLander-v0
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/projects/Landing-a-SpaceX-Falcon-Heavy-Rocket/agents/environment.py", line 136, in close
self.env.close()
AttributeError: 'GymEnvironment' object has no attribute 'env'
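The secondary AttributeError in the traceback is just fallout: GymEnvironment.__init__ raised before self.env was ever assigned, and the atexit handler then tried to close it. A defensive sketch of close() (the class name comes from the traceback; the body is an assumption, not the repo's actual code):

```python
class GymEnvironment:
    # Sketch only: the real class wraps gym.make() in __init__, which can
    # raise UnregisteredEnv before self.env is ever assigned.
    def close(self):
        # Guard against a partially-constructed object so the atexit
        # handler doesn't raise a second, confusing AttributeError.
        env = getattr(self, "env", None)
        if env is not None:
            env.close()
```

Fixing the environment registration makes this follow-on error disappear anyway; the guard just keeps the original error message readable.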