
gym.error.UnregisteredEnv: No registered env with id: RocketLander-v0 #2

Open
buddhashrestha opened this issue Feb 22, 2018 · 24 comments

Comments

@buddhashrestha

When I try to run it using: python ppo.py

It gave me this error:

gym.error.UnregisteredEnv: No registered env with id: RocketLander-v0
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/projects/Landing-a-SpaceX-Falcon-Heavy-Rocket/agents/environment.py", line 136, in close
self.env.close()
AttributeError: 'GymEnvironment' object has no attribute 'env'

@kris-at-ata

kris-at-ata commented Feb 22, 2018

apparently, RocketLander-v0 is not part of gym.

@ghost

ghost commented Feb 22, 2018

Yeah, there are a couple of problems in this repo. I'm not sure everything needed to run the demo is included. You need to clone this repo for the lander: https://github.com/EmbersArc/gym

Also, the actual trained model isn't in this repo. You can find trained models in the original repo (https://github.com/EmbersArc/PPO). However, my computer was not able to run ppo.py with these models. I am currently retraining the network (very slow). I will comment again if the model works.

@buddhashrestha
Author

Sure. Please let me know about your progress.

@kris-at-ata

kris-at-ata commented Feb 22, 2018

I had to change a lot of things, and I still don't have the setup from the video.
Just to try the algorithm, I changed 'RocketLander-v0' to 'CartPole-v0' in ppo.py, and I changed line 45 to load_model = False.
I also had to add an __init__.py file to ppo/ with:

    from .history import *
    from .models import *
    from .renderthread import *
    from .trainer import *

But at least I have CartPole-v0 working with PPO now.

@dfolz

dfolz commented Feb 22, 2018

@kris-at-ata that helped a lot. On this line (around 117):

    # Decide and take an action
    info = trainer.take_action(info, env, brain_name, steps, normalize_steps, stochastic=True)

I had to change stochastic to False in order for the code to run.

Any tips on how to run the code with stochastic = True?

@ghost

ghost commented Feb 22, 2018

I've finished running training for 2 million steps! The model is getting loaded from what I can see, but nothing is showing and I have render set to True. Anyone had any luck getting this to run?

@EmbersArc

EmbersArc commented Feb 22, 2018

As @ironjedi correctly pointed out, this environment is not part of gym. You can find it here: https://github.com/EmbersArc/gym. The necessary changes to the init files are already done in this repo.
If rendering doesn't work you might be missing pyglet (pip install pyglet).

@llSourcell while I appreciate you making this video, it feels like you're trying to include everything at the same time. In the end the result is incomplete and people won't be able to run it themselves. It looks like you didn't even run it yourself and just showed the GIF from reddit. It's somewhat misleading in my opinion and might be discouraging for those who actually try it out.

You have plenty of videos on MDPs, Reinforcement Learning, etc.. Why not focus on one thing at a time? It will give you more time to actually include the important links and information for people to get started.
Just my two cents, I like your videos a lot but also think sometimes less might be more.

@EmbersArc

@dfolz Are you running a discrete action space? Maybe that's where my part of the implementation broke at some point. Can you post an error message?

@ghost

ghost commented Feb 22, 2018

I agree with @EmbersArc that the included source code should at least run the demo shown.

I have gym, Box2D and pyglet installed. However, the rendering still doesn't work. Here is what I have so far when I try to run ppo.py:

Making new env: RocketLander-v0
Academy name: Gym Environment
Actions:
Size: 3, Type: continuous
States:
Size: 10, Type: continuous
2018-02-22 15:31:32.931619: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-22 15:31:32.931655: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-22 15:31:32.931663: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-02-22 15:31:32.931670: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-22 15:31:32.931676: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Loading Model...
2018-02-22 15:31:34.207 python[37363:2215996] WARNING: nextEventMatchingMask should only be called from the Main Thread! This will throw an exception in the future.

@buddhashrestha
Author

I was able to run it. I set up gym according to @EmbersArc. Then I set load_model = False

@ghost

ghost commented Feb 23, 2018

Alright, I'll try again.

So you don't load the model? How can it run inference if no model is loaded?

@dfolz

dfolz commented Feb 23, 2018

Okay, here is where I am at:
I have installed pyglet and gym.
I tried to load gym-master with:

    from envs import box2d

Now I am getting an error message that says I can't re-register id: Copy-v0.
Does anyone have any clue what that means or how to fix it? Here's the full error message:

[screenshot of the full error message]
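Both errors in this thread come out of gym's ID registry: making an unregistered ID raises UnregisteredEnv, and importing a module that calls register() a second time for the same ID raises the re-register error. Here is a minimal stdlib-only sketch of that mechanism (the class, method names, and message strings are illustrative, not gym's actual code):

```python
# Sketch of a gym-style environment registry (illustrative, not gym's code).
class Registry:
    def __init__(self):
        self.env_specs = {}

    def register(self, env_id, entry_point):
        # Importing a module whose registration code runs twice hits this,
        # which is what "Cannot re-register id: Copy-v0" means.
        if env_id in self.env_specs:
            raise ValueError("Cannot re-register id: %s" % env_id)
        self.env_specs[env_id] = entry_point

    def make(self, env_id):
        # "No registered env with id: ..." means the package that registers
        # RocketLander-v0 was never imported/installed.
        if env_id not in self.env_specs:
            raise LookupError("No registered env with id: %s" % env_id)
        return self.env_specs[env_id]()

registry = Registry()
registry.register("RocketLander-v0", lambda: "env instance")
print(registry.make("RocketLander-v0"))  # → env instance
```

Restarting the interpreter helps because it resets this registry, so the registering import only runs once.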

@buddhashrestha
Author

buddhashrestha commented Feb 23, 2018

@ironjedi In the code, it says that setting load_model = False initializes randomly instead of using the trained model.
But if I set it to True, it throws an error while trying to load. I haven't had a chance to debug it since; if I do, I will let you know.

@ghost

ghost commented Feb 23, 2018

Great, thx.

Really hoping to get this to run.

@EmbersArc

@buddhashrestha There is no model included, so load_model=True will not work.

@ironjedi it looks like you're on mac. This comment might help: google-deepmind/pysc2#2 (comment)
Otherwise you'll have to find a way to render it in the main thread.

@dfolz I don't know that implementation but Copy-v0 is definitely the wrong environment name.

@anubhavjaiswal03

anubhavjaiswal03 commented Feb 23, 2018

Also please change renderthread.start() to renderthread.run() in ppo.py

        if not render_started and render:
            renderthread = RenderThread(sess=sess, trainer=trainer_monitor,
                                        environment=env_render, brain_name=brain_name, normalize=normalize_steps, fps=fps)
            renderthread.run()
            render_started = True
    # Final save Tensorflow model

If you followed @EmbersArc's instructions, it should work just fine.
FYI: the rendering isn't what was shown on the demo. Makes me sad :(

@EmbersArc

EmbersArc commented Feb 23, 2018

Also please change renderthread.start() to renderthread.run() in ppo.py

That will probably make the rendering work (because it's in the main thread now) but training will stop.
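The difference is standard Python threading semantics: Thread.run() just invokes the target synchronously in whatever thread calls it, while Thread.start() spawns a new OS thread. A tiny stdlib demo:

```python
import threading

def whereami(results, key):
    # Record whether the target is executing on the main thread.
    results[key] = threading.current_thread() is threading.main_thread()

results = {}

t1 = threading.Thread(target=whereami, args=(results, "run"))
t1.run()    # executes the target synchronously in the *calling* thread

t2 = threading.Thread(target=whereami, args=(results, "start"))
t2.start()  # spawns a real OS thread
t2.join()

print(results)  # → {'run': True, 'start': False}
```

That is why run() fixes the macOS main-thread rendering issue but stalls training: the render loop now occupies the main thread instead of running alongside it.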

FYI: the rendering isn't what was shown on the demo. Makes me sad :(

What do you mean by that exactly? It should look like the GIF; just the behavior is different because the model is not trained.

@anubhavjaiswal03

What do you mean by that exactly? It should look like the GIF; just the behavior is different because the model is not trained.

The rendering looks like a black rocket on a white background
[screenshot: black rocket on a white background]

@EmbersArc

Haha that's the old one I did for the Classical Control environment. You'll need RocketLander-v0 instead of RocketLanderSimple-v0

@anubhavjaiswal03

@EmbersArc Man, thanks a ton! Now I've got to figure out how to render it in parallel with the optimisation. :) You already provided a link for that earlier (google-deepmind/pysc2#2 (comment)); I'll go through it now. Thanks again! :)

@aashray18521

I'm unable to use 'RocketLander-v0' and keep on getting the following error:

Traceback (most recent call last):
File "ppo.py", line 64, in <module>
env = GymEnvironment(env_name=env_name, log_path="./PPO_log", skip_frames=6)
File "/PPO-master/agents/environment.py", line 19, in __init__
self.env = gym.make(env_name)
File "/PPO-master/env/lib/python3.6/site-packages/gym/envs/registration.py", line 163, in make
return registry.make(id)
File "/PPO-master/env/lib/python3.6/site-packages/gym/envs/registration.py", line 119, in make
env = spec.make()
File "/PPO-master/env/lib/python3.6/site-packages/gym/envs/registration.py", line 85, in make
cls = load(self._entry_point)
File "/PPO-master/env/lib/python3.6/site-packages/gym/envs/registration.py", line 14, in load
result = entry_point.load(False)
File "/PPO-master/env/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2291, in load
return self.resolve()
File "/PPO-master/env/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2297, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/PPO-master/env/lib/python3.6/site-packages/gym/envs/box2d/__init__.py", line 1, in <module>
from gym.envs.box2d.lunar_lander import LunarLander
File "/PPO-master/env/lib/python3.6/site-packages/gym/envs/box2d/lunar_lander.py", line 4, in <module>
import Box2D
ModuleNotFoundError: No module named 'Box2D'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/PPO-master/agents/environment.py", line 136, in close
self.env.close()
AttributeError: 'GymEnvironment' object has no attribute 'env'

I've tried everything mentioned in the comments above and have a problem similar to @dfolz's, wherein I too am getting the re-register id: Copy-v0 error.
I am able to run 'CartPole-v0' in ppo.py instead of 'RocketLander-v0', though. Someone please help! :(
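The ModuleNotFoundError above means the Box2D binding itself is missing (it usually comes from the box2d-py package, or from installing gym[box2d]). A small stdlib-only preflight can confirm what is actually importable before running ppo.py; the module names here are the usual import names and may differ on your setup:

```python
# Stdlib-only preflight for the dependencies this thread keeps tripping
# over. Import names are the common ones (Box2D is provided by box2d-py).
import importlib.util

def have(module_name):
    """True if the module can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

for mod in ("gym", "Box2D", "pyglet"):
    print("%-8s %s" % (mod, "ok" if have(mod) else "MISSING"))
```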

@shubhch32

@dfolz @aashray18521
I faced the same problem.
Just restart the kernel and the error goes away.

@ksajan

ksajan commented Mar 18, 2019

@shubhch32
What do you mean by restarting the kernel?
I have tried to run the ppo.py file many times and it throws the same error @aashray18521 got, even after changing the things suggested in this issue. Can somebody help?

@EmbersArc

This should have the necessary changes on top of the latest version of gym: https://github.com/EmbersArc/gym/tree/addRocketLander
