
ModuleNotFoundError: No module named 'tensorflow.contrib' #5

Closed
tinof opened this issue Dec 4, 2020 · 8 comments
tinof commented Dec 4, 2020

What an interesting project, thanks for this. I've tried to run the demo to train an agent, but got this error:

```
~/freqtrade$ python deep_rl.py
2020-12-04 03:35:21.973385: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
  File "deep_rl.py", line 8, in <module>
    from stable_baselines.common.policies import MlpPolicy
  File "/home/virtual/freqtrade/.env/lib/python3.8/site-packages/stable_baselines/__init__.py", line 19, in <module>
    from stable_baselines.ddpg import DDPG
  File "/home/virtual/freqtrade/.env/lib/python3.8/site-packages/stable_baselines/ddpg/__init__.py", line 2, in <module>
    from stable_baselines.ddpg.ddpg import DDPG
  File "/home/virtual/freqtrade/.env/lib/python3.8/site-packages/stable_baselines/ddpg/ddpg.py", line 11, in <module>
    import tensorflow.contrib as tc
ModuleNotFoundError: No module named 'tensorflow.contrib'
```

Is it related to this?

> `tensorflow.contrib` is being removed in version 2.0

If so, this fork did not fix it for me:
https://github.com/deetungsten/stable-baselines

Any other ideas? Should I just revert to Python 3.7 and TensorFlow <2.x?

```
$ python3 -c 'import tensorflow as tf; print(tf.__version__)'
I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2.3.1
```

tinof commented Dec 4, 2020

So I guess for now TensorFlow <2.0 and Python <=3.7 are required.


TomMago commented Dec 5, 2020

Yes, stable-baselines only supports TF 1.x for now. There are some TF 2 forks, but I'm not sure any of them are far enough along yet.

hugocen (Owner) commented Dec 13, 2020

The stable-baselines library currently supports TensorFlow versions from 1.8.0 to 1.15.0, and does not work with TensorFlow 2.0.0 and above. (Source)

Use this command to install TensorFlow 1.15:

```
pip install --upgrade tensorflow==1.15.0
```
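
Note that the official TensorFlow 1.15 wheels only exist for Python 3.7 and below, so the Python 3.8 virtualenv shown in the traceback above would also need to be recreated with Python 3.7 (e.g. `python3.7 -m venv .env`) before installing.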

hugocen closed this as completed Dec 14, 2020

tinof commented Dec 22, 2020

Stable-baselines now supports TF2: https://github.com/DLR-RM/stable-baselines3

hugocen (Owner) commented Dec 27, 2020

@tinof

> Stable-baselines now supports TF2: https://github.com/DLR-RM/stable-baselines3

I am afraid that is a PyTorch version rather than a TF2 one. And last time I checked, that version was at an early stage, so it lacked several major features like multiprocessing.


zcythe commented Feb 19, 2021

> I am afraid that is a PyTorch version rather than a TF2 one. And last time I checked, that version was at an early stage, so it lacked several major features like multiprocessing.

Yes, you are right.
I was able to run freqtrade-gym with stable-baselines3 after some slight modifications.
By the way, awesome project.
To be honest, I am still fairly new to OpenAI Gym, stable-baselines, Ray, etc., so I would appreciate some insight and advice from the experts.
Can you enlighten me about this portion of code in freqtradegym.py?
[screenshot of the reward logic in freqtradegym.py]

Why is self._reward set to zero at the beginning of the function, and why does it have to be reset to zero when self._reward > 1.5?

By the way, I would like to implement Sortino/Omega as the reward scheme instead of a profit-only reward scheme.
Can anyone kindly point out which part of the code I should dive into? Is it the observation() function in freqtradegym.py?
[screenshot of the observation() function in freqtradegym.py]

hugocen (Owner) commented Apr 30, 2021

> Why is self._reward set to zero at the beginning of the function, and why does it have to be reset to zero when self._reward > 1.5?

Ah, I forgot why I did that. Maybe I was just trying something out.
This is an experimental project; feel free to remove it and do more experiments.

> By the way, I would like to implement Sortino/Omega as the reward scheme instead of a profit-only reward scheme. Can anyone kindly point out which part of the code I should dive into? Is it the observation() function in freqtradegym.py?

You can check out here.
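
For anyone looking for a starting point, here is a minimal sketch of a Sortino-style reward. The function name, signature, and windowing are hypothetical, not part of freqtradegym.py:

```python
import numpy as np

def sortino_reward(step_returns, target=0.0, eps=1e-9):
    """Sortino ratio over a window of per-step portfolio returns.

    Unlike the Sharpe ratio, only below-target returns count as risk,
    so upside volatility is not penalized.
    """
    excess = np.asarray(step_returns, dtype=float) - target
    downside = excess[excess < 0.0]
    # Downside deviation: root-mean-square of the below-target returns.
    downside_dev = np.sqrt(np.mean(np.square(downside))) if downside.size else 0.0
    return float(np.mean(excess) / (downside_dev + eps))
```

In a standard gym environment the reward is whatever step() returns, so a scheme like this would most likely plug into the code that computes self._reward, rather than into observation(), which only builds the state the agent sees.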

mojito228 commented

> Yes, you are right. I was able to run freqtrade-gym with stable-baselines3 after some slight modifications. Btw, awesome project. To be honest, I am still fairly new to OpenAI Gym, stable-baselines, Ray, etc.

Man, how did you do that? I'm trying to run it with stable-baselines3, but I have some problems with libraries, like ACER. Maybe you can share your code for the gym, strategy, and deep_rl?
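
For reference, a minimal sketch of what the stable-baselines3 side might look like (hypothetical, not zcythe's actual modification, and assuming a 2021-era stable-baselines3/gym install). ACER does not exist in stable-baselines3, so the sketch swaps in PPO, and CartPole stands in for the freqtrade-gym environment:

```python
import gym
from stable_baselines3 import PPO

# Stand-in environment; in the real project this would be the freqtrade-gym
# environment constructed as in deep_rl.py.
env = gym.make("CartPole-v1")

# stable-baselines3 has no ACER implementation, so PPO is used instead.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
```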
