Custom ~Agent.cs error - Cannot reshape array #32
Hi @eagleEggs, is the state space for the brain set to discrete or continuous? If it is discrete, it will expect a single variable whose value corresponds to the state (i.e., a single number indexing the state).
For all but the simplest environments you likely want to use continuous.
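For context, a minimal sketch of what a continuous-state `CollectState` can look like under the Agent API from that era; the `ExampleAgent` class and its `body` field are hypothetical placeholders, not code from this thread:

```csharp
// Minimal sketch, assuming the early ML-Agents Agent API (CollectState).
// ExampleAgent and the "body" Rigidbody are hypothetical placeholders.
using System.Collections.Generic;
using UnityEngine;

public class ExampleAgent : Agent
{
    public Rigidbody body;  // hypothetical component used for illustration

    // Continuous state space: return one float per state variable and set
    // "State Size" on the brain to the same count (4 here).
    public override List<float> CollectState()
    {
        List<float> state = new List<float>();
        state.Add(transform.position.x);
        state.Add(transform.position.y);
        state.Add(body.velocity.x);
        state.Add(body.velocity.y);
        return state;
    }

    // A discrete state space would instead return a single value: the index
    // of the current state, e.g. state.Add(currentStateIndex).
}
```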
OK, so I do have a list of floats, but I had it set to discrete. When I changed it to continuous, I then got a timeout when loading the environment within Jupyter. Switching back to discrete puts me back to the origin of this issue, so something is hanging it up with the continuous state mode.
Are you able to run the environment using a player brain for your agent within the editor?
Continuous state modes do work in the editor, however. Super awesome :D I really hope to get this working today so I can develop more logic this weekend >.<
You said you get a timeout error when you use continuous states. Are you sure you had your brain set to external?
Yes, it is definitely external.
So I loaded up a few old builds that worked with continuous, and they also time out. Rebooting everything and will test again. Maybe things got locked up over days of building and running these services :/
I get this error if I switch the state to discrete (in case that helps).
Back to functioning status... Not sure why, but adding the build export to my local firewall exceptions has resolved this (OSX). Also, continuous is working properly thanks to your advice, @awjuliani. Kind of strange... but thanks for the help! :D
Glad you were able to solve the problem @eagleEggs! I will add the OSX firewall recommendation to the Common Issues section of the documentation in case others run into the same issue.
If you have a bit more time, I have a general question (I can make a new post if you want). I'm referencing Ball3DAgent.cs in order to convert it to a transform.position x/y movement as opposed to rotation. During the course of training, nothing changes and the mean reward stays the same. The object keeps repeating a movement that gives a negative reward and doesn't adjust itself. I must be missing a key thing here.
Can you send us the CollectState method you implemented for the agent?
Within the Brain I'm configuring 2 states and 2 actions in the inspector. The CollectState method returns two states: `public override List<float> CollectState() { ... }`. I'm using the notebook with PPO.py and `train_model = true`. I'm not sure that I'm structuring the action handling correctly: `if (act[0] == 0f) { /* transform code x */ }`. However, with debugging, act[0] is always 0f.
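For readers hitting the same wall, here is a rough sketch of how a 2-state / 2-action agent like the one described above might be laid out under that early API; the class name, movement code, and reward value are placeholders, not the poster's actual implementation:

```csharp
// Rough sketch only: a 2-state / 2-action agent under the early Agent API
// (CollectState / AgentStep). All names and values here are placeholders.
using System.Collections.Generic;
using UnityEngine;

public class MoveAgent : Agent
{
    public float stepSize = 0.1f;  // hypothetical movement step

    // Two continuous states, matching "State Size = 2" on the brain.
    public override List<float> CollectState()
    {
        List<float> state = new List<float>();
        state.Add(transform.position.x);
        state.Add(transform.position.y);
        return state;
    }

    // With a discrete action space of size 2, act[0] arrives as 0f or 1f.
    public override void AgentStep(float[] act)
    {
        if (act[0] == 0f)
        {
            transform.position += Vector3.left * stepSize;
        }
        else if (act[0] == 1f)
        {
            transform.position += Vector3.right * stepSize;
        }

        // Reward shaping is environment-specific; this value is a placeholder.
        reward = -0.01f;
    }
}
```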
Kind of confused, because with similar code I can train the demos :/
Also, when I map the controls to player actions, they work perfectly. Something is missing to have it step through the actions properly on its own.
No issues now. I needed to rewrite some of my action code.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
I must be overlooking something; does anyone know what this is pointing to? :D
Maybe this works better...
It occurs no matter how many states I have within the code, and no matter what values I change in the state parameter within the inspector.
Also, when states are set to 0 within the inspector, there are no errors except in-game, where it expects 8, which brings us back to the issue :(