Custom ~Agent.cs error - Cannot reshape array #32

Closed
eagleEggs opened this issue Sep 22, 2017 · 18 comments
Labels
bug: Issue describes a potential bug in ml-agents.

Comments

@eagleEggs commented Sep 22, 2017

I must be overlooking something; does anyone know what this is pointing to? :D

[screenshot: valueerror_index_uml]

Maybe this works better...
[screenshot: valueerror_uml]

It occurs no matter how many states I have in the code, and no matter what values I change in the state parameter in the inspector.

Also, when states are set to 0 in the inspector, there are no errors except in game, where it expects 8, which brings us back to the issue :(

[screenshot: statesexpectederror_uml]

@awjuliani (Contributor)

Hi @eagleEggs, is the state space for the brain set to discrete or continuous? If it is discrete, it will expect a single variable whose value corresponds to the state (i.e. 5 corresponds to the agent being in the 5th room, or something like that). If it is continuous, it will expect a list of n variables.
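
For example, here is a rough sketch of both conventions (the class name and the currentRoom field are illustrative, not from your project):

using System.Collections.Generic;
using UnityEngine;

public class RoomAgent : Agent
{
    public int currentRoom; // hypothetical discrete state index

    // Continuous state space: return a list of n floats, and set the
    // Brain's "State Size" in the inspector to that same n.
    public override List<float> CollectState()
    {
        List<float> state = new List<float>();
        state.Add(transform.rotation.x);
        state.Add(transform.rotation.z);
        return state;
    }

    // Discrete state space: return a single entry whose value indexes the
    // state, e.g. return new List<float> { currentRoom }; // 5 = 5th room
}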

@awjuliani (Contributor)

For all but the simplest environments, you likely want to use continuous states.

@eagleEggs (Author)

OK, so I do have a list of floats; however, I had it set to discrete. When I changed it to continuous, I got a timeout when loading the environment within Jupyter. Switching back to discrete puts me back at the origin of this issue. So something is hanging it up in continuous state mode.

@awjuliani (Contributor)

Are you able to run the environment using a player brain for your agent within the editor?

@eagleEggs (Author)

Continuous state mode does work in the editor, however. Super awesome :D I really hope to get this working today so I can develop more logic this weekend >.<

@vincentpierre (Contributor)

You said you get a timeout error when you use continuous states. Are you sure you had your brain set to external?

@eagleEggs (Author)

Yes, it is definitely external.

@eagleEggs (Author)

So I loaded up a few old builds that worked with continuous, and they also time out. Rebooting everything and will test again. Maybe things got locked up over days of building and running these services :/

@awjuliani added the bug label Sep 23, 2017
@SoylentGraham

I get this error if I switch state to discrete. (In case that helps.)

@eagleEggs (Author)

Back to functioning status...

Not sure why, but adding the build export to my local firewall exceptions has resolved this (OSX).
I didn't have to do it before Friday, but that was the first time I switched networks since working with this, so maybe that had something to do with it.

Also, continuous is working properly thanks to your advice, @awjuliani.
I guess when I switched to continuous, it was in the midst of this issue evolving :/

Kind of strange... but thanks for the help! :D

[screenshot: continuous_mean_reward]

@awjuliani (Contributor)

Glad you were able to solve the problem @eagleEggs!

I will add the OSX firewall recommendation to the Common Issues section of the documentation in case others run into the same issue.

@eagleEggs (Author)

If you have a bit more time, I have a general question (I can make a new post if you want).

I'm referencing Ball3DAgent.cs in order to convert it to use transform.position x/y as opposed to rotation. During training, nothing changes and the mean reward stays the same. The object keeps repeating a movement that gives a negative reward and doesn't adjust itself. I must be missing something key here.

@vincentpierre (Contributor)

Can you send us the CollectState method you implemented for the Ball3DAgent.cs script? Also, are you sure you are in training mode? If you are using ppo.py, did you specify --train? And if you are using the notebook, did you set train_model = True?
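
For reference, the two switches look like this (the "..." stands in for whatever arguments you normally pass; only the flag and the variable matter here):

python3 ppo.py ... --train
train_model = True

The first is the command-line route; the second goes in the notebook before the training cells run.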

@eagleEggs (Author)

Within the Brain I'm configuring 2 states and 2 actions in the inspector.
(I've attempted more as well).

My CollectState method returns two states:

public override List<float> CollectState()
{
    List<float> state = new List<float>();
    state.Add(this.transform.rotation.x);
    state.Add(this.transform.rotation.z);
    return state;
}

I'm using the notebook with ppo.py, train_model = True.

I'm not sure that I'm structuring the action states correctly.
Based on the example, I should be able to specify within AgentStep (which seems the easiest method):

if (act[0] == 0f) { /* transform code for x */ }
if (act[0] == 1f) { /* transform code for z */ }

However, with debugging, act[0] is always 0f.
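
For reference, here is roughly what that looks like fleshed out (moveSpeed and the Vector3 offsets are placeholders standing in for my real transform code):

// Inside the custom Agent subclass:
public float moveSpeed = 0.1f; // illustrative placeholder

public override void AgentStep(float[] act)
{
    // With a discrete action space of size 2, act[0] arrives as 0f or 1f.
    if (act[0] == 0f)
    {
        transform.position += new Vector3(moveSpeed, 0f, 0f); // step along x
    }
    else if (act[0] == 1f)
    {
        transform.position += new Vector3(0f, 0f, moveSpeed); // step along z
    }
}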

@eagleEggs (Author)

Kind of confused, because I can train the demos with similar code :/

@eagleEggs (Author)

Also, when I map the controls to player actions, they work perfectly. Something is missing to have it step through the actions properly on its own.

@eagleEggs (Author)

No issues now. I needed to rewrite some of my action code.
Thanks for your time, and for developing this cool project :D


lock bot commented Jan 4, 2020

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked as resolved and limited conversation to collaborators Jan 4, 2020