A question about the result #34

Open
hzx-ctrl opened this issue May 13, 2021 · 5 comments
hzx-ctrl commented May 13, 2021

Hi,
I noticed that the duration of a task is decided by the code in node.py, which uses np.random.randint to generate the cost time. But if I replace it with np_random, which has a specified seed, the result I get is still different each time I train the model. I have no idea why this happens.
Thank you!
Hu

hzx-ctrl changed the title from "A quesTion" to "A question about the result" on May 13, 2021
hongzimao commented May 14, 2021

I think there's randomness in TensorFlow's action sampling too. As a result, each round of training follows a different action trajectory, and the model goes down a different path. Try fixing a random seed for that as well and see if the results become repeatable.
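
A minimal sketch of what seeding everything could look like, assuming the TF 1.x API (the exact place to put this, before the graph is built and before the worker processes start, depends on your setup):

```python
import random
import numpy as np
import tensorflow as tf  # assuming the TF 1.x API used by this repo

SEED = 42  # arbitrary example value

random.seed(SEED)         # Python's built-in RNG
np.random.seed(SEED)      # numpy RNG (e.g. the task-duration sampling)
tf.set_random_seed(SEED)  # graph-level seed for TF op randomness,
                          # including the action-sampling ops

# Each worker process should also be seeded explicitly (e.g. SEED plus the
# worker index) so that its experience collection is reproducible across runs.
```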

One other potential problem I remember is some numerical instability in TensorFlow. The training has multiple agents collecting experience in different processes. Mathematically, the order in which the experiences are assembled to compute the gradient shouldn't matter. But empirically, TensorFlow seems to produce slightly different gradients when the experiences are assembled in different orders. You might also want to keep this in mind if you want a repeatable outcome on every run. Hope these help!
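
As a toy illustration of why assembly order can matter numerically (this is generic floating-point behavior, not anything specific to this repo):

```python
import numpy as np

# Pretend these are gradient contributions collected from different workers.
grads = np.array([1e8, 1.0, -1e8, 1.0], dtype=np.float32)

# Floating-point addition is not associative, so summing the same values
# in a different order can give a different result.
order_a = (grads[0] + grads[1]) + (grads[2] + grads[3])  # -> 0.0 in float32
order_b = (grads[0] + grads[2]) + (grads[1] + grads[3])  # -> 2.0 in float32
print(order_a, order_b)
```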

hzx-ctrl (Author) commented

Thanks for your reply!
And since the algorithm samples different DAGs in each episode, how can we tell whether Decima has already converged?

hongzimao (Owner) commented

Look at the reward and entropy signals. You can set a criterion for training convergence (e.g., the signal flattens out, or stays within x standard deviations computed from the past n data points). This part is similar to standard RL training.
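
A rough sketch of one such criterion (the window size and threshold are made-up values; apply it to whatever reward or entropy curve you log per episode):

```python
import numpy as np

def has_converged(signal, window=200, num_std=0.5):
    """Treat training as converged once the newest point of `signal`
    (per-episode reward or entropy) stays within `num_std` standard
    deviations of the mean of the last `window` points."""
    if len(signal) < window:
        return False  # not enough history yet
    recent = np.asarray(signal[-window:])
    return abs(signal[-1] - recent.mean()) <= num_std * recent.std()
```

In practice you would probably also require the condition to hold for several consecutive episodes rather than a single one.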


hzx-ctrl commented Jun 8, 2021

Thank you very much, and sorry to bother you again. I trained with --num_init_dags 5 --num_stream_dags 10, and after several thousand episodes I find that the output of the policy network becomes so large that valid_mask can't work at all, which leads to illegal actions being taken. Could you please tell me whether this is normal, and what the possible reasons could be? Thanks!

hongzimao (Owner) commented

Hmmm, I don't recall valid_mask failing. If the policy network can output something, valid_mask has the same shape. I don't quite get what you mean by "the policy network is so large". Are the numeric values too large? That might lead to NaN when a very large number (basically treated as Inf) is multiplied by 0 at valid_mask. In another context where I have seen behavior like this, it was usually because the agent selected an invalid action in the previous step: because that action was masked with 0, gradient descent produces Inf for some parameters, and then things blow up. But I don't recall seeing this in this training code.
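
A toy reproduction of that failure mode in plain numpy (the actual masking code in this repo may differ); if this is indeed what happens, one common workaround is to mask in log space with a large negative constant instead of multiplying the exponentiated outputs by 0:

```python
import numpy as np

logits = np.array([5.0, 2000.0, 3.0])   # second logit has blown up
valid_mask = np.array([1.0, 0.0, 1.0])  # second action is invalid

# Multiplicative masking after exponentiation: exp(2000) overflows to inf
# (numpy warns about this), and inf * 0 is nan, so the whole softmax is nan.
probs = np.exp(logits) * valid_mask
print(probs / probs.sum())              # -> [nan nan nan]

# Masking in log space avoids the inf * 0 product: invalid entries get a
# very negative logit and end up with (numerically) zero probability.
masked = np.where(valid_mask > 0, logits, -1e9)
masked = masked - masked.max()          # shift for numerical stability
safe_probs = np.exp(masked) / np.exp(masked).sum()
print(safe_probs)                       # finite; invalid entry is ~0
```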

Here's a pre-trained model: #12. You might want to train with the same parameters and compare against that model.
