A question about the result #34
Comments
I think there's randomness in TensorFlow's action sampling too. As a result, each round of training produces a different action trajectory, and the model goes down a different path. Try fixing a random seed for that as well and see if the results are repeatable. One other potential problem I remember is some numerical instability in TensorFlow. The training has multiple agents collecting experience in different processes. Mathematically, the order in which the experiences are assembled to compute the gradient shouldn't matter, but empirically TensorFlow seems to produce different gradients when the experiences are assembled in a different order. You might also want to keep this in mind if you want a repeatable outcome on every run. Hope these help!
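For reference, a minimal sketch of seeding all the usual RNG sources at once, assuming the project uses Python's `random`, NumPy, and TensorFlow 1.x (the `fix_seeds` helper name is illustrative, not part of the repo):

```python
import random

import numpy as np
import tensorflow as tf


def fix_seeds(seed=42):
    """Seed every RNG that could influence action sampling and the workload."""
    random.seed(seed)          # Python-level randomness
    np.random.seed(seed)       # NumPy draws (e.g., task durations)
    tf.set_random_seed(seed)   # TensorFlow graph-level seed (TF 1.x API)
```

Note that even with these seeds fixed, sampling ops may also take per-op seeds, and the multi-process experience collection mentioned above can still introduce run-to-run differences.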
Thanks for your reply!
Look at the reward and entropy signals. You can set a criterion for training convergence (e.g., the signal flattens out, or stays within x standard deviations computed from the past n data points). This part is similar to standard RL training.
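A minimal sketch of that kind of convergence check, assuming the signal (e.g., episode reward or policy entropy) is logged once per episode; the function name and default thresholds are illustrative:

```python
import numpy as np


def has_converged(signal, n=100, x=1.0):
    """Flag convergence when the newest value of a training signal stays
    within x standard deviations of the mean over the previous n values."""
    if len(signal) < n + 1:
        return False                       # not enough history yet
    window = np.asarray(signal[-(n + 1):-1])
    mean, std = window.mean(), window.std()
    return abs(signal[-1] - mean) <= x * std
```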
Thank you very much, and sorry to bother you again. I trained with --num_init_dags 5 --num_stream_dags 10, and after several thousand episodes I found that the output of the policy network becomes so large that valid_mask can't work at all, which leads to taking illegal actions. Could you please tell me whether this is normal, and what possible reasons could cause it? Thanks!
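For context, a common way to apply a validity mask is to add a large negative constant to the logits of invalid actions before the softmax. The sketch below illustrates that pattern (the `masked_softmax` helper and `mask_value` are illustrative assumptions, not necessarily the exact implementation in this repo) and shows how exploding logits can overwhelm a finite mask offset:

```python
import numpy as np


def masked_softmax(logits, valid_mask, mask_value=-1e4):
    """Softmax over logits with illegal actions pushed toward zero probability.
    valid_mask is 1 for legal actions and 0 for illegal ones."""
    masked = logits + (1.0 - valid_mask) * mask_value
    exp = np.exp(masked - masked.max())
    return exp / exp.sum()


# If the network's logits grow to the same order as |mask_value|, an illegal
# action's masked logit can still exceed the legal ones, so the policy may
# sample illegal actions despite the mask.
logits = np.array([2e4, 1.0, 0.5])   # exploded logit on an illegal action
mask = np.array([0.0, 1.0, 1.0])     # first action is illegal
print(masked_softmax(logits, mask))  # illegal action still gets nearly all the mass
```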
Hmmm, I don't recall. Here's a pre-trained model: #12. You might want to try the same parameters and compare against that model?
Hi,
I noticed that the duration of a task is decided by the code in node.py, which uses np.random.randint to generate the execution time. But if I replace it with an np_random instance that has a specified seed, the result I get is still different each time I train the model. I have no idea why this happens.
Thank you!
Hu
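A minimal sketch of the seeding approach described in the question, assuming task durations are drawn from a per-environment np.random.RandomState (the Node class and sample_duration method here are hypothetical stand-ins for the actual code in node.py). Even with the workload made deterministic this way, the action sampling and gradient-assembly order mentioned above can still make training runs differ:

```python
import numpy as np


class Node:
    def __init__(self, np_random):
        # np_random should be a seeded np.random.RandomState shared by the env
        self.np_random = np_random

    def sample_duration(self, low, high):
        # Draw the task's execution time from the seeded generator instead of
        # the global np.random, so the generated workload repeats across runs.
        return self.np_random.randint(low, high)


env_rng = np.random.RandomState(seed=42)
node = Node(env_rng)
print(node.sample_duration(1, 100))  # same value on every run with seed=42
```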