Mean Prediction Interval Width (MPIW) and Kullback-Leibler Divergence (KL-Divergence) #2
Upper and lower bounds are calculated through sampling. For example, after you read the data:
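As a rough illustration (not the original snippet from the thread), sampling-based bounds for one data point could look like the following. The names and the ZINB parameterization — `pi` (dropout probability), `mu` (negative-binomial mean), `theta` (dispersion) — are assumptions for the sketch:

```python
# Hypothetical sketch: estimate prediction-interval bounds for a
# zero-inflated negative binomial (ZINB) by Monte Carlo sampling.
# pi = dropout probability, mu = NB mean, theta = NB dispersion.
import numpy as np

rng = np.random.default_rng(0)

def zinb_sample(pi, mu, theta, n_samples=1000, rng=rng):
    """Draw samples from a ZINB with the (pi, mu, theta) parameterization."""
    # NB with mean mu and dispersion theta: success prob p = theta / (theta + mu)
    p = theta / (theta + mu)
    nb = rng.negative_binomial(theta, p, size=n_samples)
    dropout = rng.random(n_samples) < pi  # zero-inflation mask
    return np.where(dropout, 0, nb)

def interval_bounds(pi, mu, theta, alpha=0.05):
    """Empirical (alpha/2, 1 - alpha/2) quantiles of the sampled counts."""
    s = zinb_sample(pi, mu, theta)
    return np.quantile(s, alpha / 2), np.quantile(s, 1 - alpha / 2)

lo, hi = interval_bounds(pi=0.3, mu=5.0, theta=2.0)
```

For sparse data (large `pi`, small `mu`), the lower quantile will typically be 0 while the upper quantile stays positive.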
I would not recommend using KL-Divergence, as I realized it is not suitable for evaluating point distributions in our case, but MPIW is fine. I define the KL-Divergence at the whole-dataset level, which deviates from its original definition. But here is the code block I used for the paper:
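Whatever the exact snippet, MPIW as commonly defined is simply the mean width of the per-point prediction intervals; a minimal sketch (the array names `lower`/`upper` are hypothetical, holding the per-point bounds L_i and U_i):

```python
# Hypothetical sketch of Mean Prediction Interval Width (MPIW):
# the average of (U_i - L_i) over all data points.
import numpy as np

def mpiw(lower, upper):
    """Mean Prediction Interval Width: mean of (upper_i - lower_i)."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return float(np.mean(upper - lower))

# e.g. mpiw([0, 1, 2], [4, 5, 6]) -> 4.0
```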
Thank you very much!
When I tried to apply the code you provided to my task, I found that the upper and lower bounds came out as 0. I suspect this may have something to do with the three ZINB parameters predicted by my deep learning model. I'm wondering whether I need to apply a de-normalization (inverse transform) step to get the right result? If you have run into a similar situation or have any suggestions, I would appreciate your guidance.
Not always, if you train correctly. Here is the code for plotting Figure 4(b):
There is no normalization step, as you need to fit the distribution to your data directly to calculate the likelihood. Some points might be fitted with both upper and lower bounds equal to 0, but that is not always the case. If your data are very sparse, the lower bounds will likely always be 0, but the upper bound should be some value larger than 0.
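For reference, fitting raw counts directly means evaluating the ZINB likelihood on them without any rescaling; a hedged sketch, again assuming the `pi` (dropout probability), `mu` (NB mean), `theta` (dispersion) parameterization:

```python
# Hypothetical sketch: ZINB log-likelihood of raw observed counts.
# P(x) = pi * 1{x=0} + (1 - pi) * NB(x; mu, theta)
import numpy as np
from scipy.stats import nbinom

def zinb_logpmf(x, pi, mu, theta):
    """Elementwise ZINB log-probability of counts x."""
    p = theta / (theta + mu)  # scipy's nbinom success probability
    zero_part = np.log(pi + (1 - pi) * nbinom.pmf(0, theta, p))
    nonzero_part = np.log1p(-pi) + nbinom.logpmf(x, theta, p)
    return np.where(x == 0, zero_part, nonzero_part)

counts = np.array([0, 0, 3, 1, 0])  # toy sparse data
ll = zinb_logpmf(counts, pi=0.4, mu=2.0, theta=1.5).sum()
```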
Thank you very much! I found the problem!
Hi, author. When I calculate the evaluation metrics, I am confused about how the upper bound U and the lower bound L are determined. Would it be convenient for you to provide the code for MPIW and KL-Divergence? Thank you!