
Uncertainties Scaling Improvements & Info gain debugged (fixed) #88

Merged
16 commits merged Apr 17, 2020

Conversation

AdityaSavara
Owner

@AdityaSavara AdityaSavara commented Apr 17, 2020

This branch contains a fairly substantial set of improvements. Several arguments have been added to UserInput, a few "if" statements have been added to the code, and some bugs have been fixed.

  1. Uncertainties scaling had a bug in the scaling of the prior's covmat. Now fixed.
  2. Uncertainties scaling can now be turned off by setting parameter_estimation_settings['scaling_uncertainties_type'] = 'off'.
  3. Uncertainties scaling can now be done by a fixed constant, using a string that is a number, like this: parameter_estimation_settings['scaling_uncertainties_type'] = '1e3'.
  4. Info gain had a bug: the scaling of the posterior by the evidence contained an error that was introduced during the conversion to log_posterior. That bug has been fixed.
  5. A new variable has been introduced, parameter_estimation_settings['mcmc_info_gain_cutoff']. If that variable is used, by setting it to 1E-4 for example, then small posterior and small prior values will be excluded from the info gain calculation when their pdf is below that threshold.
  6. A minor feature was added to complement feature 3: parameter_estimation_settings['undo_scaling_uncertainties_type'] = True will undo a fixed-constant scaling (item 3 above) before returning the prior in getPrior. This currently only works for the fixed-constant case, so it is not normally useful. It might be possible to extend it to the "std" scaling, but how to do so is not obvious, and I won't worry about it.
  7. I merged the Langmuir example into this branch before making the pull request to master, so that example is added to master in this merge also.
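The new settings described above can be collected into a short sketch. The dictionary below is illustrative only: the keys come from this PR's text, but the surrounding UserInput structure is assumed, and in real use only one 'scaling_uncertainties_type' value would be set at a time.

```python
# Illustrative sketch of the new settings from this PR (not project code).
parameter_estimation_settings = {}

# Item 2: turn uncertainties scaling off entirely.
parameter_estimation_settings['scaling_uncertainties_type'] = 'off'

# Item 3: scale by a fixed constant, given as a numeric string.
parameter_estimation_settings['scaling_uncertainties_type'] = '1e3'

# Item 5: exclude posterior/prior samples whose pdf falls below this
# threshold from the info gain calculation.
parameter_estimation_settings['mcmc_info_gain_cutoff'] = 1E-4

# Item 6: undo a fixed-constant scaling before getPrior returns.
parameter_estimation_settings['undo_scaling_uncertainties_type'] = True
```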

walke299ericwalk and others added 15 commits April 15, 2020 20:00
To temporarily bypass scaling bugs, this branch includes the following line added:

 self.UserInput.scaling_uncertainties = self.UserInput.scaling_uncertainties/self.UserInput.scaling_uncertainties

That sets the scaling factors for uncertainties to 1.0 for every parameter.
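As a standalone illustration (not project code), dividing a NumPy array by itself produces 1.0 in every entry, which is what the bypass line relies on; note that any zero entry would produce nan (0/0), so the bypass assumes all scaling factors are nonzero.

```python
import numpy as np

# Hypothetical scaling factors for three parameters.
scaling_uncertainties = np.array([0.5, 3.0, 1e4])

# The bypass line: dividing the array by itself yields all ones.
scaling_uncertainties = scaling_uncertainties / scaling_uncertainties
print(scaling_uncertainties)  # [1. 1. 1.]
```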
Corrected scaling bug, added "constant/fixed" scaling

Also made some directories to investigate the effects of scaling on Example 12
Removing an unnecessary print statement, and converting from try/except to an "if" statement for zero-probability cases of simulation failure.
UserInput now has a variable like this:

UserInput.parameter_estimation_settings['mcmc_info_gain_cutoff'] = 1E-5
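A minimal sketch of how such a cutoff might work, assuming a simple discrete info gain sum over sampled pdf values; the function name and signature here are hypothetical, not the project's actual implementation.

```python
import numpy as np

def info_gain_with_cutoff(posterior_pdf, prior_pdf, cutoff=1E-5):
    """Hypothetical sketch: discrete info gain with an mcmc_info_gain_cutoff."""
    posterior_pdf = np.asarray(posterior_pdf, dtype=float)
    prior_pdf = np.asarray(prior_pdf, dtype=float)
    # Keep only points where both densities exceed the cutoff, avoiding
    # log-of-near-zero blowups from numerically negligible samples.
    keep = (posterior_pdf > cutoff) & (prior_pdf > cutoff)
    p, q = posterior_pdf[keep], prior_pdf[keep]
    return np.sum(p * np.log(p / q))
```

Samples below the cutoff simply drop out of the sum, so a stray 1e-9 density no longer dominates (or breaks) the calculation.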
parameter_estimation_settings['undo_scaling_uncertainties_type'] = True will undo the scaling. Now we see that the 1E13 case and the unscaled case give the same result.
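A sketch of why the 1E13 and unscaled cases should now agree, under the assumption that a fixed-constant scaling divides the prior covmat by the square of the constant; all names and values below are illustrative, not taken from the project code.

```python
import numpy as np

scaling_constant = 1e13  # e.g. from scaling_uncertainties_type = '1e13'
covmat_prior = np.array([[4.0, 0.0],
                         [0.0, 9.0]])

# Assumed convention: scaling parameters by c scales the covmat by 1/c**2.
scaled_covmat = covmat_prior / scaling_constant**2

# Undoing the fixed scaling, as undo_scaling_uncertainties_type = True would,
# recovers the original covmat before getPrior returns.
unscaled_covmat = scaled_covmat * scaling_constant**2
```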
…ff'] feature

UserInput.parameter_estimation_settings['mcmc_info_gain_cutoff']
I think I found an error in this line:

post_burn_in_log_posteriors_vec = self.post_burn_in_log_posteriors_un_normed_vec/self.evidence

I think it should instead be:

post_burn_in_log_posteriors_vec = np.log( np.exp(self.post_burn_in_log_posteriors_un_normed_vec) / self.evidence )
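A standalone numerical check of the proposed fix: in log space, dividing by the evidence is equivalent to subtracting log(evidence), and the subtraction form avoids overflow/underflow from exponentiating large-magnitude log posteriors. The values below are made up for illustration.

```python
import numpy as np

# Hypothetical un-normalized log posteriors and evidence value.
post_burn_in_log_posteriors_un_normed_vec = np.array([-10.0, -12.5, -9.1])
evidence = 0.37

# The corrected line from this PR, written out directly...
corrected = np.log(np.exp(post_burn_in_log_posteriors_un_normed_vec) / evidence)

# ...is mathematically the same as subtracting log(evidence),
# which is safer numerically for very negative log posteriors.
equivalent = post_burn_in_log_posteriors_un_normed_vec - np.log(evidence)
```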
Competitive Langmuir adsorption example