
Closes #3051 - logp numpy array input fixed #3836

Merged: 12 commits into pymc-devs:master on May 5, 2020
Conversation

@Ahanmr (Contributor) commented on Mar 18, 2020

Converts the plain Python `int` returned by `value.shape[-1]` to an array (e.g. `array(2, dtype=int32)`) so that the subsequent `astype` call succeeds, allowing `logp(self, value)` in `pymc3/distributions/multivariate.py` to be called with a numpy array.
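For context, a minimal sketch of the failure this PR addresses (the distribution parameters are illustrative; pymc3 3.x / theano API assumed):

```python
import numpy as np
import pymc3 as pm

dist = pm.MvNormal.dist(mu=np.zeros(2), cov=np.eye(2))

# With a numpy array, value.shape[-1] is a plain Python int, so the old
# value.shape[-1].astype(theano.config.floatX) raised
# "AttributeError: 'int' object has no attribute 'astype'".
# After this fix the call evaluates cleanly:
print(dist.logp(np.array([0.0, 0.0])).eval())
```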

Ahanmr added 3 commits March 19, 2020 00:11
Converts 'int' type to <TensorType(int64,Scalar)> to parse value to `astype` and allows arguments to `logp(self,value)` when called with numpy array.
Allows `logp(self,value)` to take `value` input of type numpy array without errors
@Ahanmr (Contributor, Author) commented on Mar 18, 2020

@twiecki This should do the job of allowing arguments for logp() to be of type numpy array, right?

@codecov (bot) commented on Mar 18, 2020

Codecov Report

Merging #3836 into master will not change coverage.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #3836   +/-   ##
=======================================
  Coverage   83.45%   83.45%           
=======================================
  Files         103      103           
  Lines       14178    14178           
=======================================
  Hits        11832    11832           
  Misses       2346     2346           
Impacted Files                          Coverage Δ
pymc3/distributions/multivariate.py     78.13% <100.00%> (ø)

@Ahanmr (Contributor, Author) commented on Mar 18, 2020

@michaelosthege Any idea how I can fix this error in base_hmc.py?

@michaelosthege (Member) left a comment:
Don't worry about the codecov thing. It often makes mistakes in counting the diff.

However, it'd be good if you could add a test to make sure this bug doesn't appear again. Around https://github.com/pymc-devs/pymc3/blob/master/pymc3/tests/test_distributions.py#L778 there are already other test cases dealing with the MvNormal.
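A regression test along those lines might look like this sketch (the test name and expected value are illustrative, not the test that eventually landed):

```python
import numpy as np
import pymc3 as pm


def test_mvnormal_logp_accepts_numpy_array():
    # logp should evaluate without error when `value` is a plain numpy array.
    dist = pm.MvNormal.dist(mu=np.zeros(3), cov=np.eye(3))
    logp = dist.logp(np.zeros(3)).eval()
    # Standard normal in 3 dimensions, evaluated at the mean: -k/2 * log(2*pi).
    np.testing.assert_allclose(logp, -0.5 * 3 * np.log(2 * np.pi))
```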

@@ -327,7 +327,7 @@ def logp(self, value):
         TensorVariable
         """
         quaddist, logdet, ok = self._quaddist(value)
-        k = value.shape[-1].astype(theano.config.floatX)
+        k = intX(value.shape[-1]).astype(theano.config.floatX)

Inline review comment (Member):
If you want k to become floatX, you can use floatX(…) directly.
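As a side note (an editorial illustration, not part of the review): `floatX` here refers to the `pymc3.theanof.floatX` helper, which casts theano tensors via `.astype` and falls back to `np.asarray` for plain Python scalars, so it handles both tensor and numpy-array inputs:

```python
import numpy as np
from pymc3.theanof import floatX

value = np.array([0.0, 0.0])   # numpy input: value.shape[-1] is a plain Python int
k = floatX(value.shape[-1])    # falls back to np.asarray(2, dtype=theano.config.floatX)
print(k, k.dtype)
```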

@Ahanmr (Contributor, Author) replied:
Since `value.shape[-1]` returns a plain int, I was using `intX` to convert it to an array form so that the error is fixed, but I get your point: we can directly do `k = floatX(value.shape[-1])`.
Also, do you mean adding a test for this bug? I'll do that right away.

@rpgoldman (Contributor) commented:
@Ahanmr Could you add a comment to explain why you are using value.shape[-1]? I assume that this is because you are assuming something about the shape of value, and it would be nice to either comment about this or add an assert to verify that this assumption holds.

Also, it looks like you are first turning the value into an intX and then into a floatX. Is there some reason you can't simply turn it into a floatX value and skip the intermediate step?

@Ahanmr (Contributor, Author) commented on Mar 18, 2020

@rpgoldman This is in reference to issue #3051; based on that, I converted the value into an array form to allow the `astype()` operation, but I agree it can be written directly as `k = floatX(value.shape[-1])`, and I've already made that change. I'm not too sure why `shape[-1]` in particular is taken here, but since `logp` calculates a log-probability, we need the dimensionality of the input in the calculation, right?
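For context (an editorial note, not part of the original thread): `value.shape[-1]` is the event dimension k of the multivariate normal, which is exactly the coefficient of the 2π term in the log density assembled from `quaddist` and `logdet`:

$$
\log p(x \mid \mu, \Sigma) = -\tfrac{k}{2}\log(2\pi) - \tfrac{1}{2}\log\lvert\Sigma\rvert - \tfrac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)
$$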

Ahanmr added 3 commits March 19, 2020 03:59
Added the deprecation of `sd` with `sigma` in newer version with DeprecationWarning on usage of `sd`.
@Ahanmr (Contributor, Author) commented on Mar 19, 2020

@AlexAndorra I'm not sure why I'm getting this codecov error, any suggestions? Thanks a lot.

@michaelosthege (Member) commented:
I'd like to get this closed, but this branch is a bit messed up. At some point you had the change I requested (commit 9df27b3), but then you reverted it (commit 1d53dcf). Also, the change to the release notes is about a different issue.

Git is hard, so here's what I'd like to recommend to fix the history:

  1. Make sure you have two remotes configured, e.g. `origin` (pymc-devs) and `mine` (your fork).
  2. Check out `master` from `origin`.
  3. Make the fix and commit it (to `master` on your local repository).
  4. Force-push to your own GitHub remote (`git push mine -f`).

The force-push overwrites the branch on your forked repository with the one that has a clean history. Since this PR references that branch, the history here will also clear up.

@twiecki (Member) commented on May 5, 2020

This looks pretty good to me and a squash-merge should take care of everything.

@twiecki (Member) commented on May 5, 2020

Did we have tests at one point?

@michaelosthege (Member) commented:
We didn't test it. I'll try to get it integrated with the other logp tests.

@twiecki merged commit 18f1e51 into pymc-devs:master on May 5, 2020
@twiecki (Member) commented on May 5, 2020

Great stuff -- thanks @Ahanmr and @michaelosthege!
