
Add helper function for easier Random Walks specification #4047

Closed
AlexAndorra opened this issue Aug 11, 2020 · 9 comments

@AlexAndorra
Contributor

AlexAndorra commented Aug 11, 2020

This issue is up for grabs 🥳

Currently, users can use theano.tensor.cumsum() to create any random walk they need. We do have pm.GaussianRandomWalk though, a user-friendly wrapper whose main advantage is letting one specify a different init distribution, which is possible but awkward with the .cumsum() approach (it requires tt.stack()).
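For concreteness, here is the cumsum construction sketched in plain NumPy rather than theano.tensor (the tensor ops are analogous: tt.cumsum / tt.stack instead of np.cumsum / np.concatenate). This is an illustration of the pattern, not PyMC API:

```python
import numpy as np

rng = np.random.default_rng(42)

# A Gaussian random walk is the cumulative sum of i.i.d. innovations.
innovs = rng.normal(loc=0.0, scale=1.0, size=100)
walk = np.cumsum(innovs)

# A custom initial value is the awkward part with plain cumsum: you have
# to draw it separately, shift the walk, and prepend it yourself (the
# theano equivalent is where tt.stack comes in).
init = rng.normal(loc=5.0, scale=2.0)  # draw from some init distribution
walk_with_init = np.concatenate([[init], init + walk])

assert walk_with_init.shape == (101,)
```

The increments of the assembled series recover the innovations, i.e. `np.diff(walk_with_init)` equals `innovs`, which is what makes the cumsum parameterization equivalent to the walk itself.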

A nice addition to the library would be to turn this into a small helper function, like pm.RandomWalk('x', pm.StudentT.dist(), init=pm.Flat.dist()). This would give users more flexibility in how they specify random walks 🎰🚶‍♂️

Feel free to signal your interest here or ask any questions if you want to work on a PR for this 🖖

@chandan5362
Contributor

Hi @AlexAndorra, is this issue still open? If so, I would like to work on it; a little more information about the issue would be much appreciated.

@AlexAndorra
Contributor Author

Hi @chandan5362, yes this is still up for grabs, and thanks for your interest!

However, let me first check with @brandonwillard and @michaelosthege that it's still worth the investment: the upcoming v4 will use the new RandomVariable op, which all distributions will adopt in place of the current pm.Distribution.
So, is it worth adding the wrapper described in the issue, or will it have to be completely revamped anyway when switching to the Aesara backend?

@brandonwillard
Contributor

brandonwillard commented Jan 20, 2021

I would definitely recommend that efforts be directed toward Aesara and the PyMC3 RandomVariable updates. After a certain point, those changes will radically simplify time-series work.

@chandan5362
Contributor

I am looking forward to your response, @AlexAndorra.

@AlexAndorra
Contributor Author

I'd defer to Brandon's opinion then; what he's saying makes sense. @twiecki may have insights on this too.
If we put a pin in this issue until v4 arrives, we can redirect you to issues about converting pm.Distribution to the new RandomVariable when they appear, @chandan5362. This will be the backbone of the new version and, as Brandon said, will simplify the time-series work underlying this issue!

@chandan5362
Contributor

chandan5362 commented Jan 21, 2021

Well, thanks @AlexAndorra!
I will be happy to work on such an issue. Please do ping me when one appears.

@ricardoV94
Member

Done in #6072 and #6131

@ghost

ghost commented Nov 28, 2022

Have these changes propagated into the various example notebooks that use random walks? If not, we probably need to open an issue in pymc-examples.

@ricardoV94
Member

ricardoV94 commented Nov 28, 2022

Hmm good point. There is however this issue: #6098

Meaning we don't yet want to recommend using the class for unobserved RWs, because it will sample worse than doing innovs.cumsum().
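To make the distinction concrete (a plain-NumPy sketch of the reparameterization, not PyMC API): the innovs.cumsum() approach puts the free parameters on the i.i.d. innovations and derives the walk deterministically, whereas the class puts them on the correlated walk values themselves. The two parameterizations are linked by a cumsum/diff bijection:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Non-centered" style: the free parameters are i.i.d. innovations,
# which a sampler handles well; the walk is a deterministic transform.
innovs = rng.normal(0.0, 1.0, size=10)
walk = np.cumsum(innovs)

# "Centered" style would treat the strongly autocorrelated walk values
# as the free parameters. Differencing maps back to the innovations.
recovered_innovs = np.diff(walk, prepend=0.0)

assert np.allclose(recovered_innovs, innovs)
```

The model is the same either way; only the geometry the sampler sees changes, which is why the cumsum form tends to sample better for unobserved walks.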
