[DOC] Improved walkthrough with GIFs #765
Conversation
* Update gitignore
* New gifs and notebook.
* Improve gifs.
* More lovely figures.
* Reduce gifs for speed-testing.
* Start adding.
* Some more work.
* Update multi-echo_walkthrough.rst
* Update multi-echo_walkthrough.rst
Codecov Report
@@           Coverage Diff           @@
##             main     #765   +/-   ##
=======================================
  Coverage   93.19%   93.19%
=======================================
  Files          27       27
  Lines        2204     2204
=======================================
  Hits         2054     2054
  Misses        150      150

Continue to review full report at Codecov.
I agree that a Jupyter Book would fit the walkthrough much better. GIFs are definitely useful to understand the content, but I wonder if a Dash app or an Observable notebook would be nicer. Folks would get to "learn by playing" with interactive plots.
I'd vote for "all of the above" if folks have time and interest! Specifically on this point:
Do we have to host the GIFs here? Could we host them on e.g. OSF and just render them in the docs? I've also successfully hosted animated SVGs as Gists in the past, which could be another option?
Would it be difficult to set up the Jupyter Book in another repo and just let folks add to it as they're willing/able?
That seems feasible. I'll try it out.
docs/multi-echo_walkthrough.rst
Outdated
.. image:: https://osf.io/m7aw3/
   :alt: physics_signal_decay.png
I just requested access to that OSF link to see what it includes!
I think you should modify it to use the "render" link, e.g.
https://mfr.osf.io/render?url=https://osf.io/mx4ku/?direct%26mode=render%26action=download%26mode=render
But update the URL as appropriate!
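For reference, a rough sketch of what the updated directive could look like, reusing the example render URL above (the OSF GUID mx4ku and the alt text are just placeholders carried over from this thread; each figure would need its own):

.. image:: https://mfr.osf.io/render?url=https://osf.io/mx4ku/?direct%26mode=render%26action=download%26mode=render
   :alt: physics_signal_decay.png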
Here's what's happening and (at least) two suggestions.
During the build process, Sphinx is trying to copy the files into the generated _build directory. Because these are embedded images on a web page, this process fails and results in what you're showing above. To resolve this, we could (in no particular order):
- Update those image tags to iframes, to embed the web pages (a sketch of this follows the list).
- Fetch the relevant files using cURL or the OSF API during the documentation build process so they would exist appropriately in the _build directory.
- Switch to another provider (or revert to GitHub) where we can access the files themselves.
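If we go the iframe route, a minimal sketch of the rst, assuming the same example render URL as above (the width and height values are placeholders, and we'd need to confirm that OSF allows its render pages to be embedded in iframes):

.. raw:: html

   <iframe src="https://mfr.osf.io/render?url=https://osf.io/mx4ku/?direct%26mode=render%26action=download%26mode=render" width="100%" height="400" frameborder="0"></iframe>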
Do you have a preference, @tsalo?
Thanks for digging into this.
Do you have a preference, @tsalo?
I'm not sure. I'm leaning toward iframes for right now, but I also really want to try out the Jupyter Book idea, in which case the GIFs would be generated by, and rendered in, the notebooks.
I'm just going to close this in favor of the Jupyter Book. If that doesn't work out for some reason, I can reopen this.
Closes #690. Still a WIP.
https://tedana--765.org.readthedocs.build/en/765/multi-echo_walkthrough.html
The more I think about it, the more I wonder whether we should move things like the multi-echo walkthrough into a different location, like a Jupyter Book, with https://naturalistic-data.org as inspiration.
GIFs are going to take up a lot of space and bulk up our git repository, but I do think they're really helpful for learning about multi-echo fMRI.
Changes proposed in this pull request: