add workflow for link checking and spelling #58
base: master
Conversation
```yaml
      - master
  push:
    branches:
      - master
```
Adding this here, aside from doing one last check on master (which would get a little more visibility... I guess badges would be better!), also ensures all the caches are created in that "context": PRs can then use them as a fallback with restore-keys.
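A minimal sketch of that fallback pattern (the action version and cache path are assumptions, not from this PR; the key prefixes and requirements file match the diff): a cache saved on master under a prefixed key can be restored in PR builds via prefix matching on restore-keys.

```yaml
# Illustrative only: actions/cache version and path are hypothetical.
- name: Cache pip
  uses: actions/cache@v2
  with:
    path: ~/.cache/pip
    # Exact key: unique per requirements-file hash
    key: pip-check-links-${{ hashFiles('.ci_support/requirements-check-links.txt') }}
    # Fallbacks: a PR with no exact hit restores the most recent cache
    # whose key matches one of these prefixes (e.g. one saved on master)
    restore-keys: |
      pip-check-links-
      pip-
```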
A couple quick comments in there - thanks for taking a stab at building some infra for QC!
```yaml
- name: Find broken links
  run: |
    bash .ci_support/check_links.sh
```
Another option is that we could use the link checker from Sphinx. Jupyter Book supports users calling it with `jupyter-book build . --builder linkcheck`. It's a little finicky in my experience, but maybe that's true of all link checkers. Could simplify the build process a bit?
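For comparison, that alternative step might look something like this (the step shape is assumed; the command is the one quoted above):

```yaml
# Alternative sketch: let Sphinx's linkcheck builder do the work via Jupyter Book.
- name: Find broken links
  run: |
    jupyter-book build . --builder linkcheck
```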
```yaml
- name: Find misspelled words
  shell: bash -l {0}
  run: |
    bash .ci_support/check_spelling.sh
```
What is the likelihood that this is going to be triggered on a regular basis just because people are using non-standard words in JEPs etc? I'm a bit concerned that this is going to feel like a nagging bot that keeps telling people to slightly change wording that they think is correct. Do you know what I mean?
```yaml
- name: Upload book
  uses: actions/upload-artifact@v2
  with:
    name: _build
```
wow I didn't realize it was so easy to persist artifacts in GHA haha
> Another option is that we could use the link checker from Sphinx. Jupyter Book supports users calling it with `jupyter-book build . --builder linkcheck`. It's a little finicky in my experience, but maybe that's true of all link checkers. Could simplify the build process a bit?
- Not sure if it caches: in CI, it's easy to get API-limited, GitHub in particular.
- On another project with a heavily customized Sphinx build, I came to the realization that I don't care how the link gets there, or what it is in some intermediate form: all that matters is what ends up in the HTML, for all asset types, including those injected by themes, plugins, etc. that might be added after the Sphinx link checker runs.
> What is the likelihood that this is going to be triggered on a regular basis just because people are using non-standard words in JEPs etc? I'm a bit concerned that this is going to feel like a nagging bot that keeps telling people to slightly change wording that they think is correct.
High. But that's the point. Heavy use of unexplained jargon is not helpful in a spec. If a JEP introduces new terms, then they are new terms, and there's a section for them called out in the template. We could accept all such terms by fiat, but only if they are headings under the section in question. Xeus would be a good example of this.
We could grep out capitalized/mixed-case words, which would catch names... but then a typo in a name elsewhere would not be caught, and that makes it harder to catch Jupyterhub vs. JupyterHub.
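One way to soften the nagging, sketched with a hypothetical wordlist file (the actual check_spelling.sh may work differently): hunspell accepts a personal wordlist via `-p`, so JEP-specific terms could be allowlisted once instead of being flagged on every run.

```yaml
# Hypothetical: -d selects dictionaries, -p adds a project wordlist
# (e.g. containing JupyterHub, Xeus), -l prints misspelled words.
# Note hunspell itself exits 0 even on hits, so a wrapper script
# would still be needed to fail the step.
- name: Find misspelled words
  shell: bash -l {0}
  run: |
    hunspell -d en_US,en_GB -l -p .ci_support/dictionary.txt *.md
```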
> wow I didn't realize it was so easy to persist artifacts in GHA haha
Yeah, it's decent... but artifacts aren't accessible between different workflow YAML files. And they flake sometimes. And you can't `needs` them, just a whole job, which is kind of a bummer.
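Within a single workflow file, though, a later job can depend on the uploading job and pull the artifact down (job names here are hypothetical; the artifact name `_build` is the one from the diff):

```yaml
# Hypothetical downstream job in the same workflow file:
# `needs` orders the jobs; download-artifact fetches what `build` uploaded.
check-output:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - uses: actions/download-artifact@v2
      with:
        name: _build
```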
Yeah, that makes sense - in that case I'm +1 on trying out this infrastructure and seeing how it feels in practice. Good point about the linkcheck caching and rate limits. Also just a note that when you reply to these via email, it's injecting a ton of extra stuff into your comments :-)
This adds a new workflow which performs:
- spell checking with hunspell against en-US and en-GB (the latter is more frequently updated), installed from conda-forge
- link checking with pytest-check-links, installed from pip

I've tried some caching (still learning Actions), and the total time delta seems not so long. I didn't touch the existing workflows, but they should probably share requirements files. It has been suggested that we might get reusable step definitions (a la Azure Pipelines) in the future.
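Putting the pieces discussed in this thread together, the overall shape of one such job might look like this (job name and structure are illustrative; only the trigger branches, the requirements file, and the check_links.sh script come from the PR itself):

```yaml
# Sketch of the link-checking job, not the PR's exact workflow.
name: check
on:
  pull_request:
  push:
    branches:
      - master
jobs:
  check-links:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install dependencies
        run: pip install -U -r .ci_support/requirements-check-links.txt
      - name: Find broken links
        run: bash .ci_support/check_links.sh
```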