Team update - Oct 05, 2022 #161
### Updates from last week ✔️
### Challenges I faced and things I'd like assistance with 🙏
### My availability for next week
### Important items for discussion 💬

---
### Thanks I'd like to give 🙌
### Updates from last week ✔️
### Challenges I faced and things I'd like assistance with 🙏
### My availability for next week
### Important items for discussion 💬

@kulsoomzahra @isabela-pf @steff456 @gabalafou

---
Sorry I'm late on this. I moved some meetings around without leaving myself time to write my update before the meeting.

### Thanks I'd like to give 🙌
### Updates from last week ✔️
### Challenges I faced and things I'd like assistance with 🙏
### My availability for next week

---
@kulsoomzahra I'm not sure if you saw this in the team meeting, but I discovered that the design we were discussing for the running tabs panel has already been implemented in Notebook 7. To see it, follow the Binder link in the Discourse thread announcing the Notebook 7 pre-release, then take a look at the Running tab. :)

---
@trallard, here's my follow-up to our conversation: my proposal for the rest of this month.

Here's what I would like my deliverable for the end of this month to be. I want to create a working service such that, when I open a PR that fixes an accessibility issue in JupyterLab, I can ask the service to run a set of tests and post a comment on the PR showing one failing test before the PR and all tests passing after the PR.

For a first example, I will use my tab trap test plus fix. I will write up a document (in Markdown) explaining this test and how to perform it manually. I will set up the testing service so that it can run only tests marked as regression tests. I will add a feature to the testing service that performs a before/after comparison, highlighting failed tests and providing links to the Markdown documentation for each failed test.

In the end, my demo of this deliverable will be a PR on my fork of JupyterLab. On that PR, you will see a comment from the testing service showing one test (tab trap) failing before the PR and passing after it. In that comment, you will get a link to the Markdown document that explains the failing tab trap test.

To recap, here are the components of this deliverable:

- A Markdown document explaining the tab trap test and how to perform it manually
- A testing service that can run only tests marked as regression tests
- A before/after comparison feature that highlights failed tests and links to their documentation
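To make the "run only tests marked as regression tests" idea concrete, here is a minimal sketch of one way it could work. This is not the actual service code; the decorator name, the registry, and the stand-in test body are all assumptions for illustration (a real setup would more likely use something like pytest markers or Playwright tags).

```python
# Hypothetical sketch: tagging tests as regression tests so the service
# can run only that subset. All names here are illustrative assumptions.
REGRESSION_TESTS = []

def regression(fn):
    """Decorator that registers a test function as a regression test."""
    REGRESSION_TESTS.append(fn)
    return fn

@regression
def test_no_tab_trap():
    # Stand-in for the tab trap check described above: keyboard focus
    # should advance through distinct elements, not cycle in place.
    focus_order = ["cell-1", "toolbar", "cell-2"]  # fabricated example data
    assert len(set(focus_order)) == len(focus_order)

def run_regression_tests():
    """Run only the registered regression tests; map test name to pass/fail."""
    results = {}
    for fn in REGRESSION_TESTS:
        try:
            fn()
            results[fn.__name__] = True
        except AssertionError:
            results[fn.__name__] = False
    return results
```

The key design point is simply that regression tests are an explicitly marked subset, so the service never has to run the full suite to produce its PR comment.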
As far as the end user (in this case, a dev or reviewer) is concerned, the only UI for this deliverable is on the PR. It will look something like the following.

> PR #X on gabalafou:jupyterlab:fix-tab-trap-branch
>
> **comment (me):** @jupyter-a11y-testing please run before/after
>
> **comment (jupyter-a11y-testing):**
> Before this PR: 1 failing test
> After this PR: No failing tests

---
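The before/after comment body above could be produced by a small comparison step. The sketch below is an assumption about how that step might look, not the service's real implementation; the function name, parameters, and the docs-link mapping are all hypothetical.

```python
# Hypothetical sketch of the before/after comparison: given test results
# from the base branch and the PR branch, build the markdown comment body.
def format_comparison(before: dict, after: dict, docs: dict) -> str:
    """Summarize failing tests before and after a PR.

    before/after map test names to True (pass) or False (fail);
    docs maps test names to links for their markdown write-ups.
    """
    lines = []
    for label, results in (("Before this PR", before), ("After this PR", after)):
        failed = [name for name, ok in results.items() if not ok]
        if failed:
            plural = "s" if len(failed) > 1 else ""
            lines.append(f"{label}: {len(failed)} failing test{plural}")
            for name in failed:
                link = docs.get(name, "")
                # Link each failing test to its markdown documentation.
                lines.append(f"- {name}" + (f" ([docs]({link}))" if link else ""))
        else:
            lines.append(f"{label}: No failing tests")
    return "\n".join(lines)
```

For the tab trap example, `format_comparison({"tab_trap": False}, {"tab_trap": True}, {"tab_trap": "https://example.com/tab-trap.md"})` (with a made-up docs URL) yields a body whose first line is `Before this PR: 1 failing test` and whose last line is `After this PR: No failing tests`, matching the mock comment above.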
@trallard, to respond specifically to your question about using personas: I have to think some more about it. The blog post frames the personas as being useful in the design process and doesn't say anything about using them in the testing process, but maybe there is a way to incorporate the general idea into a testing framework. I'll have to think about that some more.

---
This is a @Quansight-Labs/czi-a11y-grant team sync 🎉🎉🎉! This is a way for the Team Members to provide status reports on what they've been up to this week and request help and attention for things they are working on. This issue will be closed at the end of the day.
Copy and paste the template below, and answer questions as you wish!
### Response Template

### 🔍 Needs Triage
The Needs Triage issues require an initial assessment and labeling.

No issues need triage! 🎉

### 🎯 Project boards
Please make sure the boards reflect the current status of the project.

- Jupyter a11y grant project