[misc] Getting developer & user feedback on solutions #853

Open
gregwhitworth opened this issue Sep 29, 2023 · 1 comment
gregwhitworth commented Sep 29, 2023

In our telecon today I took an action item to open an issue around gathering feedback on and testing of our solutions as they begin landing in browsers. There was discussion around when this should occur, how it should be conducted, and the correct venue for it.

To help ground the conversation, let's use <selectlist> as a concrete example for meaningful discussion of how best to approach this. As discussed in the meeting, there are a variety of reasons for and ways of gathering feedback. First, let's look at the reasons:

What we want to learn

  1. Are there use-cases that the author wasn't able to achieve?
  2. Are the solutions accessible?
  3. Did the author find building their solutions to be complicated?
  4. How satisfied were they with the solution (CSAT)?

When we want to learn this
In my opinion, we would want to conduct this research when:

  • There are no major outstanding issues for <selectlist>, either in Open UI or in the WG/WHATWG where the formal specification is landing, that would impact the stability of the implementation
  • There is an implementation in at least one browser behind an opt-in mechanism (e.g. a flag); a demo could confirm the opt-in implementation is present, as shown in the sketch after this list
  • We would want the results of this research prior to a browser shipping to stable
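
As a small aside on the opt-in point above, a demo or test page could gate itself on the flag-enabled implementation actually being present before asking participants to run through tasks. A minimal sketch, assuming the prototype exposes an HTMLSelectListElement interface when the flag is enabled (the interface name is an assumption and may change):

```html
<!-- Minimal sketch: only run the feedback demo when the flag-gated
     <selectlist> prototype is actually available in this browser.
     HTMLSelectListElement is an assumed interface name; adjust it to
     whatever the experimental implementation actually exposes. -->
<script>
  if (typeof window.HTMLSelectListElement !== "undefined") {
    console.log("<selectlist> prototype detected; running the feedback demo.");
  } else {
    console.warn("<selectlist> is not available; enable the experimental flag first.");
  }
</script>
```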

How to conduct this research
There are numerous venues, each with their own strengths and weaknesses, for providing valuable insights to Open UI. Some of them are:

  • User Studies: These are formal studies run by user research firms where the user or author is guided through specific tasks that shed light on the specific ways in which they utilize <selectlist> and the issues that they find. The screens are typically recorded and transcribed to make finding answers easier.
    • Pros: This allows us to test end-users as well as developers to understand their opinions on the feature. It will be comprehensive, and we can provide specific tasks, which is especially valuable for developers since these will be brand new features. It can answer all of the above questions but lets us focus heavily on 3 & 4.
    • Cons: This is not cheap, so it's probably outside of Open UI's scope to fund; we would have to rely on a member doing this research. While the member will find business value in doing it, the raw research is often not made publicly available for legal reasons. Additionally, the number of users that can be involved will be limited since it's a high-touch process.
  • Request for demos: These are informal venues where the community group can use coding platforms such as CodePen to run challenges that encourage developers to build cool solutions in contests (a submission might look something like the sketch after this list).
    • Pros: This will produce a lot of example output for the new feature, and the Open UI community group can review the results, which will allow us to at least partially answer questions 1 and 2. This will be cheaper, and thus Open UI can probably make it happen.
    • Cons: We will need to set up in-depth documentation and guidance for developers to ensure that they know how to use the feature before asking them to produce cool examples. There is no built-in way to gather feedback on questions 1, 3 & 4.
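
For reference, a "request for demos" submission might look something like the sketch below. The slot and behavior attribute names reflect one anatomy that has been discussed for the element and are illustrative only; the markup browsers ultimately accept may differ:

```html
<!-- Illustrative only: one possible <selectlist> anatomy. The slot and
     behavior attribute names are assumptions and may not match what
     browsers ship behind their experimental flags. -->
<selectlist>
  <button slot="button" behavior="button">
    <span behavior="selected-value">Choose a fruit</span>
  </button>
  <div slot="listbox" behavior="listbox">
    <option>Apple</option>
    <option>Banana</option>
    <option>Cherry</option>
  </div>
</selectlist>
```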

Open UI's goal is to solve the majority of use-cases and make them accessible by default across all form factors. As such, we should have concrete success metrics:

  1. There are no <selectlist> use-cases that aren't achievable
  2. The solution is accessible across all form factors
  3. Authors and users provide a CSAT of 3.5+

This is meant to be a kickoff to begin formulating how we can ensure that we're producing a successful solution, so please provide your feedback on any questions I may be missing, as well as on principles, timelines, and success metrics.

gregwhitworth added the Misc label on Sep 29, 2023
github-actions bot commented Mar 28, 2024

There hasn't been any discussion on this issue for a while, so we're marking it as stale. If you choose to kick off the discussion again, we'll remove the 'stale' label.

github-actions bot added the stale label on Mar 28, 2024