Research norms and processes
Our research plans and findings are documented in the open as part of our GitHub repository. You can find the research at any time by changing the branch in the repository to the Research
branch and navigating into the "Research" folder.
This document describes past norms for how we've done research on the NRRD project, supplemented with resources on how we do research at 18F. These are not perfect best practices; this is just a list of tips, tricks, and common elements to help you get started.
- Plan your research
- Conduct research sessions
- Process what you heard
- Decide what to do based on the research
Planning research often takes longer than you expect, and is a crucial step for making the most of users’ time once you do sit down with them. It’s not unusual for research planning to take a week or two of dedicated time, and it often includes many conversations about goals, expectations, and logistics.
Your research plan should make these things clear:
- What big questions are you trying to answer?
- Is it generative research? Stakeholder interviews? Usability testing?
- How many people are you hoping to talk to? This can be a range, but should help set expectations for scale. Most of our research rounds have included either 3-5 or 5-8 sessions. (Resource: Why You Only Need to Test with 5 Users)
- How will you find participants? What’s the plan for recruiting them? What connections or DOI communications can you use to recruit participants?
- When do you know you’re done with this round/sprint of research?
- What should this research help you do?
There are a few kinds of research we've used heavily in the past:
- Generative user interviews
- Remote usability testing
You can find sample scripts in each sprint folder in this Research branch. Most research scripts include these sections:
- Introductory preamble
- Remind them that this is not a test of them, and there are no wrong answers.
- Remind them that participation is voluntary, and they can stop at any point if they no longer want to continue.
- Consent for note-taking or recording
- Tell them how you’ll use the notes
- Tell them whether you’ll attribute any quotes
- Consent form, if needed (for remote sessions, this likely needs to be handled in pre-interview correspondence)
- What questions to ask
- Potential follow-up questions
- Concluding thanks, questions, or invitations to follow up
Resource: Avoiding bias in the oh-so-human world of user testing
Figure out who will be participating in research, and work with them on the plan and script. If any research-team participants are new to research, make time to walk them through the plan and script so they can ask questions and understand what to expect. Set expectations for what good research note-taking is, and talk about how to work together and avoid stepping on toes during note-taking.
For some kinds of research, you’ll need to prepare something concrete to put in front of users. These materials should be identified in the research plan, so the team can be ready with the design and functionality you’re hoping to test. You’ll also need a plan to publish or share the prototype with users (in the past, we've used everything from paper print-outs to Federalist previews to the live site).
For each session, you’ll need:
- Interviewer
- Notetaker
- Location (physical or virtual)
- Participant
Resource: My Best Advice for Conducting User Interviews
- Spreadsheet of participants, with codenames and columns to track whether they’ve been contacted, interviewed, thanked, etc. Remember, this is Personally Identifiable Information (or PII): make sure this data is stored someplace appropriate (NOT GitHub), and only the core research team has access.
- Folder for research notes, with blank/template notes documents
- Never use real names in these documents
- Make sure permissions are restricted to core research team
- Don’t try to intersperse notes with the interview script, because it’s confusing (you can include the script for reference, but it’s much easier to have a blank “notes” section at the bottom of the doc for notetakers to work in)
- Invitation email (or other recruiting materials) explaining what the research is about and what to expect. Make sure your invitation email won't bias the research in unintended ways.
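The participant-tracking spreadsheet mentioned above can start as something as simple as a generated CSV. Here's a minimal sketch; the column names are hypothetical, so adapt them to your study, and remember to store the resulting file somewhere access-controlled (NOT GitHub), since it will hold PII once real contact details are added:

```python
import csv

# Hypothetical columns for a participant-tracking sheet; adjust to your study.
COLUMNS = [
    "codename",     # e.g. "P1", "P2" -- never real names in notes documents
    "user_group",
    "contacted",
    "scheduled",
    "interviewed",
    "thanked",
    "notes_doc",
]

def make_tracking_sheet(path, codenames):
    """Write a blank tracking CSV with one row per participant codename."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        for name in codenames:
            # Only the codename is filled in; status columns start empty.
            writer.writerow({"codename": name})

make_tracking_sheet("participants.csv", [f"P{i}" for i in range(1, 6)])
```

A shared spreadsheet (Google Sheets or similar, with permissions restricted to the core research team) works just as well; the point is to agree on the columns up front so recruiting status is visible at a glance.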
We often use detailed note-taking in lieu of audio/video recording (mostly for logistical reasons). It's important to align on what kind of notes the team should take so that:
- Team members who weren’t able to attend the session are able to take a look at the notes and get a very clear sense of what the user said, did, and felt
- Researchers capture all the details so that when our memories do eventually fade, we can refer back to the notes
- The notes can become the starting point for synthesis sessions that will then lead to tangible changes in the app/service
Take verbatim notes as much as possible, simply typing out most of what the person says during each session. Avoid thinking ahead, summarizing what you think they mean, or planning what to fix or change. The idea is to capture as much as possible during this precious time with users, and to avoid introducing bias by selecting what to write or not write.
It’s often impossible to get every single word; if need be, prioritize capturing more exact phrasing from the user, rather than the interviewer. Don’t fix spelling, punctuation, or grammar while taking notes — you can do this afterward. If there are two notetakers, trade off who takes notes for each answer. Avoid working too close to each other (for instance, fixing a word before the other person has finished typing the next word), as it can be distracting.
Resource: Tips to capturing the best data from user interviews
During research sessions, users are doing us a favor. Do whatever you can to ensure that they have a good experience.
- Schedule a little more time than you think you'll need — it can take a few minutes to get everyone on the line, or you may have an especially chatty participant. An extra 15 minutes can keep it from feeling rushed.
- Make sure calendar invites are clear, both to users and your fellow researchers (for PII-protection and clarity, it sometimes makes sense to use separate team-facing and user-facing invites).
- Send the correct docs to notetakers ahead of time, and make sure they know their role.
- Ask any research team members to show up 2-5 minutes early.
- Test your call-in line, screen sharing tools, or video conference before your first session.
- Confirm with participants that they’ll be able to use conferencing, video, or screen sharing tools, and come up with a backup plan if they can’t.
- Introduce everyone on the call, and make clear who's taking notes or observing.
- Leave enough time for notetakers or observers to ask follow-up questions at the end.
- Leave time for the interviewee to ask questions.
- Adapt the script as needed to make the interviewee comfortable and be respectful of their time.
- Thank them for their time and be very clear about any follow-up promised.
Resource: User Interviews: Bias and How to Reduce It
- Immediately after the session, set up a few minutes with any research team members who were in the interview.
- If it’s not possible to debrief immediately, do this by the end of the day, or within 24 hours.
- During this debrief, discuss surprises and reflect on what you heard.
- Identify a few key takeaways in writing, or by highlighting notes. It can be helpful to note these (often quotes from the user) at the top of the notes document.
- If you promised the user any follow-up communications, identify who will send them and when.
After you’ve completed all sessions, it’s time to rigorously synthesize what you heard and learned. Synthesis can involve different activities, but often includes:
- Time for all research-team members to collaboratively gather insights from each session into a single document/space (For instance, this might mean pulling each key takeaway onto a sticky note on a Mural board.)
- Some way to track either individual users (e.g. each user gets a different color sticky) or user groups (e.g. each major user group gets a color)
- Grouping, sorting, and discussion:
- You want the team to work through the big themes, notice which things were heard in multiple sessions, and identify any outliers.
- This is important to do collaboratively for a few reasons: to make this work visible to the partner, to build consensus within the team, to harness the perspectives of different listeners, and to correct for biases that can arise (especially if not every research team member was in every interview).
- It helps to have a clear facilitator, and circle back multiple times to make sure the groupings actually make sense to everyone.
- Naming themes or insights; this forces more clarity in the affinity mapping. Statements or sentences (rather than words/phrases) can help work toward useful content that will become part of a report or summary.
- Avoid including people in synthesis who didn’t participate in any research sessions. If you do need to include team members who weren’t able to at least observe research sessions, make sure they understand that their role will be limited to listening and/or asking questions.
After collaborative synthesis, a few team members can further boil down findings into a report, presentation, or summary document.
- This document should cover not just what you learned, but why it matters.
- Aim to make this document true and useful, rather than comprehensive.
- All research team members should review this document to make sure it rings true and accurately represents the research as they understood it.
- Put this document someplace where future team members can find it! (We usually post reports in this research branch, organized by sprint.)
After each round of research, the whole team — led by the PM — should identify how the research changes the next sprint’s work. This could include new bugs identified, new features to explore, or a different design focus.