In usability testing, researchers use discussion guides to direct the conversation toward areas of interest. These guides are not scripts; rather, they serve as helpful guardrails to keep the session on track.
One part of a discussion guide, though, is treated like a script: the introductory statement. The standard introduction goes something like this:
“Please give your open and honest feedback. I didn’t design this, so don’t feel like you need to sugarcoat anything. You won’t hurt my feelings.”
It’s a line I’ve read at the beginning of every usability study since I became a design researcher. But is distancing ourselves from the design, whether truthfully or not, the best way to encourage participants to be open and honest with their feedback?
I decided to do some research. This is what I learned.
Framing the Problem
It takes more than a few prompts at the beginning of a usability test to encourage participants to give honest criticism. For most people, giving feedback is uncomfortable in the best circumstances, let alone to someone you just met. But the unfiltered truth is vital to the success of my clients’ projects.
They’re not paying for validation or to hear that everything is swell. They want to see human error and hear real opinions about what’s not working before they sink resources into a project. Yet in a manufactured lab environment, faced with a two-way mirror, human nature kicks in and people aim to please.
In my experience, using the standard script line can prevent the openness and honesty it’s intended to evoke. When I remind participants that a real person put time, energy, and care into this design that they may tear apart, I’m priming them to measure their responses.
Participants still hesitate to speak bluntly, often prefacing their opinions to soften the impact (e.g., “This is probably just me, but…”). This leads to ineffective feedback that obscures and diminishes the effect that design problems have on users.
I ran this experiment during a 10-participant usability test of an InVision prototype that I had designed. The client approved the variations on the standard script. Sessions ran one hour each in LiquidHub’s in-studio usability lab, with no clients in the observation room.
Variations and Their Success
I tried out a few variations of the introductory line. If successful, the new line would encourage participants to give honest feedback without tempering their opinions out of fear of offending.
One that didn’t work well:
“We’re going to look at a prototype that I designed. I put time and energy into coming up with ideas, but I need your feedback to make it the best version it can be.”
This line was ineffective because it put unfair pressure on the participant to help me do my job. Also, identifying myself as the sole designer set an adversarial tone (i.e., your opinions versus my ideas).
The variation that worked the best:
“My job is to help the client make this as amazing as possible. We’ve had some ideas already. And when we brainstormed, we put all our ideas in the design without judgement. You will help me figure out which ideas are great, which are duds, and come up with new ideas we didn’t think of.”
Why the New Line Worked
I learned that this new line changes the way participants approach the entire usability test. With the successful line, participants began to act like design partners, adopting a problem-solving mindset instead of merely reacting to the design as a stimulus.
Participants were more comfortable leading the discussion toward areas of interest to them, resulting in more off-guide conversation (i.e., conversation that diverges from the planned discussion). These areas were ripe with insights.
The new line was also effective because people were more confident in their opinions. They associated being opinionated with being helpful, not judgmental.
It didn’t make them completely forget that there may be observers behind the mirror, but it gave them a clear purpose and understanding of how to meet our expectations. In short, it harnessed the human instinct for people-pleasing in a way that benefited the project.
The outcome of my experiment made me curious to test other accepted usability testing “rules.” For instance, in the lab we are instructed to adopt a neutral, measured persona so we do not influence participants with verbal or nonverbal affirmations.
What would happen if we broke that rule? Would doing this “lead the witness,” as we are cautioned, or would it help us build rapport? What else can we do differently to crack the participant shell faster to get open and honest feedback?
I encourage fellow researchers and designers to join me in this exploration. As long as experimentation is done with respect for users and in service of the client’s goals, we should apply our own process to iterate, validate, and test.
After all, we are the experts in process improvement. If we do not turn our methods inward, we miss the opportunity to evolve.