I’m a product designer for Pivotal Tracker. I also created Listacular and SpeechHero.

I design solutions to expensive problems with lean design practices.

Don’t ask users “would you use this?”, ask “would you pay for this?”

Validating the usefulness of a feature is still a relatively new concept for design teams. We’re certainly used to validating the usability of a design, but far less used to validating that the design is an effective and desirable solution to the person’s problem.

When design (and even business) teams do attempt to validate the business value of design, the question is frequently tacked onto the end of an interview without too much thought. This is unfortunate, because business validation is perhaps one of the most important things you can do for your product.

Many product teams who want to validate feature concepts recognize that validation is tricky, but aren’t sure how to do it. They may ask a task-based question like, “does this achieve [the objective]?” Questions like this are extremely leading and don’t lend themselves to understanding the value of the thing you’re proposing.

Recognizing this, an interviewer may then think of questions which wouldn’t be leading. Enter the oft-asked but highly ineffective “would you use this?”

“Would you use this?” is packed with ambiguity

When an interviewer asks about the business value of a design, they typically want to know if the person would find it useful. It feels natural to translate this directly into a question: “would you use this?” It’s not a leading question, and on its face, it seems like it would produce straightforward results.

The problem with “would you use this?” is that it’s vague and hypothetical, and it asks almost nothing of the user.

When you ask “would you use this?” you’re effectively asking a person to assess the actions of a hypothetical future version of themselves. This relies on someone correctly self-assessing their future actions, values, and habits. The problem is that the interviewee will always imagine the best possible scenario. This is optimism bias at work: people tend to imagine themselves in the best possible situation later in life. Of course they can see themselves using your time-saving, life-enhancing product one day down the road.

On top of this, people are famously bad self-reporters, and now you’re asking them to report on something they might do at some time in the future.

Asking “would you use this today?” alleviates the problem to a degree, but the question is still non-specific. The frequency and depth of “use” are not specified. It could mean just once, or only for a few minutes. It’s like asking “would you try this?” Of course they’d try it. It requires no commitment from the person using it.

Ask “would you pay X for this?” instead

Where X is the price of your product or enhancement. I’ve found it effective to start with a dollar and work my way up by multiples of 10.
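
To make that concrete, here’s a minimal sketch of the escalating price ladder in Python. Everything here is hypothetical illustration, not real interview tooling; the $1 starting point and 10x steps are just the defaults I described above.

```python
def price_ladder(start=1, factor=10, steps=4):
    """Yield the escalating prices to ask about: $1, $10, $100, $1,000."""
    price = start
    for _ in range(steps):
        yield price
        price *= factor

# Work up the ladder until the answer flips to "no"; the last "yes"
# approximates what the concept is worth to this person.
for price in price_ladder():
    answer = input(f"Would you pay ${price} for this? (yes/no) ")
    if answer.strip().lower() != "yes":
        break
```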

This is a powerful validation question. It uncovers whether the person would value the solution enough to use it every day.

When someone considers a purchase, their brain performs all sorts of mental gymnastics to justify it.

A big part of this purchase decision is frequency of use; another is pain relief. If the proposition isn’t compelling enough to justify forking over the cash, they won’t do it.

Even though it’s still hypothetical, you make the action more visceral for the interviewee by asking them to commit to a price they would be willing to pay. For all they know, you might come back with a finished product and expect them to pay what they said they would.

If you were actually asking for money (which many people do!), this question is what salespeople call a “direct close,” because it’s the most direct way to ask someone to decide to buy. It’s the “action” part of the AIDA (Attention, Interest, Desire, Action) framework, as popularized in the swear-laden, fear-inducing pep talk in Glengarry Glen Ross.

If you think about it, user validation acts like the world’s lowest-pressure sales pitch. You’re asking someone (who is presumably the target audience) to give you their opinion on a product you’re creating. If it meets their needs, they’ve probably built up some degree of interest and desire. Now you’re asking them to take an (almost) real action: to part with some number of dollars, right now, to have this product. Asking “this is available today, would you pay X for this?” is as close as you can get to asking for a sale in a user interview.

Also ask questions which require the user to sacrifice something

“Would you pay X for this?” is a question which requires the user to consider giving up their money, which also translates to time and effort, to get your solution. Depending on what you’re trying to test, there are things other than money that you can ask the person to provide.

On a recent project, I was validating a design for a status report. It was a fairly limited feature designed mainly for people already using the product, so asking them to pay more for something so basic felt odd. Since it was replacing a process they were already doing (status reporting to stakeholders), I wanted to see if it accomplished that goal.

Instead of “would you pay for this?”, I asked “would you forward this on to stakeholders/customers in place of your existing report?” This question requires some thought to answer, because it means replacing something they know is working now. Saying “yes” also means the feature is compelling enough that they would risk a piece of their reputation with a client or team by sending it to them. If they said “no,” I’d ask “would you unsubscribe from this email or filter it out?” to understand the degree of disinterest. I then asked what was missing from the report we designed, and how we could change it to make it forward-worthy.

Set goals before any interviews

Business validation is a very different beast from usability validation. Even if your users can easily accomplish usability testing tasks, far fewer will say they’d buy the product, or make some other sacrifice to have it. This is why you should set goals for your validation questions early, to ensure you’re not biasing the test after the results come in.

If your audience is more specialized, then presumably your solution more directly addresses their needs. It also means your audience is relatively small, and fewer people will encounter (or need) your product. In this case, you’d want to set your goal higher, like 6 out of 10 “yes” responses.

If you’re building for a broad base of users, you might set a more modest goal, like 3 out of 10 “yes” responses, as your audience is potentially much bigger, meaning a much larger volume of encounters with the product. It also means that needs may vary widely from person to person.

Categorize your results

Qualitative feedback is difficult to benchmark, and strong validation questions are no different. Rarely does “would you pay for this?” yield a direct “yes” or “no”; more often you’ll hear “yes” or “maybe” with some caveats.

To work with this challenge, I suggest categorizing your responses. This could simply be “yes” and “no,” where maybes translate to no’s. If you want more granularity than yes and no, you can break the maybes out into “maybe-yes” and “maybe-no” for a more detailed categorization.
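
A small spreadsheet works fine for this bookkeeping, but here’s a hypothetical Python sketch of the tally. The category names and the 60% goal mirror the examples in this article; the response data is made up purely for illustration.

```python
from collections import Counter

# One categorized answer per interview (fabricated sample data).
responses = ["yes", "maybe-yes", "no", "maybe-no", "yes",
             "maybe-yes", "no", "yes", "maybe-no", "no"]

counts = Counter(responses)

# Cutthroat scoring: only an unqualified "yes" counts; maybes become no's.
strict_rate = counts["yes"] / len(responses)

# Lenient scoring: give the "maybe-yes" group the benefit of the doubt.
lenient_rate = (counts["yes"] + counts["maybe-yes"]) / len(responses)

GOAL = 0.6  # set before any interviews, per the previous section
print(f"strict: {strict_rate:.0%}, lenient: {lenient_rate:.0%}")
print("validated (strict):", strict_rate >= GOAL)
```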

Here are two examples from a recent project, both for Pivotal Tracker. I was testing a few status reporting concepts with both customers and internal users (Pivotal Labs consultants).

Our team was much more hopeful about the first design, a multi-project dashboard. I asked customer and internal PMs alike, “would you pay for this?” Many internal users didn’t feel comfortable with this question (since they get the tool for free), so I instead asked “would you send an expense request to get it?”

Based on the responses of the customers I spoke to, the proposed design didn’t seem to be validating as we had hoped.

[Chart: responses to “would you pay for this?” for the multi-project dashboard]

The second design was the status report email I mentioned earlier in the article. It was produced directly from user research findings. The design itself was incredibly simple, but we were surprised by how well it tested.

Since this was a simple feature that we already had represented elsewhere in the product, it felt odd to ask them to pay for it. Instead I wanted to know if it would replace their pre-existing email reports, and asked “would you forward this on to stakeholders/customers in place of your existing report?”

[Chart: responses to “would you forward this on to stakeholders/customers?” for the status report email]

As you can see, the two charts above show almost polar opposite results. The status email was very well received. And despite its simplicity, many of the interviewees who previously said they would not have paid for the dashboard said they would pay for the email.

That being said, if we were cutthroat about sticking to our original goal (60% “yes”) and converted every “maybe” into a “no,” then our email test was invalidated with a 50% success rate.

Ultimately, gauging the success or failure of a validation is up to you and your team. We could say the email is still a valid design because it may also increase engagement. We could also say the multi-project design is valid because those who said “yes,” while few, were big customers willing to pay us an outsized amount of money for it.

Regardless of what you decide, asking strong validating questions and plotting the results gives you a lot more to work with. I’ve found it yields more direct and fruitful responses from users, both in terms of what they need from your product and whether it’s worth building at all.

Unravel customer pain faster with artifact research

A customer problem can often be elusive or difficult to tease out. You may be getting wildly different information from the people you’re speaking with, or vague answers to questions.

These murky results can happen for a combination of reasons. In many cases, it’s because people don’t know what information to provide you. They may take something for granted in their everyday work that is in fact vital information to you. In other cases, they have a specific feature in mind and are so focused on it that they leave out details they feel are unimportant (consciously or unconsciously). Even more common is customers misreporting something about their own behavior.

The fastest and most effective means I’ve found to bridge the gap between researchers and customers is artifact collection. In anthropological terms, artifacts are anything created by people which gives information about the culture of their creators and users. Artifacts’ inherent utility to researchers is that they reflect the values, workflow, and environment of the person being researched. An artifact is an expression of someone’s work important enough that they have dedicated very real time to it.

Why are artifacts so useful?

Artifacts demonstrate actual user behavior vs. reported

[Image: rep pricing guides]

The above is an example from a field study I conducted with medical device sales reps from across the US and Canada. The stakeholders felt that electronic product catalogues would help sales reps improve their numbers. It turned out that most of what they needed was better (and more available) pricing data, as exemplified by pricing sheets they would manufacture weekly and carry with them to every call.

People are notoriously bad self-reporters. Users may, consciously or not, leave out important aspects of their work or misrepresent facts in some way. Artifacts represent real qualities of someone’s actual work vs. speculation or hypothetical use.

Whenever a user or stakeholder says they need something specific, I often ask if they’re already doing it in some way. The artifact can demonstrate an actual need vs. a speculative one. This becomes vitally important when you’re deciding what to include or exclude in a feature set. Artifacts of their work are a strong signal of what they actually value and need.

Artifacts illustrate common needs across users

[Image: customer epic reports]

User research is most successful when the outcome is a discovered common need (or needs) present across a number of people. This points to a broad-based necessity for a tool or service, which can translate into a larger market for the solution.

Check out these three reports above. I asked a number of Pivotal Tracker customers which reports they create, who they send them to, and what they do with them. Each described reports addressing high-level features, which translate to something called an “Epic” in Pivotal Tracker. They detailed the Epic status (complete, upcoming), the progress made, and when it might be complete. The fact that so many customers were reporting on Epics in one way or another illustrated the common need.

Artifacts frame workflow conversations

[Image: process diagram]

A person’s workflow often revolves around artifacts, or produces them as a result of the process. These can be something as simple as a weekly report or a pricing sheet.

In the context of workflow, artifacts are anything a user produces or relies on in their day-to-day work. These can include something as simple as a pricing guide on their desk, a spreadsheet they keep open all day, or a notebook they carry with them to every meeting. Something seemingly average to them can provide tremendous insight to you.

Starting conversations around artifacts helps frame what happens before and after the production or use of an artifact. You, as the interviewer, can ask questions about specific aspects of the artifact that may be mundane or forgettable to them but important to you.

Artifacts can act as a template for designs

[Image: epic as report]

It’s usually a bad idea to take customer feedback and apply it directly to a design. But sometimes artifacts, especially if you see them regularly, can act as templates for new designs.

To dovetail off the common needs section above: I was seeing almost the same feature report over and over from customers, so I decided to adopt the general format of that report for our Epics design and test its effectiveness. The plan was to start with this skeleton design and build on it as we received more feedback and became more knowledgeable about customer needs.

Asking for artifacts is easy

It may seem invasive to ask someone to show you their work, but asking users for artifacts is simpler than you might think. Occasionally a person will be resistant to showing their work (usually because it contains confidential information), but 9 times out of 10 they’ll provide it if you ask.

The best time I’ve found to ask someone about their artifacts is during an in-person visit to their environment. If you’re lucky enough to get access to the person’s office or home, you’ll likely see things on their desk, hanging on the wall, or on their computer. When you see these possible artifacts, ask the user what they are and why they’re important. If the person primarily works on a computer, ask them which applications and sites they regularly use in their work.

When you’re talking to someone about a process they’re involved in, ask about any artifacts they use. Do they record or manage the process in some way (e.g., in a task manager or spreadsheet)? Do they use any supporting files as inputs? Are there deliverables at the end? Who is the customer of these artifacts, who produces them, and what about them is important? All of these questions help visualize the important parts of someone’s process.