If You’re Always Right, You’re Doing It Wrong

Most of us have experienced some flavor of this situation: A team learns about the Lean Startup Methodology and they start running experiments. Magically, the team ends up proving that everything they believed is true. But strangely, when they build the full product, no one buys or uses it. What happened?

I recently conducted an AMA (Ask Me Anything) for The Leader’s Guide community where this topic was discussed. I’d like to take a moment to share a few snippets from the conversation, and expand upon that dialogue.

When I hear about a team that’s always right, the first thing I like to explore is the design of their experiments.  

Often teams don’t write down their hypotheses or their success metrics ahead of actually executing the experiment. Without documented success criteria, the team has ample room to interpret their findings in whatever way they’d like.  

Before you run an experiment, convert your assumptions into hypotheses: statements that can be proven true or false. Then document, in advance, what you will consider a “successful” outcome. Otherwise teams end up seeing what they want to see and justifying their assumptions as true.

This phenomenon is especially common in environments where failure is not safe, or where success is valued over truth.

After exploring a team’s hypotheses and success criteria, I like to get a look at the experiment itself. It’s not uncommon for teams to create situations that bias users toward doing the desired behavior or giving the desired feedback.

…one simple explanation is that if they’re doing an interview, perhaps they’re asking leading questions. You’ve probably heard the mantra, “get out of the building.” And darn it, you should go talk to your customers!  Most folks have received little to no training on how to talk to customers, so they end up asking leading questions.

Thankfully, there’s an easy fix for this: most large organizations have an internal research team or specialist, and those individuals are usually happy to review the language in your experiment.

Assuming the design of your experiment looks good, there can still be profound challenges in the interpretation of the data. This is especially true in the early stages of a project when you’re hunting for directional feedback rather than quantitative validation.

When you talk to a person who is in your target demographic, and they say, “Yeah, I’d use that.” …that actually means: No, they will not use it.

You’re looking for the enthusiastic, jumping out of their seat, OMG TAKE MY MONEY reaction.

We love our own ideas, so it’s easy to fall into the trap of interpreting feedback as positive even though it’s actually lukewarm (lukewarm means ‘no’).  

Even if you ask for honesty, people don’t want to hurt your feelings and are likely to sugar-coat the answers they provide.

At this point you can see there are all kinds of subtle traps that influence the results of an experiment, but we aren’t done yet:

More teams fall victim to the “Crystal Ball” than any other error. This is when you ask your customer about a future behavior, e.g., “Will you use this product?” Questions like this are about as accurate as flipping over an old-fashioned Magic 8-Ball, and are the equivalent of asking your target customer to look into a crystal ball and tell you what they see.

People are well-intentioned but SUPER bad at predicting what they will do in the future. Look no further than New Year’s resolutions: well-intentioned, but no follow-through. So if you want to understand what someone *WILL* do, ask about what they have done in the past.

Ex: Ask someone, “How often do you work out?” They might say “three times per week.” Ask that same person how often they worked out last week: “Once.”

One classic technique to avoid asking about future behavior is to ask how people solve the problem today.  

If the answer is that they simply do nothing, you’ve learned something profound: the problem is not meaningful enough for them to take any action, which suggests it isn’t that big of a problem at all. Will that customer pay for your solution? Maybe, but if I were betting on it, I’d say probably not.

Although this might be hard to imagine, the most successful teams that I’ve worked with prove their hypotheses are incorrect 80-90% of the time.  

It’s common for teams to struggle with this at first, but as we get increasingly sophisticated in our practice of Lean Startup principles, I’d expect that the number of experiments in which we disprove our beliefs will actually go up rather than down. While this might sound like a lot of failure, I prefer to think of it as a lot of learning. If you prefer, think about the piles of money and mountains of time that we just saved by not building the wrong thing.

Written by Lean Startup Co. Education Faculty Elliot Susel.

If you seek to bring the entrepreneurial spirit to your large, complex organization, Lean Startup Company can help. We offer live and virtual training, coaching, and consulting to empower people and companies to solve their own problems using entrepreneurial management, no matter their industry, company size, or sector of the economy. Email us.

Want more insights? Sign up for our newsletter