We are at risk of only listening to what we want to hear. The main lesson I learned from conducting almost 100 customer interviews for product discovery over the last six months is that there are many ways in which we can become biased during customer interviews.
I have read many books, articles and blog posts as well as listened to podcasts about how to effectively talk to customers, but doing a large number of customer interviews myself was how I realized just how easily we can be led astray. I am going to explain how I try to stay objective while talking to customers, in the hope that this can be useful to you in some way.
Note: Although I am using the word ‘customer’, I am referring to a ‘potential customer’ as I am writing about talking to customers to validate product ideas. I will use the word ‘customer’ throughout this article for simplicity.
Having a paying customer for a product we have built is very strong validation that our product is valuable. Whether the product will achieve product-market fit or earn enough revenue to sustain itself is a different story and is not the topic of this article.
There are other activities that suggest a customer finds our product valuable, but if they do not involve the customer paying for our product, they provide slightly weaker validation. For example, a signed Letter of Intent (LOI) to buy a product.
You are probably wondering why I am philosophizing about paying customers and LOIs when this article is supposed to be about customer interviews. You got me there. What I am trying to do is argue that there is a spectrum of customer validation activities, with one end representing strong validation and the other much weaker validation.
The cards are stacked against us here: positive customer feedback on our product idea during a customer interview sits at the weaker end of the validation spectrum. It’s possible that a customer’s first impulse is to react positively to our product idea, but later, as they think it through, they see flaws or realize they don’t really need a product like this. It could also be that they’d rather compliment us on our product idea than tell us that our idea is no good.
Whatever the reason, it is crucial that we are aware of this challenge and apply careful strategy during our customer interviews in order to distill the truth about our product idea.
Let’s examine what we can do.
First and foremost we need to delay bringing up our product idea until we have learned about what the customers’ problems really are. This is the first step in avoiding positive customer feedback that may not be true.
If we start by talking about our product idea right away, the natural next step is to ask for feedback on our product idea. By asking for feedback, we are pushing the burden of truth onto the customer and putting them in an emotionally exhausting and difficult situation, as Rob Fitzpatrick (author of The Mom Test: How to talk to customers & learn if your business is a good idea when everyone is lying to you) explains in Episode #154 of the Indie Hackers podcast.
Likely at that point the customer has discovered that we have invested time or money (or both) in our product idea. They may tie our product idea to us as a person, and therefore find it next to impossible to give honest feedback if they do not like our idea. They will tend to compliment us rather than provide negative feedback as it may hurt our feelings.
Think of the last time a friend or family member told you about some awesome business idea that they had. If you did not think that it was an awesome idea yourself, did you tell them? Not likely. It is basic human psychology, so we cannot blame our customers; we can only blame ourselves for bringing up our product idea too soon.
Bringing up our product idea at the start of a customer interview can also lead to the customer misinterpreting our attempt for early feedback as an attempt to sell the product to them. Customers quickly become defensive if they feel that they are being sold to. They may say that the product sounds like a great idea and that they are interested, but really they are just trying to get us off their backs.
I fell into this trap before and have tried to reassure customers that I am not trying to sell them anything and am simply looking for feedback, but I felt that this made me sound even more like a salesperson. You may have better luck though.
Now you are probably wondering: Well, Lena, what in the world should we talk about then if we can’t talk about our product idea?
I’m glad you asked. But before we dig into how to conduct actual customer interviews, there is one important task we need to complete before we begin that will help us stay focused during the customer interviews to come.
We usually have a few hypotheses around our product idea in the back of our minds, which we need to write down at this stage. I like to list up to three hypotheses and make them quite specific. We should also note the target audience we suspect will validate our hypotheses and what their characteristics are.
We will typically not get these hypotheses right from the very beginning because we will form them based on meagre data (e.g. a few anecdotes) or even just a hunch. But that’s okay, it’s a starting point. As we talk to more customers and gather more data, the hypotheses will transform; some will get modified, some will get invalidated, and new ones will be formed. We will keep a record of all this.
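To make this record-keeping concrete, here is a minimal sketch in Python of the kind of hypothesis log described above. The field names and statuses are my own hypothetical choices, not a prescribed format; any notebook or spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One product hypothesis, recorded before interviews begin."""
    statement: str        # a specific, falsifiable claim about the customer's problem
    target_audience: str  # who we suspect will validate the hypothesis
    status: str = "open"  # "open", "validated", "invalidated", or "modified"
    notes: list = field(default_factory=list)  # evidence gathered from interviews

# Up to three specific hypotheses, written down before the first interview.
hypotheses = [
    Hypothesis(
        statement="Parents of small children need a better way to manage "
                  "daycare pick-up and drop-off.",
        target_audience="Working parents with children in daycare",
    ),
]

# After an interview, record what we learned and update the status.
hypotheses[0].notes.append("Father of 18-month-old: lives close to daycare, "
                           "flexible work; pick-up/drop-off not painful for him.")
hypotheses[0].status = "invalidated"
```

The point of the explicit `status` and `notes` fields is simply to keep a paper trail as hypotheses get modified, invalidated, or replaced over the course of many interviews.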
It’s best to start all of our early customer interactions with behavioural interviews. We ask customers to talk about their day-to-day experience or something general that we want to learn about them. The goal here is to understand the customer in terms of how they live their life, what problems they face, and what their motivations are behind solving them.
Even though we have listed a few hypotheses about the problem we believe the customer has, we will not blatantly ask whether our hypotheses are true. Instead, we will let the customer speak about their life and have them indirectly validate or invalidate our hypotheses. We cannot assume that our hypotheses are correct.
Here is an example from when I did a few behavioural interviews with parents of small children in daycare to test an initial hypothesis that they needed a better way to manage the pick-up and drop-off of their children to and from daycare.
In the first interview that I conducted with a father of an 18-month-old boy, I started broadly by asking him to describe his day-to-day schedule. It turned out that he lives very close to his son’s daycare so getting his son to and from daycare was not a huge pain for him. Although his wife’s work is not very flexible, his work is quite flexible because he runs his own agency and is able to work from home, which also makes things easier. My initial hypothesis, at least with this one customer, was invalidated.
He went on to describe the rest of his day and I learned he faced a different problem that was really troubling him. He was unsure of the level of social interaction his son was getting at daycare, which led him to wonder whether the cost of daycare was worth it.
Interesting. By starting with a behavioural interview, I was able to obtain unbiased customer data to use to validate or invalidate my initial hypothesis. The new information then helped me to form another hypothesis and continue learning.
In the past I have sometimes been impatient in validating my hypotheses and have been convinced that a customer’s behaviour was in line with one of my hypotheses. I would skip the behavioural interview and ask the customer to describe how a certain problem impacts them and what they are doing about it. I either ended up making an incorrect assumption, or lost sight of how the problem fits into their day.
If we focus on listening and ask the right questions, the problems will come.
Now, to confirm whether the customer really cares about the problem.
Problems are frustrating and may sometimes make people feel emotional, causing them to exaggerate the magnitude of the problem. This applies to customers too, so it is important to carefully measure the magnitude in an objective way before thinking or talking about potential solutions.
Anchoring a problem helps to determine how painful a problem the customer has brought up really is, an excellent technique Fitzpatrick describes in his book. The goal is to avoid ambiguous and generic comments from customers and focus on obtaining concrete facts, especially related to the past.
When a customer brings up a problem, no matter how frustrated they seem by it, we can ask a few simple anchoring questions: how often does the problem occur? When did it last happen? What did it actually cost them?

These questions, or any variation of them, help us bring the conversation back to reality and get a sense of whether they truly care about this problem or are just on a bit of a rant.
An example might help to illustrate how to use this technique. I once had a customer interview where the customer exclaimed: “I hate it when that happens, it’s just the worst!”. I was convinced I was onto something. However, I remembered to anchor the problem and asked the customer how often this happens to them and when it had last happened. They dismissively replied that it happens about twice a year and the last time was a few months ago.
I then asked how they dealt with it, to which they flatly replied that they just had to do a manual workaround in Excel for a grand total of two minutes and it didn’t bother them that much.
Aargh. How disappointing. But the good news was that I didn’t even have to ask my third question and I avoided going down a rabbit hole.
Anchoring can help us make corrections for exaggerated customer responses by providing a reality check.
Determining whether a customer is looking for a solution to their problem is critical because it sets the stage for us finally revealing our product idea. We need to first gain an understanding of what they are doing today to solve it (if anything). Fitzpatrick recommends asking “how are you coping with this today?” to direct the conversation.
Once we have asked this, we should sit back and listen to the customer and ask for clarification or details where needed. This should be familiar from when I described how to conduct behavioural interviews by not diving into the problem right away. Ideally, we want the customer to uncover what challenges they face with their current solution.
Next we need to find out how satisfied the customer is with their current solution. The less satisfied they are, the more likely they are in need of a better solution.
There is a great interview with Fitzpatrick on Episode #18 of the Bright and Early podcast where he describes the importance of this. He goes further, recommending that we ask about any failed attempts at fixing the problem. Learning about these failed attempts is a good way to discover what unmet needs the customer may still have and what might be missing from the solutions they have tried. We can use this information later to shape our product idea to fit customers’ needs better.
We might sometimes find a customer who has “cobbled together their own makeshift solution”, as Fitzpatrick puts it in his book. These customers care so much about the problem, that they have taped together their own solution. Oftentimes it is an incomplete and temporary solution and they are searching for a more sophisticated solution.
A useful data point that can inform just how much they care about finding a more sophisticated solution is how much time or money they are spending on the current solution and any efforts to search for a better solution. The more time or money they are spending, the more they care about solving this problem. We might, for example, ask how many people are engaged in operating their current solution and what their wage is to arrive at a dollar value. This dollar value can act as a loose proxy for how much they may be willing to pay for a solution.
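As a rough sketch of that arithmetic, here is the people-times-wage calculation in Python. The numbers are entirely hypothetical; the point is only that labour cost gives a loose dollar-value proxy for willingness to pay.

```python
def cost_of_current_solution(people: int, hourly_wage: float,
                             hours_per_week: float,
                             weeks_per_year: int = 48) -> float:
    """Annual labour cost of operating the customer's current makeshift
    solution -- a loose proxy for what they might pay for a better one."""
    return people * hourly_wage * hours_per_week * weeks_per_year

# Hypothetical example: two people spending five hours a week at $40/hour.
annual_cost = cost_of_current_solution(people=2, hourly_wage=40.0,
                                       hours_per_week=5.0)
print(f"${annual_cost:,.0f} per year")  # 2 * 40 * 5 * 48 = $19,200
```

Treat the result as an order-of-magnitude signal, not a price point: a customer burning tens of thousands of dollars a year on a workaround clearly cares more than one spending two minutes in Excel twice a year.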
I have found myself once in a situation where a customer had created a homegrown solution using software that was not meant to be used like that. The solution was missing a lot of functionality and therefore was not a long-term solution. They had a senior engineer working full-time to search for better solutions. In this example, it was clear that the customer cared a lot about solving the problem.
I hinted at this earlier: not every customer is looking for a solution to their problem. If they are not doing anything to address the problem, it may be because they are not terribly inconvenienced by it. There may be other, bigger problems that are higher on their priority list. This is a sign to move on to either a different customer or a different hypothesis.
The reason I suggest a different customer first is that we may very well just be talking to the wrong customer profile. In the next section I will explain how paying attention to each customer’s profile can help us identify the customer segment concerned with a specific problem.
Simply hearing about a painful problem from one customer is not enough to validate our product idea. We must hear it from multiple customers; the more the better. The goal is to identify the customer segment with the most painful and urgent problem to solve. This is our beachhead opportunity, and it is what our Minimum Viable Product (MVP) should target.
As we conduct customer interviews, we keep a running list of problems we hear from different customers. By including information about how painful the problem seems to be and whether customers are looking for a solution, we can begin to rank the problems based on this. The list will evolve over time as we speak to more customers.
Not every customer we talk to will have exactly the same profile, though. And customers with different profiles will experience problems differently. Next to each problem on our running list, we will note down the characteristics of the customers who have that problem.
Characteristics can include demographics (e.g. gender, age), job title, the size of the organization they work at, personality traits (e.g. ambitious, organized, forgetful), and so on. We can also look at deeper characteristics, like what motivates them to solve their problem, or whether they are a technology enthusiast or early adopter.
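One minimal way to keep such a running list is sketched below in Python. The entries, the 1-to-5 pain scale, and the profile fields are all hypothetical placeholders for whatever you actually record; the only idea being illustrated is ranking problems by pain and by whether the customer is actively seeking a solution.

```python
# Running list of problems heard in interviews. Each entry notes how painful
# the problem seemed (1-5), whether the customer is actively looking for a
# solution, and the profile of the customer who reported it.
problems = [
    {"problem": "unsure of child's level of social interaction at daycare",
     "pain": 4, "seeking_solution": True,
     "profile": {"role": "parent", "work": "flexible, runs own agency"}},
    {"problem": "managing daycare pick-up and drop-off",
     "pain": 1, "seeking_solution": False,
     "profile": {"role": "parent", "work": "lives close to daycare"}},
]

# Rank by pain first, then by whether the customer is seeking a solution
# (True sorts above False when reversed).
ranked = sorted(problems,
                key=lambda p: (p["pain"], p["seeking_solution"]),
                reverse=True)
print(ranked[0]["problem"])  # the current best candidate for a beachhead
```

As more interviews come in, re-running the sort over the growing list is how the patterns mentioned above start to surface.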
After some time, patterns will start to emerge around the customer segments that experience each problem. We should be able to identify the biggest problem of them all, along with the associated customer segment, and this is our beachhead opportunity.
Hopefully at this point we have managed to obtain unbiased data from customer interviews, transform our hypotheses using that data, measure the magnitude of each problem to ensure it is one the customer cares about, and pinpoint the biggest problem we want to solve, along with the target audience to solve it for.
This customer framework is one that helps me stay on track during customer interviews and remain as objective as possible. I hope that some of the techniques can help you conduct effective customer interviews too.
No matter how much experience I gain from customer interviews, it is still ridiculously easy to get attached to my product idea and get tunnel vision. I continuously read articles and listen to podcasts so that I can learn about new tips and techniques.