The Root of Good Product

Most product frameworks are fine. They do not fail because they are wrong. They fail because they give you a place to put answers before you have anything worth putting there. If the thinking going in is shallow, the output is a well-organized version of shallow.

I have watched smart people adopt every new process, every prioritization method, every strategy template. I have been one of these people. We fill in the boxes. We build the trees. We present the output. The products are mediocre anyway. The framework did its job. The person inside it did not.

The difference between people who build great products and people who build mediocre ones is not which framework they use. It is the quality of their thinking. And thinking quality is a trainable skill. Not a gift. Not a personality trait. Not the residue of having good taste. It is a discipline you develop through specific, repeatable practices over time.

No framework teaches you to think well. Frameworks organize thinking that already exists. If you want better product outcomes, you have to develop the thinking first.

If this is true, most of what the industry spends time on (perfecting process, debating how to use it, optimizing prioritization methods) is wasted calories if the thinking going into it is shallow.

Two products

Let me start with evidence instead of theory.

I worked on two products that show the difference between thinking that looks right and thinking that is right.

Product A was a community platform, pre-product-market fit. The team was strong: a behavioral scientist, analytical minds, early funding, a mountain of external research. They had identified a broad macro-economic trend and assumed their users felt it as a problem. The research confirmed the problem was real. The data said the market was large.

What nobody had done was spend enough time with actual users to discover that the people they were building for did not experience the problem the way the team described it. Some did not experience it as a problem at all. The research was rigorous. But it was rigorous about the wrong thing: precise answers to a question their users were not asking.

I got swept up in it. These people had real credentials, a massive amount of research data, conviction. So I deferred. I let the appearance of rigor override the alarms in my own thinking. What I should have done was ask the basic questions: do the people we are building for actually feel this problem? How often? How deeply? I did not ask because the team's confidence made my questions feel unsophisticated.

The social cost of basic questions kept the important questions out of the room.

Product B was a service management platform. The team partnered with a union and embedded with their target users. They were in the community Discord every day. They had dinner with users. They knew individual people by name, their workflows, their workarounds.

The result was not what you might expect from all that qualitative immersion. It was not a bloated product stuffed with every feature request. It was the opposite. They could identify one specific problem that nearly everyone experienced deeply. They built precisely the thing that solved it. The first week it shipped, word-of-mouth referrals poured in. Not because of marketing. Because the product did the right thing for the right people.

Product B did not succeed because they ignored data or avoided frameworks. They used both. Their advantage was that the thinking preceding the data and the frameworks was grounded in reality. They had built such a detailed mental model of their users that when a product question came up, they did not have to guess. They could feel the answer the way a musician who has practiced a song a thousand times can feel when a note is wrong.

That feeling was not a gift. It was the output of immersion. And it made every framework they touched more useful, because the inputs were real.

You can do a lot of research without doing much thinking. The two are not the same activity.

What quality product thinking is

So what did Product B's team actually have that Product A's did not? Not more intelligence. Not better frameworks. A specific kind of clarity that I can now break into three components.

Territory knowledge. Not the market as described in an AI summary or a market report (those are great later). The actual, specific ground your product occupies. The daily frustrations of real people. The workflows they have cobbled together from four different tools. What they do before they open your product and what they do after they close it. You get this from proximity, not from research decks. Product B had it. Product A did not.

Experience simulation. The ability to suspend your own assumptions long enough to run someone else's experience through your mind without coloring it with your preferences. This is not empathy in the warm interpersonal sense. It is a discipline. Most product people are bad at it because they have spent years training themselves to think like builders, not like the person trying to get something done at 9pm on a Tuesday. Product B's team could simulate their users' reactions because they had enough territory knowledge to make the simulation accurate.

Fog tolerance. The willingness to stay in uncertainty without reaching for a premature answer. Most bad product decisions are not rooted in choosing the wrong option. The wrong option is often the result of choosing too early, before the problem was understood, because the discomfort of not knowing became unbearable. Product A locked onto a problem statement before validating it because uncertainty felt like a lack of progress. Product B sat with the mess longer.

These three work as a system, not a list. Territory knowledge makes experience simulation more accurate. Better simulation makes it easier to know when you have not yet seen clearly, which is what fog tolerance actually requires. You cannot tolerate ambiguity if you cannot tell the difference between "I have not looked hard enough" and "this is genuinely unclear." Territory knowledge gives you that calibration.

Why you do not have it yet

If thinking quality is trainable, why is it rare? The answer is not that it is mysterious or innate. The answer is that almost everything about how product work is structured prevents its development.

It starts with two stories that let you off the hook.

The first: good product thinking is a gift. Some people have taste, intuition, a feel for what users want. You either have it or you do not. This story is comforting because it removes responsibility. If the skill is innate, there is nothing to practice.

The second: good product thinking is a process problem. Adopt the right framework, the right research method, the right sprint cadence, and good products follow. This story is also comforting because it makes the answer external. Buy the right tool. Attend the right workshop.

Both stories are wrong in the same way. They locate the cause anywhere except in the quality of the thinking itself. And both let you off the hook: one says there is nothing to practice, the other says the practice is choosing better tools.

But the stories are only part of it. There are also three beliefs that product environments can reinforce accidentally.

I am the customer expert.
This one is most dangerous when it feels most certain. The moment you stop being surprised by what users tell you is usually the moment you have stopped listening. Expertise calcifies. The person who knew the customer deeply two years ago often knows a caricature of the customer today, and the caricature feels like knowledge because it used to be. This is why the practice has to be ongoing. Sustained contact keeps territory knowledge fresh and allows you to run accurate simulations instead of stale ones.

If I cannot measure it, it does not matter.
Here is the thing about measurement: it is essential for confirming what you have already understood. It is terrible for discovering what you do not yet understand. The most valuable signals are almost always qualitative before they become quantitative. A user's inarticulate sense that something is not quite right matters enormously. No dashboard will surface it for you. If your epistemology starts with "show me the data," you will only ever see the things you already knew to measure.

I need to have an answer.
The pressure to perform competence in meetings, in documents, in Slack is relentless. It produces a specific kind of bad output: answers that sound sophisticated but miss the actual question. Teams reach for abstractions and jargon because they signal intelligence. Meanwhile the simple question, "what problem does this actually solve, and how often does that problem actually occur?" goes unasked. Not because it is hard. Because asking it feels like admitting the team does not know, and admitting you do not know is expensive in most organizations.

These beliefs are not character flaws. They are defenses, built from past experience, the desire to be seen as proficient, and the pressure to move fast.

Once you see this, the mystery dissolves. Thinking quality is rare for the same reason any counter-cultural practice is rare. Not because it is hard to understand. Because it is hard to do in an environment that discourages it.

You cannot fix this by trying harder within the existing incentive structure. You have to practice against the grain.

How to develop quality thinking

So how do you actually train it? And how do you know it is working?

The mechanism is pattern accumulation through repeated contact with reality. That sounds abstract, so let me make it concrete.

"Get closer to the customer" is the most common advice in product, and I hate it. A good product person can think of a dozen ways to "get closer to the customer" and will default to the ways that yield volume and speed. People hear it and schedule three user interviews, rush them, and subconsciously try to confirm what they already have in mind. Most proximity advice treats closeness as a discrete action. Do five interviews. Run a survey. Read the NPS comments. These are fine activities. But they do not produce territory knowledge any more than reading a travel guide produces knowledge of a city. They produce familiarity with descriptions of the territory. That is a different thing.

Real territory knowledge comes from sustained, repeated, unglamorous immersion. Reading support tickets every week, not in a summary but the actual tickets. Joining the communities where your users talk to each other without you present, and immersing in them weekly. Watching someone use your product in silence, resisting the urge to explain what they are doing wrong. Having the same conversation with the fifteenth user and noticing the thing that is slightly different about how this person describes the problem.

The fifteenth conversation matters more than the first five. The first five give you the obvious patterns. Everyone says roughly the same thing and you walk away thinking you understand. The fifteenth is where you start to notice exceptions, edge cases, things people mention offhand that turn out to be the actual problem. Your mental model gets specific enough to be wrong in useful ways, which means it is finally specific enough to be right.

Here is what the trajectory feels like in practice.

In the first few months, you ask basic questions and feel stupid. You read support tickets and do not know what to look for. You watch a user session and can barely resist explaining what they are doing wrong. The questions feel too simple to be worth asking in front of experienced people. This is normal. The discomfort is the practice working.

After three to six months, you start noticing patterns across conversations. A user mentions something and you recall three other users who said something similar but framed it differently. You can predict, roughly, what a user will say about a problem before they say it. Your predictions are often wrong. But they are specific enough to be wrong in useful ways, and each wrong prediction sharpens the model.

After six months or more, when a feature proposal comes up in a meeting, you can feel whether it fits. Not because you are guessing. Because your mental model is detailed enough to simulate a user's reaction. You still get it wrong sometimes. But your error rate drops and your speed of correction increases.

This is what Product B's team had. It was not magic. It was time in proximity, compounded.

The musician analogy is exact. A musician who has practiced a song a thousand times can feel when a note is wrong. That feeling is not a gift. It is the output of structured repetition. Product thinking works the same way. The repetition is contact with reality. The structure is the three components: territory knowledge, experience simulation, fog tolerance.

Frameworks come second

So why not build a framework for developing thinking quality?

Because frameworks work by abstracting away from messy reality. That is their value. They take complex, ambiguous situations and give you clean categories. But abstraction is exactly what prevents them from doing this particular job. Thinking quality comes from contact with unabstracted reality. The mess. The exceptions. The user who does not fit your persona.

A framework for developing thinking quality would need to say: go sit with messy reality for six months. But that is not a framework. That is a practice. And the reason people reach for frameworks in the first place is to avoid sitting with messy reality. Frameworks feel productive. The thinking feels like stalling. That instinct is backwards, but it is nearly universal.

This is why the argument is about sequencing. Frameworks are not bad. They are just prioritized too early, before you consistently have quality inputs. People reach for them before the thinking is done, because the framework provides structure and structure feels like progress. But structure applied to shallow thinking produces well-organized shallow output.

The simple questions are the bridge. What is this person trying to do? Why? How often? How painful? If you cannot answer these with specificity, you are not ready for the framework. The questions check whether your thinking is grounded enough to benefit from structure. If it is not, no amount of structure will fix it.

Three practices

Three things develop thinking quality. None are glamorous. All require regularity over intensity.

1. Weekly immersion, not periodic research.
Put yourself in front of real users or their unfiltered output at least weekly. Not through a research team's summary. Not through dashboards. Through direct exposure to the raw material: support tickets, community forums, live sessions, actual conversations. The compounding happens in the regularity, not in any single session every few weeks.
2. Simple questions before sophisticated ones.
When discussing a feature, when talking with the team, when interpreting a metric, answer the basic questions first. What problem does this solve? For whom? How often? How painful? Write them down. Get real answers, not your assumed version. If you cannot answer these with specificity, you are not ready for the sophisticated questions.
3. Permission to wait.
In those moments when you feel the urge to answer immediately, whether from the pressure of perception or the discomfort of not knowing, resist it. The discomfort of ambiguity is not a signal that something is wrong. It is a signal that you have not yet seen clearly. The best product thinkers I know are not faster than everyone else. They are slower at the right moments. They buy themselves time to see, and that time is where the quality comes from.

What this is really about

People use what we build. They need it.

When the thinking is clear, the product does the right thing at the right moment. There is a beauty and delight that only nuanced understanding yields.

When a product is built from fear, from the pressure to appear productive, from the need to ship something by a deadline regardless of whether the thinking was done, that shows too. The user pays for our lack of clarity. They pay with their time, their frustration, their quiet decision to stop using the thing and go find something else.

Good product thinking is not a mystery. It is a discipline. It is territory knowledge, experience simulation, and fog tolerance, developed through sustained practice, compounding over time. No framework will build it for you.