
Market Research: How I Do It

By Beatriz · 7 min read


When teams say they need market research, they often mean one of three things:

  • they do not understand the buyer well enough
  • they cannot explain why they win
  • they are unsure which market narrative to bet on

Those are different problems, but the same research system can help with all of them.

My goal is not to produce a giant deck. My goal is to reduce uncertainty enough to make a better strategic decision.

What am I trying to learn before I start?

I want clean answers to five questions:

  1. What problem feels urgent enough to change behavior or budget?
  2. What alternatives are buyers using today?
  3. How do buyers describe the situation in their own language?
  4. Which proof points make a new solution believable?
  5. Where do competitors overclaim, under-explain, or fall short?

If the research cannot help answer those, it is probably too broad.

Which sources matter most in a practical research sprint?

I use a mix of first-party and outside inputs.

The most useful sources are usually:

  • customer and prospect interviews
  • sales call recordings
  • support or onboarding questions
  • win-loss notes
  • competitor pages, demos, and pricing
  • category reports or ecosystem trend data

I do not treat every source equally. First-party language tends to matter more than abstract industry commentary.

Example: one founder kept referencing a large analyst report to justify a category bet. Buyers on real calls did not use that category language at all. The report was not wrong, but it was not the language that moved the market.

How do I structure the qualitative part?

I am listening for patterns in four buckets:

  • pain language
  • buying triggers
  • comparison language
  • trust signals

In interviews or call reviews, I highlight phrases buyers repeat, especially around moments of frustration, urgency, and evaluation.

What I do not do is summarize every conversation equally. I care more about repeated patterns than comprehensive notes.

That is usually enough to surface the real GTM questions:

  • are we solving an active pain or a nice-to-have?
  • are we being compared to direct competitors or a do-nothing alternative?
  • is the trust gap about product capability, pricing, or risk?

How do I use quantitative inputs without overcomplicating it?

Quantitative data is useful when it helps confirm or sharpen a hypothesis.

I usually look at:

  • conversion rates between key funnel steps
  • activation or feature usage by segment
  • pipeline movement by audience or source
  • page behavior on core narrative pages

This is not about building a perfect attribution model. It is about checking whether the behavior supports what the qualitative work suggests.
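To make the funnel check concrete, here is a minimal sketch of the step-to-step comparison I mean. The step names, segments, and counts are made up for illustration; they do not come from any real client.

```python
# Hypothetical funnel data: counts[segment][step] = accounts that reached that step.
funnel_steps = ["visited", "signed_up", "activated", "converted"]

counts = {
    "smb":        {"visited": 4200, "signed_up": 610, "activated": 280, "converted": 95},
    "mid_market": {"visited": 1800, "signed_up": 240, "activated": 70,  "converted": 22},
}

# Print the conversion rate between each adjacent pair of steps, per segment.
for segment, by_step in counts.items():
    print(segment)
    for prev_step, next_step in zip(funnel_steps, funnel_steps[1:]):
        rate = by_step[next_step] / by_step[prev_step] if by_step[prev_step] else 0.0
        print(f"  {prev_step} -> {next_step}: {rate:.1%}")
```

The point is not the script. It is that the comparison stays simple enough to rerun whenever the qualitative picture shifts.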

Example: a team believed mid-market was the priority because deal size looked attractive. Usage data showed smaller teams were reaching value faster and expanding more reliably. Research then showed the message was easier to land with smaller operators because the pain was more immediate. That changed the GTM sequence.

How do I translate research into a usable output?

My output is usually a short strategy memo, not a research graveyard.

It includes:

  • the priority audience
  • the urgent problem statement
  • the main alternatives in the buyer’s mind
  • the strongest differentiators
  • the proof the market needs
  • the biggest GTM risks or gaps

That becomes the input for positioning, messaging, launch planning, and content.

If the output does not force a decision, the research did not go far enough.

What mistakes make market research less useful?

The biggest ones:

  • studying the market without narrowing to a buyer
  • confusing competitor information with customer truth
  • collecting more notes than the team can act on
  • treating research like a one-time project

Research should create focus, not complexity.

That is why I like short, decision-led sprints. They are easier to socialize and easier to revisit as the company changes.

What does a strong example look like?

For a developer infrastructure client, the original assumption was that the main challenge was awareness. Research showed something different.

Buyers had heard of the category. The real issue was trust. The market did not believe the product was ready for production-critical use.

That changed the work immediately:

  • positioning had to emphasize operational credibility
  • messaging needed stronger proof
  • content had to explain implementation, not just vision
  • launch assets had to reduce perceived risk

Without the research step, the team would have spent a quarter trying to buy attention for the wrong problem.

How does this connect to the rest of the PMM system?

Market research is upstream of almost everything: positioning, messaging, launch planning, and content all build on it.

If those downstream assets feel generic, weak, or overcomplicated, the research layer is usually where I go first.

Good research does not make the decision for you. It makes the decision clearer.