
Competitive Analysis: My Process

By Beatriz · 7 min read

> [!note] Key takeaway: analysis is only useful if it changes positioning, enablement, or roadmap decisions.

Most competitive analysis fails for one of two reasons:

  • it becomes a giant spreadsheet nobody reads
  • it turns into opinion disguised as research

My process is built to avoid both. I do not start with "compare everything." I start with the business question the team needs answered.

What question am I trying to answer with competitive analysis?

Before I open a doc, I define the job:

  • Are we trying to sharpen positioning?
  • Support a launch?
  • Help sales handle objections?
  • Understand pricing pressure?
  • Find whitespace in the category?

The question shapes the analysis.

If the goal is sales enablement, I care about objection handling and differentiation under pressure.

If the goal is positioning, I care more about narrative territory, proof patterns, and which buyer pain points are over- or under-served.

If the goal is roadmap input, I care about gaps buyers actually mention, not just features visible on a pricing page.

Without that scope, teams over-collect and under-decide.

How do I choose which competitors to include?

I use three buckets:

  • direct competitors: same buyer, similar problem, similar budget line
  • indirect competitors: different approach to the same problem
  • default alternatives: status quo, internal workflows, agencies, spreadsheets

That third bucket matters most and gets ignored most often.

Example: a PMM team wanted a battlecard against three named vendors. After calls, it became clear the real competitor in most deals was not those vendors. It was a fragile internal process run by one operations lead with a spreadsheet and a lot of tribal knowledge. That changed the messaging from "better than vendor X" to "lower risk than keeping this mission-critical workflow in one person's head."

What sources do I use to get signal instead of noise?

I use sources in this order:

  1. competitor homepage, product pages, pricing, docs
  2. sales call notes, win/loss, customer interviews
  3. demo videos, webinars, founder interviews
  4. review sites, community threads, analyst commentary

The goal is to compare:

  • how the company describes itself
  • how buyers describe the problem
  • where the two do or do not match

That mismatch is usually where opportunity lives.

For technical categories, I look closely at docs, setup friction, and proof surfaces, not just homepage claims. A product may sound differentiated in the hero section and then look identical once you inspect the implementation details. How to Write Docs That AI Tools Actually Cite is useful here because strong docs often reveal the strongest real positioning.

How do I compare competitors without drowning in features?

I avoid giant feature matrices until the end. First I score competitors across a few strategic dimensions:

  • target buyer
  • core problem narrative
  • category framing
  • differentiator claim
  • proof model
  • pricing posture
  • buying-motion fit

Then I layer in product detail only where it affects choice.

This is the key distinction: competitive analysis should explain why a buyer might choose each option, not just what each option includes.

Here is the kind of synthesis I want:

| Competitor type | What they win on | Where they are weak |
| --- | --- | --- |
| Premium incumbent | trust, integrations, procurement comfort | slow setup, generic messaging |
| AI-native challenger | speed, modern workflow fit, founder energy | weak proof, unclear limits |
| Default alternative | low switching cost, known process | hidden labor cost, poor scalability |

That table creates strategy. A 70-row feature sheet usually does not.

How do I turn the analysis into positioning recommendations?

I look for four output types:

  • overclaimed territory: everyone says the same thing
  • under-owned territory: buyers care, competitors say little
  • proof gaps: claims exist but evidence is weak
  • workflow gaps: the buyer journey is clunky or risky

That leads directly into positioning choices.

For example, if everyone claims automation but nobody explains when human review still matters, "automation with human control" may become the more credible narrative. If everyone says "enterprise-grade" but few explain implementation clearly, operational transparency may be the opening.

That is why I pair this work closely with Positioning Framework I Use for Clients. The analysis is input. Positioning is the decision.

What deliverables do I produce so the work actually gets used?

Usually three:

  • a concise market narrative memo
  • a comparison matrix for internal teams
  • battlecards for sales and customer-facing teams

If the team is early stage, the memo matters most.

If the team already has pipeline and an active sales motion, battlecards matter more.

I keep deliverables short enough that people will revisit them. The point is not archival completeness. The point is operating leverage.

What mistakes make competitive analysis useless?

The biggest ones:

  • collecting information with no decision owner
  • treating review sites as ground truth
  • optimizing for feature parity instead of buyer choice
  • ignoring the status quo as a competitor
  • updating the deck once a quarter and never feeding sales learnings back in

Competitive analysis should be alive. If sales keeps hearing the same objection for three weeks, the analysis should reflect it.

Bottom line

My process is simple on purpose:

  1. define the decision
  2. scope the real competitors
  3. gather evidence from buyer-facing surfaces
  4. synthesize around narrative, proof, and workflow
  5. turn it into usable recommendations

If the work does not change positioning, enablement, or prioritization, it was research theater.