How I did months of competitive research in an afternoon

When I started my current job, I was months behind on competitive context. The product had been around for a while in pilot and we were looking to scale (from 1 → 100). The market was moving fast and sales conversations were already happening. And I was the new kid on the block who needed to form a strategic direction without the luxury of spending weeks buried in desk research.

In a previous life (meaning pre-AI), I would have opened 40 browser tabs, screenshotted pricing pages, squinted at competitor changelogs, and slowly built a comparison doc that would be outdated before I finished writing it. With AI tools, my approach was radically different: I used Claude, and the whole thing took an afternoon.

I use Claude in two modes and they do very different things:

  • Deep research mode for mapping the landscape. Claude’s deep research feature goes out and actually searches the web, reads sources, and synthesizes what it finds into a structured report with citations. I use this when I need to understand who the competitors are, how they position themselves, what they charge, what developers say about them, etc.
  • Conversation mode for analysis. Once I have the raw material (a deep research report, competitor docs I’ve gathered myself, notes from sales meetings, public API specs), I bring it into a regular Claude conversation and ask it to do the heavy thinking. I’ll typically build a comparison matrix or ask it to run a gap analysis.

A quick note on tool choice: I’ve tried deep research features across ChatGPT, Perplexity, and Claude. ChatGPT’s deep research was historically my go-to, but lately I’ve noticed that it tends to produce sprawling reports that need heavy editing before they’re useful. Often (and this is the big reason why I stopped using it) there aren’t any sources to back up its claims either. Perplexity is great for quick, source-heavy lookups but less capable when you need structured analysis and reasoning over what it finds. Claude, by comparison, hits the sweet spot for me: the research output is well-structured, the sources are traceable, and I can move straight from research into analytical conversation without switching tools.

Here’s a typical prompt that I’ll use for market research:

Role & context:
You are a senior product strategist in [domain, e.g. corporate banking APIs / developer payments infrastructure]. I'm evaluating our competitive position against [Competitor A], [Competitor B], and [Competitor C]. The attached materials include public documentation, API references, feature lists, pricing pages, and developer guides for all four products.
Target buyer: [Persona, e.g. a technical integration lead at a mid-size corporate] choosing a provider for a production integration. Everything you produce should be filtered through what this buyer actually cares about.
Analysis — in three layers:
1. Comparison matrix
Evaluate across these dimensions, drawing only on what the attached materials can support. For each cell, state the specific evidence — not a judgment like "strong" or "good."
  • Core feature coverage and API capabilities
  • Documentation quality (completeness, structure, runnable examples, search/navigation)
  • Pricing model and transparency
  • Developer ergonomics (SDK quality, error handling, sandbox/testing environments)
  • Onboarding clarity (getting-started guides, time-to-first-call friction points you can infer from the docs)
  • Support model (stated SLAs, community, dedicated support tiers)
For any dimension where the documents don't give enough information to assess fairly, say so rather than speculating.
2. Buyer-facing differentiators
From the matrix, pull the 2–3 differences a sales team could use in a live client conversation. Each must pass this test: would the target buyer change their shortlist because of it? If not, cut it.
3. Buyer decision narrative
Now put yourself in the target buyer's seat. You're signing off on a production integration. For each of the four options: what makes you confident, what makes you nervous, and what's the one question you'd need answered before committing?
Output format:
  • The matrix as a structured table (dimensions × providers), with evidence in each cell, not ratings.
  • Differentiators as a short numbered list with one sentence of reasoning each.
  • The buyer narrative as a brief paragraph per provider.
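If you run this kind of research for more than one market, it helps to fill the bracketed placeholders programmatically rather than editing the prompt by hand each time. Here’s a minimal sketch in Python; the function and placeholder names are my own illustration, not part of any official tool, and the template is abbreviated to the role-and-context section:

```python
# Sketch: resolve the bracketed placeholders in the research prompt
# before pasting it into a Claude conversation. Names are illustrative.

TEMPLATE = """\
Role & context:
You are a senior product strategist in {domain}. I'm evaluating our \
competitive position against {competitors}.
Target buyer: {persona} choosing a provider for a production integration. \
Everything you produce should be filtered through what this buyer \
actually cares about."""


def fill_prompt(domain: str, competitors: list[str], persona: str) -> str:
    """Return the research prompt with all placeholders resolved."""
    return TEMPLATE.format(
        domain=domain,
        competitors=", ".join(competitors),
        persona=persona,
    )


prompt = fill_prompt(
    domain="corporate banking APIs",
    competitors=["Competitor A", "Competitor B", "Competitor C"],
    persona="a technical integration lead at a mid-size corporate",
)
print(prompt)
```

The same pattern extends to the analysis layers and output-format sections: keep the wording fixed, swap only the variables, and you get comparable reports across markets.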

Going back to my situation, two things came out of the analysis that I didn’t expect, and both helped me find that elusive blue ocean for my product:

  • Concrete product differentiators for marketing and sales material
  • The realization that the real gap between us and competitors wasn’t features or data; it was the experience layer (things like onboarding and integration support), and that’s where I chose to invest

That said, I should be upfront about the limitations. Claude will sometimes confidently cite something that turns out to be subtly wrong or outdated, so you have to check against primary sources. And the biggest trap is that the output looks so polished you might convince yourself you don’t need to talk to customers anymore. You totally still need to talk to customers! AI is simply a way to compress the desk research.

The real value of AI is that it changes what you spend your time on. The hours I used to burn gathering and organising information now go toward interpreting it. I can focus more on the important product management skills like:

  • Making trade-off calls
  • Building conviction around a direction
  • Telling a coherent story to stakeholders

These are skills that AI can’t automate. But it can free up the time for me to actually practise them.
