Section 3

Analytics & Research

How to combine quantitative analytics with qualitative research for robust experimentation decisions.

The Quant + Qual Principle

Quantitative data tells you what is happening. Qualitative research tells you why. Neither alone is sufficient. The most effective experimentation programs combine both:

Quantitative = What

Numbers, metrics, statistical significance. Answers: how many, how often, how much, what percentage. Scales well. Reveals patterns but not motivations.

Qualitative = Why

Stories, observations, emotions. Answers: why do they do that, what do they expect, what frustrates them. Deep but doesn't scale. Reveals motivations but can be biased.

The workflow: Use qualitative research to generate hypotheses (discover problems, understand context) and quantitative analytics to validate them (measure impact, confirm at scale). Then use qualitative research again to interpret the quantitative results (understand why the numbers look the way they do).

Quantitative Methods

A/B Testing (Split Testing)

Show two (or more) variants to randomly assigned user groups and measure which performs better on a key metric. The gold standard for causal inference in product experiments.

When to use:

  • You have enough traffic for statistical significance
  • You are testing a specific, isolated change
  • You need causal evidence (not just correlation)

Tools:

  • LaunchDarkly, Optimizely, VWO
  • Statsig, Eppo, GrowthBook
  • Google Optimize (sunset; use alternatives)

Pitfall: Running A/B tests without sufficient sample size leads to false positives. Use a sample-size calculator before launching. Most tests need thousands of observations per variant.
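As a rough guide, the per-variant sample size can be estimated with the standard two-proportion formula. A minimal sketch in Python, with the z-values fixed for a two-sided α of 0.05 and 80% power (function name and defaults are illustrative, not from any specific tool):

```python
import math

def sample_size_per_variant(baseline_rate, mde):
    """Rough sample size per variant for a two-proportion A/B test.

    baseline_rate: current conversion rate (e.g. 0.05 for 5%)
    mde: minimum detectable effect, absolute (e.g. 0.01 for +1 point)
    Assumes alpha = 0.05 (two-sided) and 80% power.
    """
    z_alpha = 1.96  # two-sided alpha = 0.05
    z_beta = 0.84   # power = 0.80
    p1, p2 = baseline_rate, baseline_rate + mde
    p_bar = (p1 + p2) / 2
    n = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / mde ** 2
    return math.ceil(n)

# Detecting a 1-point lift on a 5% baseline needs roughly 8,000 users per variant.
print(sample_size_per_variant(0.05, 0.01))
```

Note how quickly the requirement grows as the detectable effect shrinks: halving the MDE roughly quadruples the sample size.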

Funnel Analysis

Map the step-by-step user journey (e.g., landing page → sign-up → onboarding → first action → payment) and measure drop-off at each stage. Reveals where users abandon the process.

When to use:

  • Diagnosing where conversion breaks down
  • Prioritizing which step to optimize first
  • Measuring the impact of changes on each step

Tools:

  • Mixpanel, Amplitude, PostHog
  • Google Analytics 4, Heap
  • Custom SQL on event data
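The drop-off calculation itself is straightforward to sketch on raw event data. A minimal Python example; the event log, user IDs, and funnel steps are all made up for illustration:

```python
# Hypothetical event log: one (user_id, step) row per step a user reached.
events = [
    ("u1", "landing"), ("u1", "signup"), ("u1", "onboarding"),
    ("u2", "landing"), ("u2", "signup"),
    ("u3", "landing"),
    ("u4", "landing"), ("u4", "signup"), ("u4", "onboarding"),
    ("u4", "first_action"),
]
steps = ["landing", "signup", "onboarding", "first_action", "payment"]

def funnel(events, steps):
    """Return (step, users, conversion-from-previous-step) per step."""
    users_at = {s: {u for u, st in events if st == s} for s in steps}
    report, prev = [], None
    for s in steps:
        n = len(users_at[s])
        conv = n / prev if prev else 1.0
        report.append((s, n, conv))
        prev = n if n else prev  # keep last non-zero count to avoid /0
    return report

for step, n, conv in funnel(events, steps):
    print(f"{step:12s} {n:3d} users  {conv:.0%} of previous step")
```

In practice the same query runs as SQL over your event table; the logic — distinct users per step, divided by the previous step's count — is identical.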

Cohort Analysis

Group users by when they started (sign-up week/month) and track their behavior over time. Reveals whether retention is improving, whether changes affect new vs. existing users differently, and the true shape of engagement.

When to use:

  • Measuring retention and churn
  • Evaluating whether product changes help new users
  • Spotting trends that aggregate metrics hide

Tools:

  • Amplitude, Mixpanel (built-in cohort charts)
  • Looker, Metabase (custom SQL)
  • Spreadsheets for early-stage
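A retention-by-cohort table reduces to grouping activity rows by signup period and offset. A hedged sketch in Python with made-up weekly data (any real implementation would read these rows from your event store):

```python
from collections import defaultdict

# Hypothetical activity rows: (user_id, signup_week, active_week)
activity = [
    ("u1", 0, 0), ("u1", 0, 1), ("u1", 0, 2),
    ("u2", 0, 0), ("u2", 0, 1),
    ("u3", 1, 1), ("u3", 1, 2),
    ("u4", 1, 1),
]

def retention_table(activity):
    """Fraction of each signup cohort active N weeks after signup."""
    cohort_users = defaultdict(set)
    active = defaultdict(set)  # (cohort, weeks_since_signup) -> users
    for user, cohort, week in activity:
        cohort_users[cohort].add(user)
        active[(cohort, week - cohort)].add(user)
    return {
        cohort: {
            offset: len(active[(cohort, offset)]) / len(users)
            for (c, offset) in active if c == cohort
        }
        for cohort, users in cohort_users.items()
    }

print(retention_table(activity))
# cohort 0 -> {0: 1.0, 1: 1.0, 2: 0.5}; cohort 1 -> {0: 1.0, 1: 0.5}
```

Reading the table diagonally (same calendar week, different cohorts) is what lets you separate "the product got better for new users" from "everyone got more active."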

Surveys at Scale

Structured questionnaires sent to large user populations. Useful for measuring satisfaction (NPS, CSAT), feature demand, and demographic segmentation. Straddles the quant/qual boundary.

Key survey types:

  • NPS (Net Promoter Score)
  • PMF Survey ("How would you feel if you could no longer use this product?")
  • CSAT (Customer Satisfaction Score)
  • CES (Customer Effort Score)

Tools:

  • Typeform, SurveyMonkey
  • Hotjar (in-app surveys)
  • Sprig, Survicate (contextual)
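NPS itself is a simple calculation once responses are in: on the 0–10 scale, the percentage of promoters (9–10) minus the percentage of detractors (0–6), with passives (7–8) counted in the denominator only. A minimal sketch:

```python
def nps(scores):
    """Net Promoter Score from 0-10 responses: % promoters - % detractors."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# 3 promoters, 2 passives, 2 detractors out of 7 responses -> 14
print(nps([10, 9, 9, 8, 7, 6, 3]))
```

The result ranges from -100 (all detractors) to +100 (all promoters); note that passives still dilute the score even though they don't count on either side.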

Qualitative Methods

User Interviews

One-on-one conversations with target users to understand their goals, frustrations, workflows, and mental models. The most versatile qualitative method.

Best practices:

  • Ask about past behavior, not future intentions
  • Use open-ended questions ("Tell me about...")
  • Follow the energy: probe where emotion appears
  • 5–8 interviews per segment reveal the major themes

Tools:

  • Zoom, Google Meet (remote)
  • Dovetail, Grain (transcript + analysis)
  • Respondent, User Interviews (recruiting)

Usability Testing

Watch real users attempt specific tasks with your prototype or product. Reveals navigation confusion, missing affordances, and broken mental models.

Two formats:

  • Moderated: Facilitator guides the session in real-time. Deeper insights.
  • Unmoderated: User completes tasks independently (recorded). Faster, more scalable.

Tools:

  • Maze, UserTesting (unmoderated)
  • Lookback, Lyssna (moderated)
  • Figma prototypes (as test material)

Session Recordings & Heatmaps

Replay actual user sessions to see clicks, scrolling, and rage clicks. Heatmaps aggregate interaction data visually. Useful for spotting friction in existing products.

Tools:

Hotjar, FullStory, PostHog, Microsoft Clarity (free), LogRocket

Card Sorting & Tree Testing

Card sorting asks users to organize items into categories (tests information architecture). Tree testing asks users to navigate a text-based hierarchy to find items (validates navigation structure).

Tools:

Optimal Workshop, Maze, UXtweak

Diary Studies

Participants log their experiences over days or weeks. Captures behavior in context over time — especially useful for understanding habits, triggers, and long-term value perception.

Tools:

dscout, Indeemo, Google Forms (low-budget), dedicated Slack/Telegram channels

Measurement Frameworks

Use these frameworks to structure what to measure, so you're not drowning in metrics without a thesis:

HEART Framework (Google)

Five dimensions of user experience quality:

  • Happiness: satisfaction, NPS
  • Engagement: frequency, depth
  • Adoption: new users, upgrades
  • Retention: churn, repeat use
  • Task Success: completion, errors

For each dimension, define Goals → Signals → Metrics.

Pirate Metrics / AARRR (Dave McClure)

A funnel-based framework for growth:

  • Acquisition: How do users find us?
  • Activation: Do they have a great first experience?
  • Retention: Do they come back?
  • Revenue: Do they pay?
  • Referral: Do they tell others?

Jobs to Be Done (JTBD)

Not a metric framework per se, but a lens for understanding what to measure. Ask: "What job is the user hiring this product to do?" Then measure how well you perform that job — speed, reliability, convenience, cost. Pioneered by Clayton Christensen.

North Star Metric

A single metric that best captures the core value your product delivers. All experiments should ultimately move the North Star. Examples: Airbnb = "nights booked," Spotify = "time spent listening," Slack = "messages sent within teams."

Combining Quant & Qual: A Practical Workflow

1. Explore (Qual)

Conduct 5–8 user interviews to understand the problem space. Identify pain points, workarounds, and unmet needs. Generate hypotheses about what matters.

2. Quantify (Quant)

Analyze behavioral data to validate the qualitative findings at scale. How many users experience this pain point? How often? What's the drop-off rate at the step where users complained?

3. Prototype & Test (Both)

Build a prototype addressing the validated pain point. Run usability tests (qual) to refine the design. Then launch an A/B test or beta (quant) to measure real impact.

4. Interpret (Qual)

After the quant experiment concludes, interview users from both variants. Understand why the winning variant won. This informs the next iteration and prevents misinterpreting the numbers.
