Synthesis Practitioner Guide

This guide translates the synthesis insights into actionable steps for teams building products, designing interventions, or scaling behavioral initiatives.

Evidence note: Any numeric thresholds on this page are illustrative heuristics unless linked to the Evidence Ledger or a primary source + date. Calibrate by domain.


Part 1: Behavior Selection (Before You Build)

Step 1.1: Observe What Users Are Already Trying to Do

Duration: 2–3 weeks
Outputs: Behavior observations, friction points, workarounds

What to do:

  1. Conduct 10–15 user interviews focused on:
    • What problem are you trying to solve? (not what product do you want?)
    • How are you currently attempting to solve it? (workarounds, informal solutions, competing products?)
    • How often do you need to do this? (frequency)
    • What’s the hardest part? (friction point)
    • If this were frictionless, how often would you do it? (natural frequency)
  2. Observe actual behaviors (don’t rely on stated preferences):
    • Shadow users in their environment
    • Watch what they do with current solutions
    • Note workarounds and informal solutions
    • Identify the actual use case (often differs from stated need)
  3. Map the behavior chain:
    • What triggers the need? (context, timing, person)
    • What’s the first action? (decision, friction point)
    • What’s the reward? (intrinsic, social, economic)
    • What blocks repetition? (friction, motivation, context)

Red flags:

  • Users can’t articulate the need clearly
  • Behavior frequency is lower than your assumptions
  • Users are using competitor products inconsistently (suggests low fit)
  • Workarounds are complex or involve manual steps (suggests friction opportunity)

Green flags:

  • Users are already paying for partial solutions
  • Multiple workarounds suggest a deep problem (good sign)
  • Users can articulate their frustration with status quo (high motivation)
  • Natural frequency is higher than you expected (suggests strong fit)

Example: Instagram’s Observation Phase

  • Observed: Users were manually cropping photos to square format before uploading elsewhere
  • Insight: Users want simple, beautiful photo sharing on mobile
  • Behavior selected: Photo posting (not check-ins)
  • Frequency: Users were already doing this multiple times per day with friction

Step 1.2: Screen Candidate Behaviors with the Behavior Fit Assessment

Duration: 1 week
Output: Short list of viable behaviors + selected target behavior

What to do:

Create a short list of candidate behaviors (typically 3–10). For each candidate, use the Behavior Fit Assessment to score your population (not an individual) on three dimensions:

  • Identity Fit (who they are): Does this behavior align with how they see themselves?
  • Capability Fit (what they can do): Can they actually perform it with their current skills/resources?
  • Context Fit (what the environment supports): Does the real environment enable it reliably?

Threshold rule (canonical): all three dimensions must score ≥6/10 to proceed. If any dimension is below 6, you’re forcing, not matching.

Note: motivations are handled inside Context Fit (and explicitly inside the full Behavioral State Model when you need diagnostic depth). The Behavior Fit Assessment stays intentionally compact to prevent drift.

Template: Behavior Fit Assessment table

Candidate behavior | Identity (0–10) | Capability (0–10) | Context (0–10) | Viable (all ≥6)? | Notes
Behavior A |  |  |  | Y / N |
Behavior B |  |  |  | Y / N |
Behavior C |  |  |  | Y / N |

Decision rule

  • If no behaviors are viable: generate more candidates or redesign constraints to raise the limiting dimension.
  • If multiple behaviors are viable: choose the behavior with the highest minimum score (tie‑break by expected outcome impact and measurement feasibility).
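The threshold and tie-break rules above can be sketched in code. Candidate names and scores below are hypothetical, and the ≥6/10 bar is the guide's illustrative heuristic:

```python
# Sketch of the Behavior Fit Assessment decision rule (Step 1.2).
# Candidate names and scores are hypothetical.

THRESHOLD = 6  # canonical rule: all three dimensions must score >= 6/10

def is_viable(scores):
    """Viable only if identity, capability, and context all clear the bar."""
    return all(scores[d] >= THRESHOLD for d in ("identity", "capability", "context"))

def select_behavior(candidates):
    """Pick the viable candidate with the highest minimum score.

    Ties would be broken by expected outcome impact and measurement
    feasibility, which require judgment outside this sketch.
    """
    viable = {name: s for name, s in candidates.items() if is_viable(s)}
    if not viable:
        return None  # generate more candidates or redesign constraints
    return max(viable, key=lambda name: min(viable[name].values()))

candidates = {
    "Behavior A": {"identity": 8, "capability": 7, "context": 5},  # context < 6: forcing, not matching
    "Behavior B": {"identity": 7, "capability": 7, "context": 8},  # minimum score 7
    "Behavior C": {"identity": 9, "capability": 6, "context": 6},  # minimum score 6
}
print(select_behavior(candidates))  # Behavior B
```

Note that Behavior A is excluded despite the highest identity score: one failing dimension vetoes the candidate, which is exactly what the "forcing, not matching" rule encodes.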

Step 1.3: Validate Behavior Market Fit

Duration: 4–6 weeks
Output: Behavior Market Fit decision (proceed, pivot, or kill)

What to do:

Build Minimal Viable Prototype

Create the simplest possible version to test the target behavior:

  • Single behavior focus (strip out everything else)
  • Minimal UI (focus on behavior enablement, not design)
  • Real users (early adopters, not friends/colleagues)

Measure These KPIs

  1. Time to First Behavior (TTFB)
    • How long from signup to first behavior completion?
    • Target: <5 minutes for consumer, <15 minutes for enterprise
    • If well above target (e.g., >10 minutes for consumer): capability friction is too high; simplify the behavior or UX
  2. First Behavior Completion Rate
    • What % of new users complete first behavior?
    • Target: >20% is viable; >40% is strong; >60% is exceptional
    • If <10%: behavior not desired OR capability friction too high
  3. Natural Frequency (Week 1)
    • How many times do users perform behavior in first week without prompts?
    • Target: 3+ if behavior is daily, 2+ if behavior is episodic
    • If <1: motivation fit or context fit is low
  4. Day 7 Retention
    • What % of users return on day 7 to perform behavior again?
    • Target: >30% for consumer; >50% for enterprise
    • If <10%: behavior not sustainable; strong signal to pivot
  5. Day 30 Retention
    • What % of day-1 users perform behavior at least once in month 1?
    • Target: >20% for consumer; >40% for enterprise
    • If <5%: behavior has poor fit or solution adds friction
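These KPIs fall out of two timestamps per user: signup time and behavior-completion times. A minimal sketch with hypothetical event data (exact retention definitions, e.g. "on day 7" vs. "by day 7", vary by team and are an assumption here):

```python
# Illustrative computation of Step 1.3 KPIs from per-user event logs.
# User IDs, timestamps, and the retention window definition are hypothetical.
from datetime import datetime, timedelta

users = {
    # user_id: (signup_time, [behavior completion times])
    "u1": (datetime(2026, 1, 1, 9, 0), [datetime(2026, 1, 1, 9, 3), datetime(2026, 1, 8, 10, 0)]),
    "u2": (datetime(2026, 1, 1, 9, 0), [datetime(2026, 1, 1, 9, 20)]),
    "u3": (datetime(2026, 1, 1, 9, 0), []),
}

completers = {u: (s, ev) for u, (s, ev) in users.items() if ev}

# 1. Time to First Behavior (median minutes from signup to first completion)
ttfbs = sorted((ev[0] - s).total_seconds() / 60 for s, ev in completers.values())
median_ttfb = ttfbs[len(ttfbs) // 2]

# 2. First Behavior Completion Rate
completion_rate = len(completers) / len(users)

# 4. Day 7 Retention: performed the behavior again on or after day 7
def retained(signup, events, day):
    return any(e >= signup + timedelta(days=day) for e in events)

d7 = sum(retained(s, ev, 7) for s, ev in users.values()) / len(users)
print(f"TTFB {median_ttfb:.0f} min, completion {completion_rate:.0%}, D7 {d7:.0%}")
```

Natural frequency (KPI 3) is the count of events in the first week divided by users, computed the same way from the event lists.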

Analyze Results by Segment

Critical: Don’t average across segments. Break down:

  • Early adopters vs. mainstream
  • High-motivation users vs. low-motivation
  • Users in context-optimal environment vs. sub-optimal

What you’ll find:

  • Early adopters may show high retention (>50%) while mainstream shows low (<10%)
  • This signals: behavior may be high-fit for enthusiasts, low-fit for mainstream
  • Decision: either redesign for mainstream, or accept enthusiast-only product
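The averaging trap is easy to see with hypothetical cohort numbers: a blended D30 figure can look acceptable while the mainstream segment sits below the pivot line.

```python
# Why averaging across segments misleads (Step 1.3). All numbers are hypothetical.
segments = {
    "early_adopters": {"users": 200,  "retained_d30": 110},  # 55% D30: looks great
    "mainstream":     {"users": 1800, "retained_d30": 144},  # 8% D30: poor fit
}

blended = (sum(s["retained_d30"] for s in segments.values())
           / sum(s["users"] for s in segments.values()))
print(f"blended D30: {blended:.1%}")  # 12.7%, which hides the mainstream problem

for name, s in segments.items():
    print(f"{name}: {s['retained_d30'] / s['users']:.0%} D30")
```

The blended 12.7% clears a naive "10% means pivot" bar, yet 90% of the user base is retaining at 8%. Segmented reporting is what surfaces the "enthusiast-only product" decision.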

Example: Proposify’s Validation

  • Behavior selected: “Send first proposal”
  • Baseline: 14% of trial users sent a proposal (TTFB completion rate of 14%)
  • Problem: Users got lost in template choices before reaching core behavior
  • Pivot: Redesigned onboarding to guide directly to proposal creation
  • Result: 46% TTFB completion rate (+32 percentage points)
  • Outcome: Users who sent first proposal were 7x more likely to convert to paid
  • Decision: This is the behavior to optimize; build solution around it

Step 1.4: Make the Build/Pivot/Kill Decision

Decision framework:

TTFB Completion | D7 Retention | D30 Retention | Signal | Recommendation
>40% | >50% | >30% | Strong fit | BUILD: Proceed to Solution Market Fit
20–40% | 30–50% | 15–30% | Moderate fit | BUILD with caution: Validate capability + motivation
10–20% | 15–30% | 5–15% | Weak fit | INVESTIGATE: Conduct user research; likely capability or context friction
<10% | <15% | <5% | No fit | PIVOT: Either select different behavior or redesign context
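The decision table can be read as a simple classifier. Boundary handling (whether exactly 40% counts as strong or moderate) is an assumption here, since the table's bands abut:

```python
# Sketch of the Step 1.4 decision framework. Thresholds are the guide's
# illustrative heuristics; treatment of exact boundary values is an assumption.

def fit_signal(ttfb_completion, d7, d30):
    """Map the three validation metrics to a build/pivot recommendation."""
    if ttfb_completion > 0.40 and d7 > 0.50 and d30 > 0.30:
        return "BUILD"
    if ttfb_completion >= 0.20 and d7 >= 0.30 and d30 >= 0.15:
        return "BUILD with caution"
    if ttfb_completion >= 0.10 and d7 >= 0.15 and d30 >= 0.05:
        return "INVESTIGATE"
    return "PIVOT"

print(fit_signal(0.46, 0.55, 0.32))  # BUILD
print(fit_signal(0.14, 0.18, 0.06))  # INVESTIGATE
print(fit_signal(0.08, 0.10, 0.03))  # PIVOT
```

A metric that straddles two rows (say, strong TTFB but weak D30) falls to the weaker signal in this sketch, which matches the conservative spirit of the framework: all three metrics must support a decision before you escalate it.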

Red flags for pivoting:

  • TTFB completion stalls <10% despite UX improvements
  • Early adopters show high fit (>50% D30) but mainstream shows low (<10% D30)
  • Users report friction in the behavior itself, not the interface (behavior is too complex)
  • Intrinsic motivation is low; behavior requires external incentives to sustain

Example: Why Quibi Failed

  • Behavior selected: Passive short-form video consumption
  • TTFB: Users could start watching easily
  • D1–D7 retention: Initially >30% (users curious about content)
  • D30 retention: Collapsed to <10%
  • Why: Context mismatch. Mobile users are active, multitasking, and in control. Passive consumption doesn’t fit mobile use. The behavior couldn’t sustain despite high production value.
  • Lesson: High initial engagement masked weak context fit. By D7–D14, the context mismatch killed retention.

Part 2: Solution Design (Behavior → Solution)

Step 2.1: Identify Current Friction Points

Duration: 1–2 weeks
Output: Friction map (what’s blocking behavior today)

What to do:

  1. Map the behavior chain with current solution:
    • What’s the trigger? (what prompts the user to start?)
    • What’s the first step? (where does friction start?)
    • What’s the slowest part? (where does TTFB accumulate?)
    • What’s the point of decision? (where do users abandon?)
    • What’s the reward? (how does user know behavior worked?)
  2. Categorize friction by type:
    • Capability friction: User lacks skill, time, or resources (solved by simplifying behavior)
    • Motivation friction: User is unclear why behavior matters (solved by clarifying reward)
    • Trigger friction: No cue exists to start behavior (solved by environmental design)
    • Context friction: Environment prevents behavior (solved by context change)
  3. Quantify friction impact:
    • Which friction point has the highest abandonment rate?
    • Which friction point takes the most time?
    • Which friction point is highest effort for users?
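Quantifying friction impact amounts to measuring drop-off between consecutive funnel steps. A minimal sketch with hypothetical step names and counts (loosely modeled on the Proposify example that follows):

```python
# Quantifying friction impact (Step 2.1): abandonment per funnel step.
# Step names and user counts are hypothetical.
funnel = [
    ("signup", 1000),
    ("template gallery", 900),
    ("proposal editor", 260),
    ("proposal sent", 140),
]

worst_step, worst_drop = None, 0.0
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    drop = 1 - n / prev_n  # fraction abandoning at this transition
    print(f"{prev_name} -> {name}: {drop:.0%} abandon")
    if drop > worst_drop:
        worst_step, worst_drop = (prev_name, name), drop

print("highest-friction transition:", worst_step)
```

In this sketch the gallery-to-editor transition loses ~71% of users, so that decision point (template choice) is where friction reduction would pay off first.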

Example: Proposify’s Friction Map

  • Trigger: User signs up for trial
  • First step: User sees template gallery (decision point: which template?)
  • Friction: Too many choices; user overwhelmed
  • Abandonment: 86% of users never reach proposal creation
  • Reward: Never experienced (users quit before sending proposal)

Solution: Remove choice friction; guide directly to core behavior


Step 2.2: Design for Simplicity, Not Features

Duration: 2–4 weeks
Output: Minimum Viable Behavior spec

What to do:

  1. Define Minimum Viable Behavior (MVB):
    • What is the absolute minimum version of the target behavior?
    • Remove all optional steps
    • Remove all choices that don’t directly support the behavior
    • Remove all features that aren’t required for first success

    Example: MVB for photo sharing

    • Take photo OR import from device
    • Optionally add caption
    • Post to feed
    • View feedback (likes, comments)
    • That’s it. No filters, no stories, no explore tab, no discovery.
  2. Reduce steps to first reward:
    • Goal: <5 steps to behavior completion
    • Each step should move toward first reward
    • Remove intermediate choices or configurations

    Example: MVB for proposal sending (Proposify)

    • Create proposal with template (auto-filled, minimal choices)
    • Customize headline and details (in-context editing)
    • Send to client
    • Get notification when client views
    • That’s it. No design tweaks, no branding options, no template exploration.
  3. Clarify the reward:
    • What is the immediate, visible reward for completing behavior?
    • Make this reward salient and immediate
    • Don’t delay gratification

    Examples:

    • Instagram: See photo in feed, get likes/comments (within seconds)
    • Proposify: See proposal accepted or client engagement (within hours)
    • Duolingo: See streak increase, get immediate feedback (within seconds)
    • Slack: Message appears in channel instantly; others see it (immediate)
  4. Remove decision points:
    • Every choice before behavior completion is friction
    • Default to the most common choice
    • Make alternatives discoverable after success

    Example: Remove choice friction

    • Instead of: “Choose template, customize settings, pick colors, then post”
    • Do this: “Create, auto-fill with best template, post immediately”
    • Then: “Customize after you’ve experienced success”

Step 2.3: Prototype and Test TTFB

Duration: 2 weeks
Output: TTFB benchmark; hypothesis for friction reduction

What to do:

  1. Build simplest possible prototype focusing only on the behavior path
  2. Test with 20–30 real users (target segment, not friends/team)
  3. Measure:
    • TTFB (time from start to behavior completion)
    • Completion rate (% who complete first behavior)
    • Friction points (where do users get stuck?)
  4. Compare to baseline:
    • If baseline TTFB was 10 minutes, target <5 minutes (50% reduction)
    • If baseline completion was 14%, target >40% (≈185% relative improvement)
  5. Iterate until:
    • TTFB <5 minutes
    • Completion rate >40%
    • Users report behavior felt “natural” or “obvious”
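The iteration exit criteria above can be expressed as a checklist function. Thresholds are the guide's illustrative targets, and the "natural/obvious" criterion is qualitative, so it is omitted here:

```python
# Step 2.3 exit criteria as a check. Targets are illustrative heuristics.

def iteration_done(ttfb_min, completion, baseline_ttfb_min, baseline_completion):
    """Return per-criterion pass/fail for the prototype iteration loop."""
    return {
        "ttfb_under_5min":      ttfb_min < 5,
        "ttfb_halved":          ttfb_min <= baseline_ttfb_min * 0.5,
        "completion_over_40pct": completion > 0.40,
        "completion_improved":  completion > baseline_completion,
    }

# Hypothetical run: baseline TTFB 10 min / 14% completion, prototype 4.2 min / 46%
status = iteration_done(ttfb_min=4.2, completion=0.46,
                        baseline_ttfb_min=10, baseline_completion=0.14)
print(all(status.values()))  # True -> stop iterating, proceed to Part 3
```

When any entry is False, the failing key names the next iteration's focus; if `ttfb_under_5min` stays False across iterations, that is the red flag below (the behavior itself, not the interface, may be too complex).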

Red flag: If TTFB doesn’t improve with iteration, behavior itself may be too complex. Consider simplifying the behavior, not just the interface.


Part 3: Fit Validation (Solution → Market)

Step 3.1: Pilot with Target Segment

Duration: 4–6 weeks
Output: Solution Market Fit data (does the solution enable the behavior?)

What to do:

  1. Segment your users:
    • Segment A: Early adopters (high motivation, best context fit)
    • Segment B: Target mainstream (medium motivation, varied context)
    • Test separately; don’t average
  2. Measure SMF metrics:
    • TTFB: Time to first behavior with new solution
    • Δ-B (behavior change): % point improvement in behavior frequency
    • Effort reduction: User effort score (before vs. after)
    • D30 retention: % of day-1 users performing behavior by day 30
  3. Interpret results:
Metric | Early Adopters | Mainstream | Interpretation
TTFB | 3 min | 7 min | Solution works; mainstream needs more guidance
Δ-B | +45pp | +25pp | Behavior adoption is segment-dependent
D30 Retention | 45% | 20% | Mainstream has lower motivation or context fit
Recommendation | — | — | Consider SMF achieved if early adopters show >50% D30 retention
  4. Watch for context fit failures:
    • Does retention drop at specific times/contexts?
    • Does one segment dramatically underperform?
    • Are there environmental barriers you missed?

Step 3.2: Identify Unmet Needs (Before Scaling)

Duration: 1–2 weeks
Output: List of context barriers, capability gaps

What to do:

  1. Conduct exit interviews with users who abandoned:
    • When did you stop using this? (when did retention drop?)
    • Why did you stop? (context, motivation, friction, or forgot?)
    • What would make you come back? (what’s the missing piece?)
  2. Segment abandonment by reason:
    • Context failures: “I don’t have time” / “Can’t do this on mobile” / “Don’t have access”
    • Motivation failures: “Don’t care about this anymore” / “It’s boring”
    • Friction failures: “It’s too complicated” / “I keep forgetting how to use it”
    • Identity failures: “This isn’t for me” / “I don’t see myself using this”
  3. Prioritize fixes by segment size:
    • If 30% abandon due to context (e.g., time), fix context
    • If 20% abandon due to friction (e.g., steps), fix design
    • If 10% abandon due to motivation, investigate whether behavior fit was wrong
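Prioritizing fixes starts with tallying coded exit-interview responses by failure type. A minimal sketch with hypothetical coded responses:

```python
# Step 3.2: tally abandonment reasons to prioritize fixes.
# The coded responses below are hypothetical.
from collections import Counter

# One coded reason per exit interview: context / motivation / friction / identity
reasons = ["context"] * 12 + ["friction"] * 8 + ["motivation"] * 4 + ["identity"] * 1

counts = Counter(reasons)
total = sum(counts.values())
for reason, n in counts.most_common():
    print(f"{reason}: {n}/{total} ({n/total:.0%})")
```

Here context failures dominate (48%), so per the prioritization rule the context fix comes first; a large motivation share would instead prompt re-examining whether the behavior fit was wrong to begin with.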

Part 4: Scaling Decisions (When to Scale, When to Pivot)

Step 4.1: Check Your Scaling Readiness

Before you scale, validate:

Metric | Threshold | Your Baseline | Status
TTFB completion rate | >40% | ____% | ✓ / ✗
D7 retention | >30% | ____% | ✓ / ✗
D30 retention | >20% | ____% | ✓ / ✗
Users performing behavior 2+ times in week 1 | >50% | ____% | ✓ / ✗
Net Promoter Score (or satisfaction) | >7/10 | ____/10 | ✓ / ✗
No major context barriers in target segment | Yes | Yes / No | ✓ / ✗

Rule of thumb: If you can’t check ✓ on at least 5 of 6, you’re not ready to scale.
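The "5 of 6" rule is a straight gate count. A sketch with hypothetical baseline values plugged into the guide's thresholds:

```python
# Step 4.1 scaling-readiness check: at least 5 of 6 gates must pass.
# Baseline values are hypothetical; thresholds are the guide's heuristics.
gates = {
    "ttfb_completion > 40%":        0.43 > 0.40,
    "d7_retention > 30%":           0.34 > 0.30,
    "d30_retention > 20%":          0.22 > 0.20,
    "2+ behaviors in wk1 > 50%":    0.47 > 0.50,  # fails in this example
    "nps_or_satisfaction > 7/10":   8 > 7,
    "no_major_context_barriers":    True,
}

passed = sum(gates.values())
verdict = "ready to scale" if passed >= 5 else "not ready"
print(f"{passed}/6 gates passed -> {verdict}")
```

One failing gate still clears the rule of thumb, but the failing key (weekly frequency here) is the obvious candidate to shore up before committing scaling budget.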


Step 4.2: Segment-Specific Scaling Strategy

Pattern: Behavior fit varies by segment. Plan accordingly.

Segment 1: Early Adopters (High Motivation, High Context Fit)

  • Characteristics: Intrinsically motivated, behavior fits their context perfectly
  • Retention: Often 50%+ D30
  • Scaling strategy: Focus on activation, referral, organic growth
  • Caution: Don’t assume mainstream will have same experience

Segment 2: Target Mainstream (Medium Motivation, Varied Context Fit)

  • Characteristics: Moderately interested; context varies; needs more guidance
  • Retention: Often 20–40% D30
  • Scaling strategy: Invest in onboarding, context adaptation, guided paths
  • Caution: Focus on removing friction, not adding incentives

Segment 3: Aspirational Market (Low Current Motivation, Variable Context Fit)

  • Characteristics: Might want this behavior, but motivation is low
  • Retention: Often <10% D30
  • Scaling strategy: Don’t scale here until behavior fit is validated
  • Caution: High churn; low LTV; likely money-losing segment at scale

Decision rule:

  • Scale aggressively to segments with >40% D30 retention
  • Scale cautiously to segments with 20–40% D30 retention
  • Don’t scale to segments with <10% D30 retention without re-validating behavior fit

Step 4.3: Watch for Scaling Failures

Red flags that indicate problems:

  1. Retention declines as you scale
    • Signal: Early adopters retained at 50%, but as you expand to mainstream, cohort retention drops to 15%
    • Cause: Behavior fit is segment-dependent; mainstream has lower fit
    • Fix: Either re-select behavior or invest in context adaptation for mainstream
  2. TTFB increases despite UX improvements
    • Signal: Earlier cohorts had TTFB of 3 min; new cohorts stall at 10 min
    • Cause: New users have different context or lower motivation
    • Fix: Investigate why new cohorts are different; adjust onboarding or targeting
  3. Negative word-of-mouth despite high engagement
    • Signal: Users engage initially but don’t recommend; reviews are negative
    • Cause: Behavior fit is actually low; early adoption was curiosity, not value
    • Fix: Re-validate behavior market fit; likely time to pivot
  4. Incentive dependency grows
    • Signal: As growth slows, you increase points/bonuses/rewards to maintain adoption
    • Cause: Intrinsic motivation was never strong; incentives are crowding out whatever motivation existed
    • Fix: Likely behavior fit issue; consider pivoting to higher-fit behavior

Part 5: Decision Framework (Build, Pivot, or Kill)

When to Build (Green Light)

Proceed to building and scaling if:

  • ✓ TTFB completion rate >40% in Behavior Market Fit validation
  • ✓ D30 retention >20% without incentives
  • ✓ Early adopters show 50%+ D30 retention
  • ✓ Target behavior aligns on identity, capability, and context fit (with motivation assessed within context fit)
  • ✓ No major context barriers in your target market
  • ✓ Users can articulate intrinsic motivation for the behavior

Expected outcome: Behavior adoption scales; churn stabilizes; unit economics improve


When to Pivot (Yellow Light)

Consider pivoting if:

  • ⚠ TTFB completion 20–40% (moderate fit, improvable)
  • ⚠ D30 retention 10–20% (weak but not impossible)
  • ⚠ Fit is segment-dependent (works for early adopters, not mainstream)
  • ⚠ One fit dimension is much lower than others (e.g., high identity fit, low context fit)
  • ⚠ Context barriers exist but are addressable

Decision criteria:

  • Can you change the context or environment to improve fit? If yes, pivot to context change
  • Can you simplify the behavior to improve capability fit? If yes, pivot to simpler behavior
  • Can you find a segment with better fit? If yes, pivot to target segment
  • None of the above? Kill.

Examples of successful pivots:

  • Instagram: Pivoted from check-ins (10% fit) to photo sharing (80% fit)
  • Slack: Pivoted from internal tool to team platform (30% fit → 90% fit)
  • YouTube: Expanded from dating videos to user-selected content (20% fit → 85% fit)

When to Kill (Red Light)

Stop and kill the project if:

  • ✗ TTFB completion <10% despite multiple iterations
  • ✗ D30 retention <5% (no matter segment)
  • ✗ Users report low intrinsic motivation; behavior requires external incentives
  • ✗ Context barriers are fundamental and can’t be changed
  • ✗ Identity fit is negative (behavior contradicts user identity)
  • ✗ You’ve iterated 3+ times on behavior or UX with no improvement

Why:

  • These are signals that either:
    • You’ve selected the wrong behavior (structural problem, not execution problem)
    • The market doesn’t want what you’re building
    • You’ve misunderstood user motivation

What to do:

  • Document why this behavior didn’t work
  • Identify the root cause (conceptual, design, context, scaling, or motivation)
  • Use that learning to select a different behavior or target a different market
  • Restart with Step 1: Observe

Examples of projects that should have killed earlier:

  • Google+: Poor adoption despite high spend; should have killed after D30 retention stayed <5%
  • Quibi: High spend but poor retention despite strong production value; should have killed when D7 retention hit <10%

Quick Reference: Checklists

Pre-Launch Checklist

  • Observed users attempting behavior (10+ interviews)
  • Screened candidate behaviors with the Behavior Fit Assessment (all three dimensions ≥6/10)
  • Validated Behavior Market Fit (TTFB completion >20%, D7 retention >30%)
  • Built Minimum Viable Behavior (simplified, <5 steps)
  • Tested TTFB in prototype (achieved 50%+ reduction)
  • Measured Solution Market Fit (retention >20% by D30)
  • Identified context barriers and mitigation
  • Confirmed target segment size (large enough to build on)
  • Planned for segment-specific scaling (early adopters vs. mainstream)

Launch Readiness Checklist

  • TTFB completion rate >40% (or 20–40% if pivoting)
  • D7 retention >30%
  • D30 retention >20% (without incentives)
  • Early adopter segment shows >50% D30 retention
  • No unresolved context barriers in target segment
  • Onboarding optimized for mainstream (not just early adopters)
  • Support/help resources in place for context barriers
  • KPI dashboard tracks TTFB, D7, D30, behavior frequency

Scaling Readiness Checklist

  • Retention remains stable as you scale (no cohort degradation)
  • Early adopter and mainstream segments both >30% D7 retention
  • TTFB remains <5 minutes as you scale
  • New user acquisition cost is sustainable relative to LTV
  • Users perform behavior 2+ times in first week consistently
  • Word-of-mouth / NPS remains positive
  • No “incentive dependency” (behavior sustains without rewards)
  • Infrastructure can handle 5–10x current user load

Closing: The Behavioral Mindset

This guide gives you processes and checklists. But the core insight is mindset:

Behavioral Strategy is about selecting the right behavior first, then optimizing the path to it. It is not about building what seems technically interesting or commercially appealing.

The best competitive advantages come from:

  1. Observing what users actually do (not what they say they want)
  2. Validating fit before building (not building first, validating later)
  3. Simplifying behavior, not just interface (removing friction at the behavior level)
  4. Measuring durable outcomes (retention, frequency, not engagement proxies)
  5. Scaling segment by segment (not averaging across heterogeneous users)

Teams that internalize this mindset consistently outcompete teams that optimize for execution quality in isolation.


Last Updated: 2026-01-31 For questions or additions: jason@thebehavioralscientist.com