Tools & Templates

Why Tools Matter in Behavioral Strategy

These tools are reusable scaffolding for behavior selection, feasibility validation, and measurement. They aim to produce decision-grade evidence (what will people do, in what context, and how will we know?) rather than persuasive narratives.

Note: Where templates include thresholds or example values, treat them as starting heuristics. Set thresholds based on (1) the decision you need to make, (2) the cost of being wrong, and (3) the constraints of the real context.

Available Tools

Diagnostic Toolkit

Interactive decision trees and diagnostic frameworks

Use this to identify the root cause of behavioral problems and decide what to validate next.

Includes:

  • Master diagnostic decision tree for common problems
  • Problem diagnosis tool with severity scoring
  • Behavior selection diagnostic
  • Solution design friction analyzer
  • Organizational readiness assessment

Use When:

  • Starting a new behavioral initiative
  • Troubleshooting failed interventions
  • Assessing team capabilities
  • Planning resource allocation

Essential Templates

Problem Validation Interview Guide

Purpose: Uncover genuine user problems and validate active solution-seeking behavior

## Opening
"I'm researching how people handle recurring bills. There are no right or wrong answers - I'm just trying to understand your experience."

## Core Questions
1. "Tell me about the last time you missed or nearly missed a bill payment"
   - Listen for: Specific examples, emotional reactions, frequency

2. "Walk me through what you did to try to solve it"
   - Listen for: Current solutions, workarounds, effort invested

3. "How much time/money/energy have you spent on solutions?"
   - Listen for: Quantifiable investment, resource allocation

4. "What other approaches have you tried?"
   - Listen for: Solution history, what didn't work

5. "What would change if this problem were solved?"
   - Listen for: Value of solution, downstream impacts

6. "On a scale of 1-10, how painful is this problem?"
   - Follow up (adjust to their rating): "What makes it a 7 and not a 5?"

## Closing
"If I could wave a magic wand and solve one aspect of this problem, what would be most valuable to you?"

Behavior Observation Protocol

Purpose: Document actual behaviors in natural context

## Pre-Observation Setup
- Observer: J. Lee
- Date/Time: Feb 2, 2026, 10:00 AM
- Location: Mobile banking app (iOS)
- Subject: Participant 12

## Context Documentation
- Physical Environment: Kitchen table, laptop + phone, time pressure before work
- Social Environment: Roommate nearby, occasional interruptions
- Temporal Context: Morning on a weekday
- Emotional State: Focused but mildly stressed

## Behavior Tracking
For each behavior observed:
1. Trigger: What initiated the behavior?
2. Action: What exactly did they do? (Be specific)
3. Duration: How long did it take?
4. Friction: Where did they struggle or pause?
5. Workarounds: How did they adapt to obstacles?
6. Outcome: What was the result?
7. Reaction: How did they respond to the outcome?

## Pattern Recognition
- Repeated behaviors: Checks balance before paying, screenshots confirmation screens
- Abandoned behaviors: Started auto‑pay setup but exited at verification step
- Surprising observations: Uses calendar reminders as a backup system

## Insights
- Key barriers to desired behavior: Verification friction and unclear payment timing
- Environmental factors influencing behavior: Time pressure, multi‑device switching
- Opportunities for intervention: Pre‑filled enrollment and clearer due‑date cues
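
If you log observations digitally, a consistent record structure makes patterns easier to compare across sessions. Below is a minimal sketch in Python, assuming a simple dataclass is enough for your team; the field names mirror the Behavior Tracking list above, and the filled-in values are illustrative (loosely based on the example session), not data from the protocol itself.

# Sketch: one structured record per observed behavior (illustrative schema).
from dataclasses import dataclass, asdict

@dataclass
class BehaviorObservation:
    participant_id: str
    trigger: str              # What initiated the behavior?
    action: str               # What exactly did they do?
    duration_seconds: float   # How long did it take?
    friction: str             # Where did they struggle or pause?
    workarounds: str          # How did they adapt to obstacles?
    outcome: str              # What was the result?
    reaction: str             # How did they respond to the outcome?

# Illustrative record, loosely based on the filled-in example above
obs = BehaviorObservation(
    participant_id="P12",
    trigger="Opened app to pay a bill before work",
    action="Started auto-pay setup, exited at the verification step",
    duration_seconds=140,
    friction="Verification step; unclear payment timing",
    workarounds="Set a calendar reminder as a backup",
    outcome="Enrollment abandoned; paid manually",
    reaction="Screenshotted the confirmation screen after paying",
)
print(asdict(obs))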

Behavioral KPI Dashboard Template

Purpose: Track behavioral metrics systematically

# Behavioral KPI Dashboard Configuration

## Real-Time Metrics (Updated every minute)
real_time:
  active_behaviors:
    definition: "Users currently performing target behavior"
    calculation: "COUNT(active_sessions.behavior_in_progress)"
    threshold_good: 100
    threshold_great: 500
    
  completion_rate:
    definition: "Rolling 60-min behavior completion rate"
    calculation: "completed_behaviors / started_behaviors"
    threshold_good: 0.7
    threshold_great: 0.85

## Daily Metrics
daily:
  unique_performers:
    definition: "Unique users performing behavior today"
    calculation: "COUNT(DISTINCT user_id WHERE behavior_completed)"
    comparison: "vs_yesterday, vs_last_week"
    
  avg_behaviors_per_user:
    definition: "Average behavior frequency"
    calculation: "total_behaviors / unique_performers"
    threshold_good: 2.5
    threshold_great: 4.0
    
  first_behavior_time:
    definition: "Median time to first behavior (new users)"
    calculation: "MEDIAN(first_behavior_timestamp - signup_timestamp)"
    threshold_good: "< 5 minutes"
    threshold_great: "< 2 minutes"

## Weekly Cohort Metrics  
weekly:
  retention_curve:
    definition: "% of cohort still active by day"
    days: [1, 3, 7, 14, 30, 60, 90]
    visualization: "line_chart"
    
  behavior_quality:
    definition: "Completeness of behavior performance"
    calculation: "quality_score_sum / total_behaviors"
    threshold_good: 0.8
    threshold_great: 0.95

## Monthly Strategic Metrics
monthly:
  behavior_evolution:
    definition: "How behaviors change over time"
    metrics:
      - complexity_increase
      - time_to_complete_trend
      - error_rate_trend
      
  segment_analysis:
    definition: "Performance by user segment"
    segments: ["new_users", "power_users", "at_risk"]
    metrics: ["adoption", "frequency", "quality"]

Behavior Mapping Canvas

Purpose: Visualize the complete behavior journey

┌─────────────────────────────────────────────────────────────┐
│ BEHAVIOR MAPPING CANVAS                                      │
├─────────────────────────┬───────────────────────────────────┤
│ Current State           │ Desired State                     │
│ - What users do now     │ - Target behavior                 │
│ - Frequency             │ - Target frequency                │
│ - Context               │ - Ideal context                   │
├─────────────────────────┼───────────────────────────────────┤
│ Behavioral Journey      │ Intervention Points               │
│ 1. Trigger →            │ Where can we intervene?           │
│ 2. Motivation →         │ - Before trigger                  │
│ 3. Ability →            │ - During decision                 │
│ 4. Action →             │ - During action                   │
│ 5. Result               │ - After completion                │
├─────────────────────────┼───────────────────────────────────┤
│ Barriers                │ Enablers                          │
│ - Ability gaps          │ - Existing motivations            │
│ - Motivation conflicts  │ - Environmental supports          │
│ - Environmental blocks  │ - Social influences               │
├─────────────────────────┴───────────────────────────────────┤
│ Metrics & Success Criteria                                   │
│ - Adoption: 58%         - Quality: 90% complete             │
│ - Frequency: 3x/week    - Retention: 42% D30                │
└─────────────────────────────────────────────────────────────┘

Measurement Tools

A/B Test Planning Template

For behavior-focused experimentation

experiment_name: "Auto‑pay default on enrollment"
hypothesis: "If we preselect auto‑pay for verified accounts, then on‑time payment rate will increase because friction is removed."

control:
  description: "Current state"
  expected_behavior_rate: 0.42
  
variants:
  variant_a:
    description: "What changes"
    expected_lift: 0.08
    
sample_size_calculation:
  baseline_rate: 0.42
  minimum_detectable_effect: 0.05
  power: 0.8
  significance: 0.05
  required_n: 2400

behavioral_metrics:
  primary:
    - metric: "behavior_completion_rate"
    - measurement: "users_completed / users_exposed"
    
  secondary:
    - quality_score
    - time_to_complete
    - repeat_rate
    
  guardrails:
    - user_satisfaction
    - support_tickets
    - error_rate
    
analysis_plan:
  - daily_monitoring
  - weekly_significance_check
  - cohort_analysis
  - segment_breakdown
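
The required_n above is an example value; the number you need depends on the baseline rate, the minimum detectable effect, and whether the test is one- or two-sided. As a sketch, assuming an absolute MDE and a two-sided z-test for two proportions, the standard approximation looks like this (scipy is used only for the normal quantile):

# Sketch: per-arm sample size for detecting an absolute lift in a proportion,
# using the standard two-proportion z-test approximation (two-sided).
from scipy.stats import norm

def sample_size_per_arm(baseline, mde, power=0.8, alpha=0.05):
    """Approximate users per arm to detect baseline -> baseline + mde."""
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # power requirement
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(round(numerator / mde ** 2))

# With the template's example inputs (baseline 0.42, absolute MDE 0.05):
print(sample_size_per_arm(0.42, 0.05))   # ~1,550 users per arm

If you state the MDE as a relative lift, convert it to an absolute difference first; that choice alone can change the required sample size severalfold.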

Behavioral ROI Calculator

Purpose: Quantify the value of behavior change

Behavioral ROI = (Behavioral Value - Implementation Cost) / Implementation Cost

Where:
- Behavioral Value = (Value per Behavior × Behavior Frequency × Active Users × Time Period)
- Implementation Cost = (Research + Design + Development + Maintenance)

Example Calculation:
- Value per behavior: $10 (e.g., each workout session)
- Behavior frequency: 3x/week
- Active users: 10,000
- Time period: 52 weeks
- Annual behavioral value: $10 × 3 × 10,000 × 52 = $15,600,000

- Implementation cost: $500,000
- ROI: ($15,600,000 - $500,000) / $500,000 = 30.2 (3,020%)
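
The same arithmetic as a small function (a sketch; the inputs are the example values above, and real value-per-behavior estimates deserve their own validation):

# Sketch: behavioral ROI from the definitions above.
def behavioral_roi(value_per_behavior, frequency_per_week, active_users,
                   weeks, implementation_cost):
    behavioral_value = value_per_behavior * frequency_per_week * active_users * weeks
    return (behavioral_value - implementation_cost) / implementation_cost

# Example values from the calculation above
roi = behavioral_roi(10, 3, 10_000, 52, 500_000)
print(f"ROI: {roi:.1f}x ({roi:.0%})")   # ROI: 30.2x (3020%)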

Process Checklists

Four-Fit Validation Checklist

Problem Market Fit

  • Problem statement is specific and observable (not just attitudinal)
  • Evidence of active solution seeking (workarounds, spend, time, behavior)
  • Current workarounds documented
  • Constraints and incentives documented (who bears the cost, who benefits)
  • Willingness to pay or resource allocation validated (if relevant)

Behavior Market Fit

  • Target behavior is operationally defined (denominator + window + context)
  • Behavior is observed or instrumented in realistic context (not self-report only)
  • Capability constraints are documented (skills, time, attention, resources)
  • Context constraints are documented (tools, defaults, social norms, environment)
  • Measurement spec exists (Δ‑B, time-to-first-behavior, and retention definitions); see the sketch after this list
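
One way to write such a spec, sketched in Python with illustrative field names and the bill-payment example used elsewhere in this section (not a required format):

# Sketch: operational behavior definition plus measurement spec (illustrative).
target_behavior_spec = {
    "behavior": "pays a recurring bill via auto-pay",
    "denominator": "verified accounts with at least one recurring bill",
    "window": "within 14 days of enrollment",
    "context": "mobile app, unassisted",
    "metrics": {
        "delta_b": "change in on-time payment rate vs. pre-enrollment baseline",
        "time_to_first_behavior": "median days from signup to first auto-paid bill",
        "retention": "share of enrolled accounts still on auto-pay at day 90",
    },
}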

Solution Market Fit

  • Behavior-to-feature mapping complete
  • Prototype enables the target behavior in a realistic scenario
  • Most users can complete the behavior without assistance (or with defined supports)
  • Friction points are identified, ranked, and addressed
  • Failure modes are documented (where people drop, stall, or workaround)

Product Market Fit

  • Behavioral KPIs defined and tracked
  • Retention is measured with the right denominator and window (and segmented by behavior)
  • Evidence of sustained behavior exists in steady-state conditions (not only in forced contexts)
  • Unit economics are evaluated if the intervention is productized
  • Behavior quality is maintained (avoid shifting to low-quality proxies)

