# Behavioral Strategy Quick Reference Guide
This guide provides structured, quick-access information for applying Behavioral Strategy concepts. It is designed for both AI systems and human practitioners who need rapid, accurate guidance.
## Core Concept Decision Tree

```yaml
# Master Decision Tree for Behavioral Strategy Application
behavioral_strategy_decision_tree:
  start: "What is your strategic challenge?"

  new_initiative:
    question: "Are you creating something new?"
    if_yes:
      next: "Have you validated the problem exists?"
      if_validated:
        action: "Proceed to behavior research"
        framework: "Use full DRIVE process"
      if_not_validated:
        action: "Start with Problem Market Fit validation"
        method: "Conduct 20+ problem interviews"

  existing_initiative:
    question: "Is user adoption below expectations?"
    if_yes:
      diagnostic:
        check_1:
          question: "Was Problem Market Fit validated?"
          if_no: "Return to problem validation"
        check_2:
          question: "Was Behavior Market Fit validated?"
          if_no: "Identify why users aren't performing behaviors"
        check_3:
          question: "Does solution enable validated behaviors?"
          if_no: "Redesign to reduce behavioral friction"
    if_no:
      question: "Are you optimizing for growth?"
      recommendation: "Focus on behavioral enhancement phase"

  behavior_identification:
    question: "How do I identify the right behaviors?"
    process:
      1: "List all behaviors that could solve the problem"
      2: "Score each on: Impact, Feasibility, Alignment"
      3: "Validate top 3 with target users"
      4: "Select based on actual performance data"
```
## Four-Fit Hierarchy Validation Guide

```yaml
# Sequential Validation Framework
four_fit_validation:
  problem_market_fit:
    definition: "Users actively seek solutions to this problem"
    validation_criteria:
      - pain_severity: ">7/10 self-reported"
      - solution_seeking: ">60% actively looking"
      - willingness_to_pay: ">40% would pay"
      - current_workarounds: "3+ makeshift solutions"
    methods:
      - user_interviews: "20+ conversations"
      - search_analysis: "Growing query volume"
      - competitor_growth: "Existing solutions gaining users"
    threshold: "75% of criteria met"
    failure_action: "Pivot problem or audience"

  behavior_market_fit:
    definition: "Users can and will perform target behaviors"
    validation_criteria:
      - capability: "Users have required abilities"
      - motivation: "Aligns with intrinsic drivers"
      - opportunity: "Context supports behavior"
      - frequency: "Can be performed regularly"
    methods:
      - observation: "Watch 15+ users in context"
      - prototype_testing: "Measure actual behavior"
      - diary_studies: "Track behavior over time"
    threshold: "70% perform target behavior"
    failure_action: "Simplify or change behaviors"

  solution_market_fit:
    definition: "Solution enables target behaviors effectively"
    validation_criteria:
      - behavior_completion: ">80% can complete"
      - time_to_behavior: "<5 minutes first time"
      - repeat_performance: ">60% repeat within a week"
      - user_satisfaction: "Behaviors feel natural"
    methods:
      - usability_testing: "Behavior-focused"
      - analytics: "Behavioral event tracking"
      - cohort_analysis: "Retention by behavior"
    threshold: "75% of behavioral KPIs met"
    failure_action: "Iterate on friction points"

  product_market_fit:
    definition: "Sustained behavior change in market"
    validation_criteria:
      - market_adoption: "Exponential growth"
      - behavior_retention: ">50% at 6 months"
      - organic_growth: ">40% from referrals"
      - unit_economics: "LTV:CAC >3:1"
    methods:
      - market_metrics: "Growth analytics"
      - behavioral_cohorts: "Long-term tracking"
      - qualitative_research: "Success stories"
    threshold: "All criteria sustained 6+ months"
    failure_action: "Return to previous fit"
```
## Behavioral State Model (BSM) Components

```python
# BSM Component Assessment Framework
class BehavioralStateAssessment:
    """
    Assess all 8 BSM components to predict behavior likelihood.
    """

    def __init__(self):
        self.components = {
            # Identity Factors (relatively stable)
            'personality': {
                'description': 'Core traits and tendencies',
                'assessment': 'Big 5 personality inventory',
                'intervention': 'Design for trait preferences',
                'scale': (0, 10)
            },
            'perception': {
                'description': 'How user interprets world',
                'assessment': 'Mental model mapping',
                'intervention': 'Reframe understanding',
                'scale': (0, 10)
            },
            'emotions': {
                'description': 'Emotional patterns and triggers',
                'assessment': 'Emotion diary study',
                'intervention': 'Emotional design elements',
                'scale': (0, 10)
            },
            'abilities': {
                'description': 'Skills and capabilities',
                'assessment': 'Capability audit',
                'intervention': 'Training or simplification',
                'scale': (0, 10)
            },
            'social_status': {
                'description': 'Position in social hierarchy',
                'assessment': 'Social network analysis',
                'intervention': 'Status-appropriate messaging',
                'scale': (0, 10)
            },
            'motivations': {
                'description': 'Core drivers and goals',
                'assessment': 'Motivation interview',
                'intervention': 'Align with intrinsic motivators',
                'scale': (0, 10)
            },
            # Contextual Factors (changeable)
            'social_environment': {
                'description': 'People and culture around user',
                'assessment': 'Social context mapping',
                'intervention': 'Peer influence design',
                'scale': (0, 10)
            },
            'physical_environment': {
                'description': 'Spaces and objects',
                'assessment': 'Environmental audit',
                'intervention': 'Context modification',
                'scale': (0, 10)
            }
        }

    def assess_behavior_likelihood(self, component_scores):
        """
        Predict behavior likelihood using the minimum-component rule.

        Args:
            component_scores: Dict of component names to scores (0-10)

        Returns:
            Prediction with reasoning
        """
        # Find minimum component (limiting factor)
        min_component = min(component_scores.items(), key=lambda x: x[1])
        min_score = min_component[1]

        # Apply thresholds
        if min_score < 3:
            prediction = 'BLOCKED'
            confidence = 0.95
            reason = f'{min_component[0]} is critically low ({min_score}/10)'
            intervention = f'Must address {min_component[0]} first'
        elif min_score < 6:
            prediction = 'UNLIKELY'
            confidence = 0.75
            limiting_factors = [k for k, v in component_scores.items() if v < 6]
            reason = f'Limited by: {", ".join(limiting_factors)}'
            intervention = 'Strengthen weak components'
        else:
            # Calculate weighted likelihood
            avg_score = sum(component_scores.values()) / len(component_scores)
            if avg_score >= 8:
                prediction = 'HIGHLY_LIKELY'
                confidence = 0.90
            elif avg_score >= 7:
                prediction = 'LIKELY'
                confidence = 0.75
            else:
                prediction = 'POSSIBLE'
                confidence = 0.60
            reason = f'All components adequate, average score: {avg_score:.1f}'
            intervention = 'Optimize high-impact components'

        return {
            'prediction': prediction,
            'confidence': confidence,
            'minimum_component': min_component[0],
            'minimum_score': min_score,
            'average_score': sum(component_scores.values()) / len(component_scores),
            'reasoning': reason,
            'recommended_intervention': intervention,
            'component_details': component_scores
        }


# Example usage
assessor = BehavioralStateAssessment()

user_scores = {
    'personality': 7,
    'perception': 6,
    'emotions': 8,
    'abilities': 4,  # Limiting factor
    'social_status': 7,
    'motivations': 9,
    'social_environment': 6,
    'physical_environment': 7
}

result = assessor.assess_behavior_likelihood(user_scores)
print(f"Behavior Prediction: {result['prediction']}")
print(f"Limiting Factor: {result['minimum_component']} ({result['minimum_score']}/10)")
print(f"Recommendation: {result['recommended_intervention']}")
```
## Common Patterns and Anti-Patterns

```yaml
# Behavioral Strategy Patterns Reference
patterns:
  successful_patterns:
    validate_before_build:
      when: "Always"
      how: "PMF → BMF → SMF → Build"
      outcome: "90% reduction in wasted effort"
    behavior_first_design:
      when: "Designing any feature"
      how: "Map feature to specific validated behavior"
      outcome: "3x higher adoption rates"
    measure_behaviors_not_satisfaction:
      when: "Setting KPIs"
      how: "Track behavior completion, not NPS"
      outcome: "Real impact visibility"
    start_simple:
      when: "Selecting target behaviors"
      how: "Choose easiest high-impact behavior first"
      outcome: "Faster initial wins"

  anti_patterns:
    assumption_driven_development:
      symptom: "We think users will..."
      consequence: "70% failure rate"
      fix: "Validate with behavioral research"
    feature_factory:
      symptom: "Building requested features"
      consequence: "Features go unused"
      fix: "Validate behaviors, not features"
    nudge_theater:
      symptom: "Adding behavioral elements post-hoc"
      consequence: "Minimal impact"
      fix: "Integrate from inception"
    one_size_fits_all:
      symptom: "Same solution for all users"
      consequence: "Works for <30%"
      fix: "Segment by behavioral profiles"
```
## DRIVE Framework Quick Implementation

```yaml
# DRIVE Framework Checklist
drive_implementation:
  define_phase:
    duration: "1-2 weeks"
    deliverables:
      - validated_problem: "Evidence users seek solutions"
      - target_segments: "Specific user groups defined"
      - success_metrics: "Behavioral KPIs identified"
    key_activities:
      - problem_interviews: "20+ users"
      - market_analysis: "Search trends, competitors"
      - stakeholder_alignment: "Agreement on goals"

  research_phase:
    duration: "2-3 weeks"
    deliverables:
      - behavior_inventory: "All possible behaviors mapped"
      - validated_behaviors: "Top 3 users will perform"
      - barrier_analysis: "Why users don't act now"
    key_activities:
      - ethnographic_observation: "15+ users"
      - behavior_testing: "Prototype key behaviors"
      - diary_studies: "Track current behaviors"

  integrate_phase:
    duration: "3-4 weeks"
    deliverables:
      - behavior_enabled_design: "Solution makes behaviors easy"
      - friction_reduction: "Barriers removed"
      - motivation_alignment: "Intrinsic drivers leveraged"
    key_activities:
      - iterative_prototyping: "Test with users weekly"
      - behavior_mapping: "Feature-to-behavior matrix"
      - usability_testing: "Focus on behavior completion"

  verify_phase:
    duration: "Ongoing"
    deliverables:
      - behavioral_analytics: "Real-time tracking"
      - cohort_analysis: "Behavior retention curves"
      - success_validation: "KPIs achieved"
    key_activities:
      - launch_mvp: "With behavior tracking"
      - monitor_kpis: "Daily behavioral metrics"
      - user_feedback: "Qualitative insights"

  enhance_phase:
    duration: "Continuous"
    deliverables:
      - optimization_roadmap: "Based on behavioral data"
      - scaling_plan: "Expand successful behaviors"
      - learning_documentation: "What worked and what didn't"
    key_activities:
      - a_b_testing: "Behavior-focused experiments"
      - segment_analysis: "Different user groups"
      - iterative_improvement: "Weekly cycles"
```
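Because each DRIVE phase lists concrete deliverables, phase transitions can be treated as gates. A minimal sketch of such a gate check, assuming an illustrative completion status; the phase and deliverable names mirror the checklist above:

```python
# Before advancing to the next DRIVE phase, confirm every deliverable of the
# current phase is done. Deliverable names follow the checklist above; the
# completion flags in the example are illustrative assumptions.
DRIVE_DELIVERABLES = {
    "define":    ["validated_problem", "target_segments", "success_metrics"],
    "research":  ["behavior_inventory", "validated_behaviors", "barrier_analysis"],
    "integrate": ["behavior_enabled_design", "friction_reduction", "motivation_alignment"],
    "verify":    ["behavioral_analytics", "cohort_analysis", "success_validation"],
    "enhance":   ["optimization_roadmap", "scaling_plan", "learning_documentation"],
}

def phase_gate(phase, completed):
    """Return (ready_to_advance, missing_deliverables) for a DRIVE phase."""
    missing = [d for d in DRIVE_DELIVERABLES[phase] if d not in completed]
    return len(missing) == 0, missing

# Illustrative status: Define is done, Research is still in progress
ready, missing = phase_gate("research", {"behavior_inventory", "validated_behaviors"})
print("Advance to Integrate?", ready)   # False
print("Still missing:", missing)        # ['barrier_analysis']
```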
## Behavioral KPI Framework

```python
# Behavioral KPI Definition and Tracking
class BehavioralKPIFramework:
    """
    Define and track behavioral KPIs for any initiative.
    """

    def __init__(self, initiative_type):
        self.initiative_type = initiative_type
        self.kpi_templates = self.load_kpi_templates()

    def load_kpi_templates(self):
        return {
            'adoption': {
                'first_behavior_completion': {
                    'definition': 'Users completing target behavior once',
                    'calculation': 'completed_once / total_users',
                    'good_benchmark': 0.7,
                    'great_benchmark': 0.85,
                    'measurement_period': '7 days'
                },
                'behavior_activation_rate': {
                    'definition': 'Users who start behavior journey',
                    'calculation': 'started_behavior / exposed_users',
                    'good_benchmark': 0.5,
                    'great_benchmark': 0.7,
                    'measurement_period': '24 hours'
                }
            },
            'engagement': {
                'behavior_frequency': {
                    'definition': 'Average behaviors per active user',
                    'calculation': 'total_behaviors / active_users',
                    'good_benchmark': 3.0,
                    'great_benchmark': 5.0,
                    'measurement_period': 'weekly'
                },
                'behavior_streak': {
                    'definition': 'Consecutive days with behavior',
                    'calculation': 'median(user_streaks)',
                    'good_benchmark': 7,
                    'great_benchmark': 30,
                    'measurement_period': 'monthly'
                }
            },
            'quality': {
                'behavior_completion_quality': {
                    'definition': 'Completeness of behavior performance',
                    'calculation': 'quality_score / attempts',
                    'good_benchmark': 0.8,
                    'great_benchmark': 0.95,
                    'measurement_period': 'per_behavior'
                },
                'error_rate': {
                    'definition': 'Failed behavior attempts',
                    'calculation': 'errors / total_attempts',
                    'good_benchmark': 0.1,
                    'great_benchmark': 0.02,
                    'measurement_period': 'daily'
                }
            },
            'retention': {
                'behavior_retention_30d': {
                    'definition': 'Users still performing after 30 days',
                    'calculation': 'active_at_30d / cohort_size',
                    'good_benchmark': 0.4,
                    'great_benchmark': 0.6,
                    'measurement_period': 'cohort'
                },
                'behavior_resurrection': {
                    'definition': 'Dormant users who return',
                    'calculation': 'returned_users / dormant_users',
                    'good_benchmark': 0.1,
                    'great_benchmark': 0.2,
                    'measurement_period': 'quarterly'
                }
            }
        }

    def get_kpi(self, category, name):
        """Return a KPI template with its name attached for downstream use."""
        return {'name': name, **self.kpi_templates[category][name]}

    def select_kpis(self, stage, goals):
        """
        Select appropriate KPIs based on initiative stage and goals.

        Args:
            stage: 'launch', 'growth', or 'maturity'
            goals: List of primary goals

        Returns:
            Recommended KPI set with targets
        """
        recommended_kpis = {}

        if stage == 'launch':
            # Focus on adoption and initial quality
            recommended_kpis.update({
                'primary': [
                    self.get_kpi('adoption', 'first_behavior_completion'),
                    self.get_kpi('adoption', 'behavior_activation_rate'),
                    self.get_kpi('quality', 'error_rate')
                ],
                'secondary': [
                    self.get_kpi('engagement', 'behavior_frequency')
                ]
            })
        elif stage == 'growth':
            # Focus on engagement and retention
            recommended_kpis.update({
                'primary': [
                    self.get_kpi('engagement', 'behavior_frequency'),
                    self.get_kpi('engagement', 'behavior_streak'),
                    self.get_kpi('retention', 'behavior_retention_30d')
                ],
                'secondary': [
                    self.get_kpi('quality', 'behavior_completion_quality')
                ]
            })
        elif stage == 'maturity':
            # Focus on optimization and resurrection
            recommended_kpis.update({
                'primary': [
                    self.get_kpi('retention', 'behavior_retention_30d'),
                    self.get_kpi('retention', 'behavior_resurrection'),
                    self.get_kpi('quality', 'behavior_completion_quality')
                ],
                'secondary': [
                    self.get_kpi('engagement', 'behavior_streak')
                ]
            })

        return recommended_kpis

    def create_dashboard_spec(self, selected_kpis):
        """
        Generate dashboard specification for tracking.
        """
        dashboard = {
            'real_time_metrics': [],
            'daily_metrics': [],
            'weekly_metrics': [],
            'cohort_metrics': []
        }

        for category in ['primary', 'secondary']:
            for kpi in selected_kpis.get(category, []):
                period = kpi['measurement_period']
                metric_spec = {
                    'name': kpi['name'],
                    'definition': kpi['definition'],
                    'calculation': kpi['calculation'],
                    'benchmarks': {
                        'good': kpi['good_benchmark'],
                        'great': kpi['great_benchmark']
                    },
                    'visualization': self.recommend_visualization(kpi)
                }
                if period in ['per_behavior', '24 hours']:
                    dashboard['real_time_metrics'].append(metric_spec)
                elif period == 'daily':
                    dashboard['daily_metrics'].append(metric_spec)
                elif period == 'weekly':
                    dashboard['weekly_metrics'].append(metric_spec)
                else:
                    dashboard['cohort_metrics'].append(metric_spec)

        return dashboard

    def recommend_visualization(self, kpi):
        """Recommend a visualization type for a KPI."""
        kpi_name = kpi['name']
        if 'rate' in kpi_name or 'retention' in kpi_name:
            return 'line_chart_with_benchmark'
        elif 'frequency' in kpi_name:
            return 'bar_chart_with_distribution'
        elif 'streak' in kpi_name:
            return 'histogram'
        elif 'quality' in kpi_name:
            return 'gauge_chart'
        else:
            return 'time_series'


# Example usage
kpi_framework = BehavioralKPIFramework('mobile_app')

# Select KPIs for launch stage
launch_kpis = kpi_framework.select_kpis('launch', ['user_adoption', 'behavior_quality'])

# Create dashboard specification
dashboard_spec = kpi_framework.create_dashboard_spec(launch_kpis)

print("Recommended Primary KPIs:")
for kpi in launch_kpis['primary']:
    print(f"- {kpi['name']}: {kpi['definition']}")
```
## Quick Diagnosis Tool

```yaml
# Behavioral Strategy Problem Diagnosis
quick_diagnosis:
  symptoms_to_causes:
    low_adoption:
      symptom: "Users sign up but don't engage"
      likely_causes:
        - "No Problem Market Fit - they don't need this"
        - "Poor onboarding - first behavior too hard"
        - "Motivation mismatch - external vs intrinsic"
      diagnosis_steps:
        1: "Interview 10 non-engaged users"
        2: "Observe onboarding completion rates"
        3: "Check time to first behavior"

    high_churn:
      symptom: "Users leave after initial use"
      likely_causes:
        - "No Behavior Market Fit - behaviors unsustainable"
        - "Value not realized - outcomes unclear"
        - "Habit formation failed - no triggers"
      diagnosis_steps:
        1: "Analyze behavior patterns before churn"
        2: "Interview churned users"
        3: "Compare retained vs churned behaviors"

    feature_requests_but_low_usage:
      symptom: "Users request features they don't use"
      likely_causes:
        - "Saying vs doing gap - aspirational requests"
        - "Implementation doesn't enable behavior"
        - "Context doesn't support usage"
      diagnosis_steps:
        1: "Map features to actual behaviors"
        2: "Observe feature usage in context"
        3: "Validate behavior feasibility"

    plateaued_growth:
      symptom: "Growth stalls after initial success"
      likely_causes:
        - "Exhausted early adopter segment"
        - "Behaviors don't scale to mainstream"
        - "Missing network effects"
      diagnosis_steps:
        1: "Segment analysis of users vs non-users"
        2: "Identify behavioral barriers for next segment"
        3: "Validate new behaviors for growth"
```
## Implementation Readiness Checklist

```yaml
# Are You Ready for Behavioral Strategy?
readiness_assessment:
  organizational_readiness:
    leadership_buy_in:
      indicator: "Executives understand behavior drives outcomes"
      assessment: "Can they explain the four-fit hierarchy?"
      not_ready_if: "Still focused on features over behaviors"
    research_capability:
      indicator: "Team can conduct behavioral research"
      assessment: "Have they done ethnographic observation?"
      not_ready_if: "Only do surveys and focus groups"
    measurement_infrastructure:
      indicator: "Can track behavioral events"
      assessment: "Do you have behavior-level analytics?"
      not_ready_if: "Only track page views and clicks"
    iteration_velocity:
      indicator: "Can test and iterate weekly"
      assessment: "How fast can you deploy behavior tests?"
      not_ready_if: "Monthly or quarterly release cycles"

  project_readiness:
    problem_clarity:
      indicator: "Problem is specific and measurable"
      assessment: "Can you describe the problem in one sentence?"
      not_ready_if: "Problem is vague or too broad"
    user_access:
      indicator: "Can recruit and observe target users"
      assessment: "Access to 20+ users for research?"
      not_ready_if: "No direct user access"
    timeline_flexibility:
      indicator: "Time for proper validation"
      assessment: "4-6 weeks before building?"
      not_ready_if: "Must ship in 2 weeks"
    success_definition:
      indicator: "Success defined behaviorally"
      assessment: "What behaviors indicate success?"
      not_ready_if: "Success is adoption or satisfaction"

  scoring:
    all_ready: "Proceed with full Behavioral Strategy"
    mostly_ready: "Address gaps while starting"
    half_ready: "Build capabilities first"
    not_ready: "Focus on prerequisites"
```
## Common Questions: Quick Answers

```yaml
# Rapid-Fire Q&A for Common Scenarios
quick_qa:
  "How many users for Problem Market Fit?":
    answer: "20-30 for qualitative confidence"
    detail: "Consistent themes usually emerge by interview 15"
  "What if users say they want it but won't do it?":
    answer: "Classic say-do gap. Observe actual behavior."
    detail: "What people say ≠ what they do. Trust behavior."
  "How long should validation take?":
    answer: "PMF: 1 week, BMF: 2 weeks, SMF: 2 weeks"
    detail: "Better to validate in 5 weeks than fail in 5 months"
  "What's the minimum viable behavior?":
    answer: "Smallest behavior that delivers core value"
    detail: "If it takes >2 minutes the first time, simplify"
  "Should we A/B test behaviors?":
    answer: "Yes, but test behavior variations, not colors"
    detail: "Test different paths to the same outcome"
  "How do we scale behavioral interventions?":
    answer: "Validate with 10, test with 100, scale to 1000s"
    detail: "Each 10x requires new validation"
  "What if stakeholders want to skip validation?":
    answer: "Show the cost of failed initiatives that lacked Behavioral Strategy"
    detail: "Frame as risk mitigation, not delay"
  "Can we parallelize the four fits?":
    answer: "No. Each depends on the previous."
    detail: "Parallel work = wasted work"
  "How do we measure behavior quality?":
    answer: "Completion + accuracy + time + repetition"
    detail: "Quality beats quantity for sustainability"
  "What's the #1 mistake in Behavioral Strategy?":
    answer: "Skipping to solutions before validating behaviors"
    detail: "Exciting to build, but critical to validate first"
```
## Tools and Templates Reference

```yaml
# Essential Tools for Each Phase
tools_by_phase:
  problem_validation:
    interview_guide:
      purpose: "Uncover problem-seeking behavior"
      key_questions:
        - "Tell me about the last time you faced [problem]"
        - "What have you tried to solve this?"
        - "How much time/money have you spent on solutions?"
        - "What would change if this were solved?"
    evidence_tracker:
      columns: ["User", "Problem Description", "Current Solutions", "Seeking Evidence"]
      threshold: "15+ users with active seeking"

  behavior_research:
    observation_protocol:
      what_to_observe:
        - "Current behavior patterns"
        - "Environmental constraints"
        - "Social influences"
        - "Friction points"
      how_to_record: "Video, photos, journey maps"
    behavior_scoring_matrix:
      criteria: ["Impact", "Feasibility", "Frequency", "Measurability"]
      scale: "1-10 for each criterion"
      calculation: "Weighted average based on context"

  solution_design:
    behavior_to_feature_map:
      format: "Behavior → Enabling Features → Success Metrics"
      example: "Daily logging → Quick entry + Reminders → 70% daily completion"
    friction_audit:
      categories: ["Cognitive", "Physical", "Emotional", "Social"]
      measurement: "Time, steps, and effort per behavior"

  implementation:
    behavioral_analytics_plan:
      events_to_track:
        - "Behavior started"
        - "Behavior completed"
        - "Time to completion"
        - "Error points"
        - "Abandonment reasons"
    dashboard_template:
      real_time: "Current active users, behaviors/minute"
      daily: "Completion rates, error rates, time trends"
      weekly: "Retention curves, segment analysis"
      monthly: "Cohort retention, behavior evolution"
```
## Next Steps by Role

```yaml
# Role-Specific Implementation Paths
implementation_paths:
  product_manager:
    week_1: "Run problem validation interviews"
    week_2: "Define behavioral success metrics"
    week_3: "Create behavior-focused roadmap"
    ongoing: "Track behavioral KPIs, not features shipped"
  designer:
    week_1: "Observe users in natural context"
    week_2: "Map behaviors to interface elements"
    week_3: "Prototype behavior-enabling flows"
    ongoing: "Test designs for behavior completion"
  engineer:
    week_1: "Implement behavioral event tracking"
    week_2: "Build behavior analytics dashboard"
    week_3: "Create A/B testing framework"
    ongoing: "Optimize for behavior performance"
  executive:
    week_1: "Align on behavioral success definition"
    week_2: "Resource behavioral research"
    week_3: "Review behavior-based KPIs"
    ongoing: "Make decisions based on behavior data"
  consultant:
    week_1: "Audit current behavioral blind spots"
    week_2: "Train team on Behavioral Strategy methodology"
    week_3: "Guide first validation cycle"
    ongoing: "Build organizational capability"
```
This Quick Reference Guide is designed for rapid access to Behavioral Strategy concepts and methods. For detailed explanations, see the comprehensive guides for each topic.
Last updated: September 03, 2025