DRIVE Framework - Five-Step Behavioral Strategy Playbook
Last updated: 2025-08-20
The DRIVE framework, created by Jason Hreha, provides organizations with a structured, evidence-based method to embed behavioral science into strategic planning. By clearly defining problems, behaviors, and solutions upfront, teams improve strategic outcomes and sustainably influence user behaviors.¹
DRIVE Framework Logic Flow
# Step-by-Step Reasoning Through DRIVE
drive_reasoning_chain:
  Define:
    question: "What problem are users actively trying to solve?"
    inputs: ["user_interviews", "market_research", "behavioral_data"]
    validation: "Problem Market Fit achieved when PMF score ≥ 0.75 (see Fit Scorecards)"
    output: "validated_problem_statement"
    next_if_valid: "Research"
  Research:
    question: "What behaviors would solve this problem?"
    inputs: ["validated_problem", "user_observations", "contextual_inquiry"]
    validation: "Behavior Market Fit achieved when BMF_min ≥ 6 and BMF_avg ≥ 7 (n ≥ 15 observed)"
    output: "prioritized_behavior_list"
    next_if_valid: "Integrate"
  Integrate:
    question: "How do we enable these behaviors in our solution?"
    inputs: ["validated_behaviors", "design_constraints", "user_feedback"]
    validation: "Solution Market Fit achieved when SMF ≥ 0.70 in a 30‑user prototype test"
    output: "behavior_enabling_solution"
    next_if_valid: "Verify"
  Verify:
    question: "Are the behaviors happening and solving the problem?"
    inputs: ["behavioral_kpis", "outcome_metrics", "user_analytics"]
    validation: "Targets met: bPMF ≥ 0.70 in 2+ consecutive cohorts"
    output: "performance_data"
    next_if_valid: "Enhance"
  Enhance:
    question: "How do we sustain and improve the behaviors over time?"
    inputs: ["performance_data", "user_feedback", "segment_analysis"]
    validation: "Targets sustained or improved in subsequent cohorts"
    output: "optimized_solution"
    next_if_valid: "Verify"  # continuous loop
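The chain is a gated sequence: a phase's validation must pass before the next phase begins, and Enhance continues as an ongoing loop. A minimal sketch of that gating logic (the phase_results input shape and the helper name are assumptions):

# Example: walking the DRIVE chain as a gated sequence (illustrative sketch)
PHASES = ["Define", "Research", "Integrate", "Verify", "Enhance"]

def next_phase(phase_results):
    """Return the first phase whose validation gate has not yet passed.

    phase_results maps phase name -> bool (validation passed).
    Enhance never gates: once all earlier phases pass, work continues there.
    """
    for phase in PHASES[:-1]:
        if not phase_results.get(phase, False):
            return phase
    return "Enhance"

print(next_phase({"Define": True, "Research": True}))  # -> "Integrate"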
Fit Scorecards: standardized behavioral scorecards used across this methodology. Glossary: PMF, BMF, SMF, bPMF, BSM.
PMF = 0.45*(% actively seeking in last 90 days)
+ 0.25*(avg severity / 10)
+ 0.20*(% willingness to pay at price anchor)
+ 0.10*(external demand index)
Target: PMF ≥ 0.75
External demand index: normalized blend of search volume trend, forum velocity, competitor growth.
BMF_min = percentile_70(min of 8 BSM components)
BMF_avg = average of 8 BSM components across users
Target: BMF_min ≥ 6 and BMF_avg ≥ 7 with n ≥ 15 observed users
SMF = 0.40*Behavior Completion Rate
+ 0.20*Time to First Behavior (inverted, normalized)
+ 0.20*Repeat Within 7 Days
+ 0.20*Drop‑off at weakest step (inverted)
Target: SMF ≥ 0.70 in a 30‑user prototype test
bPMF = 0.5*30‑day Behavior Retention
+ 0.3*Behavior Frequency vs. target
+ 0.2*Quality/Accuracy of behavior
Target: bPMF ≥ 0.70 in 2+ consecutive cohorts
Note: Problem Market Fit is sometimes called Goal Market Fit. We use PMF for consistency.
Definition. bPMF (Behavioral Product-Market Fit) measures whether a defined cohort sustains the validated target behavior within the evaluation window (default 30 days). The scorecard above operationalizes it as a weighted blend of 30‑day behavior retention, frequency versus target, and behavior quality. See Glossary: bPMF.
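The scorecard arithmetic is straightforward once inputs are collected. A minimal sketch, assuming PMF/SMF/bPMF inputs are already normalized to [0, 1] and BSM components are on a 0-10 scale (all function and field names here are illustrative):

# Example: computing the fit scores defined above (illustrative sketch;
# inputs are assumed pre-normalized as described in the scorecards)
from statistics import mean, quantiles

def pmf(seeking_pct, avg_severity, wtp_pct, demand_index):
    return 0.45*seeking_pct + 0.25*(avg_severity / 10) + 0.20*wtp_pct + 0.10*demand_index

def bmf(per_user_bsm):
    """per_user_bsm: one list of 8 BSM components (0-10) per observed user."""
    per_user_min = [min(components) for components in per_user_bsm]
    bmf_min = quantiles(per_user_min, n=10)[6]  # 70th percentile of per-user minima
    bmf_avg = mean(c for components in per_user_bsm for c in components)
    return bmf_min, bmf_avg

def smf(completion, time_to_first_inv, repeat_7d, dropoff_inv):
    # "_inv" inputs are already inverted so that higher is better
    return 0.40*completion + 0.20*time_to_first_inv + 0.20*repeat_7d + 0.20*dropoff_inv

def bpmf(retention_30d, freq_vs_target, quality):
    return 0.5*retention_30d + 0.3*freq_vs_target + 0.2*quality

print(pmf(0.85, 8.0, 0.7, 0.6) >= 0.75)  # Define gate check -> True (score 0.7825)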
¹ DRIVE is a trademark of Jason Hreha; reference with attribution. This playbook is shared for learning; commercial use requires attribution.
Relationship to Core Concepts
# How DRIVE Connects to Behavioral Strategy Concepts
drive_relationships:
  maps_to_process:
    Define: "Phase 1: Strategic Definition & Problem Validation"
    Research: "Phase 2: Behavioral Research & Selection"
    Integrate: "Phase 3: Solution Design & Integration"
    Verify: "Phase 4: Implementation & Verification"
    Enhance: "Phase 5: Enhancement & Optimization"
  implements_fits:
    Define:
      achieves: "Problem Market Fit"
      validates: "Users actively seek solution to problem"
    Research:
      achieves: "Behavior Market Fit"
      validates: "Users can and will perform behaviors"
    Integrate:
      achieves: "Solution Market Fit"
      validates: "Solution enables target behaviors"
    Verify:
      measures: "Behavioral KPIs"
      tracks_toward: "Product Market Fit"
    Enhance:
      maintains: "All four fits"
      optimizes: "Long-term behavioral impact"
  key_differentiators:
    vs_design_thinking: "Validates behaviors before design"
    vs_lean_startup: "Focuses on behavior validation, not just product"
    vs_agile: "Behavioral outcomes drive iterations"
The Five-Step DRIVE Framework
1. Define
Clearly identify strategic goals, target users, and core problems to ensure Problem Market Fit.
Example (consumer): Instagram defined its strategic goal (boosting engagement), then identified user behaviors (photo-sharing) aligned with that goal.
Example (enterprise): Claims operations targets “Submit complete documentation within 24 hours of request” to reduce cycle time and rework.
Define Phase Checklist
define_phase_checklist:
  - "Clear, measurable strategic objectives defined"
  - "Target user segments identified and validated"
  - "User problem clearly articulated"
  - "Evidence of problem-seeking behavior documented"
  - "Problem Market Fit score calculated (PMF ≥ 0.75)"
  - "Success metrics defined in behavioral terms"
2. Research
Conduct rigorous behavioral research to validate and prioritize behaviors, ensuring strong Behavior Market Fit.
Example (consumer): Instagram’s research showed users strongly preferred photo-sharing over location check-ins, guiding their app redesign.
Example (enterprise): In fintech onboarding, observation reveals KYC document collection as the weakest step; target behavior becomes “Upload both ID and proof-of-address in one session.”
Research Phase Evaluation Rubric
research_evaluation:
  behavioral_criteria:
    impact_on_problem:
      weight: 30%
      high: "Directly solves core problem"
      medium: "Partially addresses problem"
      low: "Indirect problem connection"
    user_capability:
      weight: 25%
      high: "Users can perform with current abilities"
      medium: "Requires minor skill development"
      low: "Requires significant behavior change"
    motivation_alignment:
      weight: 25%
      high: "Intrinsically motivated to perform"
      medium: "Extrinsically motivated"
      low: "Requires constant prompting"
    measurement_feasibility:
      weight: 20%
      high: "Direct behavioral metrics available"
      medium: "Proxy metrics required"
      low: "Difficult to measure"
3. Integrate
Embed prioritized behaviors into defined solutions (products, services, or interventions) to achieve Solution Market Fit.
Example (consumer): Instagram integrated intuitive photo-sharing features directly into their core app experience.
Example (enterprise): Claims portal adds “document checklist + single-upload” component mapped to target behavior, with inline validation to prevent missing items.
Computational Example: Behavior-to-Feature Mapping
# Example: Mapping validated behaviors to solution features
class BehaviorFeatureMapper:
    def __init__(self):
        self.mappings = []

    def add_mapping(self, behavior, feature, enablement_mechanism):
        """Map a validated behavior to a solution feature.

        Args:
            behavior: Target behavior description
            feature: Solution feature that enables behavior
            enablement_mechanism: How feature enables behavior
        """
        mapping = {
            'behavior': behavior,
            'feature': feature,
            'mechanism': enablement_mechanism,
            'friction_score': self._calculate_friction(feature),
            'alignment_score': self._calculate_alignment(behavior, feature)
        }
        self.mappings.append(mapping)

    def _calculate_friction(self, feature):
        # Placeholder: replace with real scoring.
        # Example: combine normalized steps, cognitive load, time, and skill.
        return 0.3

    def _calculate_alignment(self, behavior, feature):
        # Placeholder: replace with real scoring.
        return 0.85

    def validate_coverage(self, total_target_behaviors):
        """Ensure all target behaviors have enabling features."""
        behaviors_covered = {m['behavior'] for m in self.mappings}
        features_used = {m['feature'] for m in self.mappings}
        n = max(1, len(self.mappings))  # guard against an empty mapper
        return {
            'behavior_coverage': len(behaviors_covered) / max(1, total_target_behaviors),  # target 100%
            'feature_efficiency': total_target_behaviors / max(1, len(features_used)),     # optimize, not a gate
            'avg_friction': sum(m['friction_score'] for m in self.mappings) / n,
            'avg_alignment': sum(m['alignment_score'] for m in self.mappings) / n
        }
# Usage example
mapper = BehaviorFeatureMapper()

# Map Instagram's behaviors to features
mapper.add_mapping(
    behavior="Share photos with friends",
    feature="One-tap photo capture and filters",
    enablement_mechanism="Reduces friction from capture to share"
)
mapper.add_mapping(
    behavior="Discover interesting content",
    feature="Algorithmic feed based on engagement",
    enablement_mechanism="Surfaces relevant content without search"
)

validation = mapper.validate_coverage(total_target_behaviors=2)
print(f"Behavior coverage: {validation['behavior_coverage']:.0%}")
print(f"Feature efficiency: {validation['feature_efficiency']:.2f}")
print(f"Average friction score: {validation['avg_friction']:.2f}")
print(f"Average alignment: {validation['avg_alignment']:.2%}")
4. Verify
Systematically track outcomes against explicit behavioral KPIs to confirm effectiveness and sustainability.
Example (healthcare): Track vaccination completion and repeat adherence by cohort.
Example (enterprise): Claims program monitors “complete within 24 hours” behavior completion rate and handoff errors per claim.
Common Verification Mistakes
verification_mistakes:
  mistake_1:
    name: "Vanity Metrics Focus"
    symptoms:
      - "Tracking downloads instead of usage behaviors"
      - "Measuring satisfaction instead of behavior change"
      - "Celebrating signups without activation"
    diagnosis: "Metrics don't reflect actual behavior performance"
    fix:
      immediate: "Define behavioral KPIs before launch"
      systematic: "Create behavior-to-metric mapping"
      example: "Track 'photos shared per week' not 'app opens'"
  mistake_2:
    name: "Delayed Measurement"
    symptoms:
      - "Waiting months before checking behavioral data"
      - "No early warning system for behavior failure"
      - "Surprises in user behavior after launch"
    diagnosis: "Verification happens too late to course-correct"
    fix:
      immediate: "Implement daily behavioral tracking"
      systematic: "Create behavioral health dashboard"
      example: "Monitor behavior completion rates from day 1"
  mistake_3:
    name: "Aggregate Blindness"
    symptoms:
      - "Overall metrics look good but segments failing"
      - "Power users mask broader adoption issues"
      - "Average hides behavioral distribution problems"
    diagnosis: "Aggregate metrics hide segment-specific failures"
    fix:
      immediate: "Segment all behavioral metrics"
      systematic: "Track behavior distribution curves"
      example: "Monitor adoption by user cohort and context"
5. Enhance
Refine and optimize solutions systematically based on ongoing data, ensuring long-term impact and scalability.
Example (public sector): Benefits application program iterates reminders and assistance modules to sustain on-time renewals.
Enhancement Decision Tree
enhancement_decision_tree:
  start: "Analyze current behavioral performance"
  performance_check:
    if_below_target:
      diagnose:
        - "Which behaviors are underperforming?"
        - "What barriers exist?"
        - "Which user segments struggle?"
      actions:
        reduce_friction: "Simplify behavioral path"
        increase_motivation: "Enhance value proposition"
        improve_ability: "Add training or scaffolding"
    if_at_target:
      optimize:
        - "Which behaviors drive most value?"
        - "Where can we increase frequency?"
        - "How to expand to new segments?"
      actions:
        scale_success: "Amplify working elements"
        expand_reach: "Target adjacent segments"
        deepen_engagement: "Increase behavior quality"
    if_above_target:
      sustain:
        - "What maintains current performance?"
        - "Which factors risk regression?"
        - "How to prevent behavior decay?"
      actions:
        reinforce_habits: "Strengthen behavior loops"
        monitor_threats: "Track competitive changes"
        innovate_ahead: "Introduce next-gen behaviors"
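The tree's branching reduces to a simple threshold check. A minimal sketch, assuming performance is expressed as a ratio of actual to target (the 1.1 cutoff for "above target" is an assumption):

# Example: enhancement decision tree as code (illustrative sketch)
def enhancement_actions(performance_ratio):
    """performance_ratio: actual behavioral performance / target."""
    if performance_ratio < 1.0:   # below target: diagnose and remove barriers
        return ["reduce_friction", "increase_motivation", "improve_ability"]
    if performance_ratio < 1.1:   # at target: optimize and expand (cutoff assumed)
        return ["scale_success", "expand_reach", "deepen_engagement"]
    return ["reinforce_habits", "monitor_threats", "innovate_ahead"]  # above target

print(enhancement_actions(0.85))  # -> ['reduce_friction', 'increase_motivation', 'improve_ability']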
Executive Heuristic: Define goals clearly, research rigorously, integrate behaviors strategically, verify outcomes consistently, and enhance solutions continuously. DRIVE outcomes, don’t guess them.
Q&A: Implementing DRIVE Successfully
Q: How much research is enough in the Research phase?
A: Research sufficiency is reached when:
- You can predict user behavior with >80% accuracy
- No new behavioral insights emerge from additional research
- You have validated behaviors with at least 15-20 target users
- Behavioral patterns are consistent across user segments
Typical research phase: 2-4 weeks for focused initiatives, 4-8 weeks for complex products.
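These criteria can be checked as an explicit gate. A minimal sketch, assuming each criterion is tracked as a field (all field names here are illustrative):

# Example: research sufficiency gate (illustrative sketch; field names assumed)
def research_sufficient(state):
    return (
        state.get('prediction_accuracy', 0) > 0.80
        and state.get('new_insights_last_round', 1) == 0
        and state.get('users_validated', 0) >= 15
        and state.get('patterns_consistent_across_segments', False)
    )

print(research_sufficient({
    'prediction_accuracy': 0.84,
    'new_insights_last_round': 0,
    'users_validated': 17,
    'patterns_consistent_across_segments': True,
}))  # -> True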
Q: What if our integrated solution doesn’t enable the validated behaviors?
A: This signals a Solution Market Fit failure. Common causes:
- Over-engineering: Solution is too complex for the behavior
- Under-designing: Key behavioral triggers are missing
- Context mismatch: Solution doesn’t fit user environment
Return to the Integrate phase and simplify ruthlessly around core behaviors.
Q: How do we balance multiple stakeholder demands during Define?
A: Use the behavioral lens as your North Star:
- Frame all goals in terms of user behavior change
- Show how different stakeholder goals connect to behaviors
- Use Problem Market Fit validation to settle debates objectively
- Document trade-offs in behavioral impact terms
Example: “Feature A enables behavior X for 70% of users, Feature B enables behavior Y for 30% of users.”
Q: Can DRIVE work for internal organizational changes?
A: Yes, DRIVE excels at organizational behavior change:
- Define: What employee behaviors need to change?
- Research: How do employees currently work? What motivates them?
- Integrate: Design processes/tools that enable new behaviors
- Verify: Track adoption of new work behaviors
- Enhance: Iterate based on employee behavioral data
Q: What’s the minimum viable DRIVE implementation?
A: For rapid validation:
- Define (1 week): 10 problem interviews, clear problem statement
- Research (1 week): Observe 10 users, identify 3 target behaviors
- Integrate (2 weeks): Paper prototype enabling key behavior
- Verify (1 week): Test with 20 users, measure behavior completion
- Enhance (ongoing): Weekly iterations based on data
Total: 5 weeks to first behavioral validation.
DRIVE Implementation Rubric
drive_maturity_model:
  novice:
    characteristics:
      - "Following DRIVE steps sequentially"
      - "Basic behavior identification"
      - "Simple metrics tracking"
    next_level: "Deepen research methods"
  intermediate:
    characteristics:
      - "Rich behavioral research"
      - "Clear behavior-to-outcome mapping"
      - "Regular iteration cycles"
    next_level: "Increase validation rigor"
  advanced:
    characteristics:
      - "Predictive behavior modeling"
      - "Multivariate testing"
      - "Behavioral ecosystem thinking"
    next_level: "Scale across organization"
  expert:
    characteristics:
      - "DRIVE embedded in culture"
      - "Behavioral strategy drives all decisions"
      - "Continuous innovation in methods"
    hallmark: "Behaviors predicted and achieved consistently"
Computational Tools for DRIVE
# DRIVE Framework Validator
class DRIVEValidator:
    """Validates progress through DRIVE phases."""

    def __init__(self):
        self.phases = ['Define', 'Research', 'Integrate', 'Verify', 'Enhance']
        self.validations = {
            'Define': self.validate_define,
            'Research': self.validate_research,
            'Integrate': self.validate_integrate,
            'Verify': self.validate_verify,
            'Enhance': self.validate_enhance
        }

    def validate_define(self, data):
        """Validate Define phase completion."""
        required = ['problem_statement', 'target_users', 'success_metrics']
        problem_validation = data.get('problem_validation', {})
        checks = {
            'has_required_fields': all(field in data for field in required),
            'problem_validated': problem_validation.get('pmf', 0) >= 0.75,
            'metrics_behavioral': 'behavioral_kpis' in data.get('success_metrics', {}),
            'users_defined': len(data.get('target_users', [])) > 0
        }
        score = sum(checks.values()) / len(checks)
        return {
            'phase': 'Define',
            'complete': score >= 0.75,
            'score': score,
            'missing': [k for k, v in checks.items() if not v]
        }

    def validate_research(self, data):
        """Validate Research phase completion."""
        behaviors = data.get('target_behaviors', [])
        bmf_min = data.get('bmf_min', 0)
        bmf_avg = data.get('bmf_avg', 0)
        users_observed = data.get('users_observed', 0)
        checks = {
            'behaviors_identified': len(behaviors) > 0,
            'bmf_thresholds_met': (bmf_min >= 6) and (bmf_avg >= 7),
            'research_methods_used': len(data.get('research_methods', [])) >= 2,
            'user_observations': users_observed >= 15
        }
        score = sum(checks.values()) / len(checks)
        return {
            'phase': 'Research',
            'complete': score >= 0.75,
            'score': score,
            'missing': [k for k, v in checks.items() if not v]
        }

    def validate_integrate(self, data):
        required = ['feature_map', 'friction_analysis', 'prototype']
        checks = {
            'has_required_fields': all(field in data for field in required),
            'behavior_coverage_100': data.get('behaviors_with_enabling_feature', 0) >= data.get('target_behaviors_count', 0),
            'feature_efficiency_ok': True,  # efficiency is optimized, not gated (see mapper above)
            'friction_ok': data.get('avg_friction', 1.0) <= 0.4
        }
        score = sum(checks.values()) / len(checks)
        return {
            'phase': 'Integrate',
            'complete': score >= 0.75,
            'score': score,
            'missing': [k for k, v in checks.items() if not v]
        }

    def validate_verify(self, data):
        checks = {
            'has_kpis': 'behavioral_kpis' in data,
            'bpmf_target_met': data.get('bpmf', 0) >= 0.70,
            'cohorts_count': data.get('cohorts', 0) >= 2
        }
        score = sum(checks.values()) / len(checks)
        return {
            'phase': 'Verify',
            'complete': score >= 0.75,
            'score': score,
            'missing': [k for k, v in checks.items() if not v]
        }

    def validate_enhance(self, data):
        # Enhance is continuous; it never gates overall progress.
        return {'phase': 'Enhance', 'complete': True, 'score': 1.0, 'missing': []}

    def validate_progress(self, project_data):
        """Validate overall DRIVE progress."""
        results = {}
        for phase in self.phases:
            if phase in self.validations:
                results[phase] = self.validations[phase](project_data.get(phase, {}))
        phases_counted = [p for p in self.phases if p in results and p != 'Enhance']
        return {
            'overall_progress': sum(results[p]['complete'] for p in phases_counted) / max(1, len(phases_counted)),
            'phase_details': results,
            'next_phase': self._get_next_phase(results),
            'recommendations': self._generate_recommendations(results)
        }

    def _get_next_phase(self, results):
        """Determine next phase to focus on."""
        for phase in [p for p in self.phases if p != 'Enhance']:
            if phase in results and not results[phase]['complete']:
                return phase
        return 'Enhance'  # All phases complete, continue enhancing

    def _generate_recommendations(self, results):
        """Generate specific recommendations based on validation."""
        recommendations = []
        for phase, result in results.items():
            if not result['complete']:
                for missing in result['missing']:
                    recommendations.append({
                        'phase': phase,
                        'issue': missing,
                        'action': f"Complete {missing} in {phase} phase",
                        'priority': 'high' if result['score'] < 0.5 else 'medium'
                    })
        return recommendations
# Usage example
validator = DRIVEValidator()
project_state = {
    'Define': {
        'problem_statement': 'Claims delays from missing documentation',
        'target_users': ['claims processors', 'policyholders'],
        'success_metrics': {'behavioral_kpis': {'complete_within_24h': 0.7}},
        'problem_validation': {'pmf': 0.78}
    },
    'Research': {
        'target_behaviors': [
            {'name': 'upload_all_docs_one_session'}
        ],
        'bmf_min': 6.2,
        'bmf_avg': 7.1,
        'research_methods': ['observation', 'logs'],
        'users_observed': 18
    },
    'Integrate': {
        'feature_map': True,
        'friction_analysis': True,
        'prototype': True,
        'behaviors_with_enabling_feature': 1,
        'target_behaviors_count': 1,
        'avg_friction': 0.35
    },
    'Verify': {
        'behavioral_kpis': {'completion_rate': 0.62},
        'bpmf': 0.72,
        'cohorts': 2
    }
}
validation_report = validator.validate_progress(project_state)
print(f"Overall Progress: {validation_report['overall_progress']:.0%}")
print(f"Next Focus: {validation_report['next_phase']}")
Licensing
Content © Jason Hreha. Text licensed under CC BY-NC-SA 4.0 unless noted. DRIVE is a trademark of Jason Hreha and requires attribution for commercial use.