Behavior Ranking & Selection
TLDR: Not all behaviors are equal. Behavior ranking systematically evaluates potential target behaviors on impact, feasibility, and strategic alignment to identify the highest-leverage behaviors for intervention.
Overview
After identifying potential behaviors through behavioral research, you must select which behaviors to target. This selection process determines the success of your entire behavioral strategy.
Poor behavior selection is the #1 cause of behavioral intervention failure.
Example (public sector): In a SNAP recertification program, candidates included “Start renewal 15 days before deadline,” “Upload required documents in a single session,” and “Attend assistance clinic.” Using the rubric, “Upload in a single session” ranked highest (direct impact on completion; BSM feasibility > 6 across contexts). “Start renewal early” scored lower on feasibility due to environmental constraints. The portfolio focused on enabling the single‑session upload (checklist + one‑upload flow) first.
The Behavior Selection Framework
Core Evaluation Criteria
Every behavior is evaluated on three dimensions:
behavior_score = (impact_score × feasibility_score × alignment_score) ^ (1/3)
Use the BSM minimum‑component rule as a hard constraint: if the minimum BSM component for the target user segment is < 6, cap feasibility at 3 and flag INFEASIBLE.
Guardrail: Do not over‑optimize proxy scores. Audit whether higher ranked behaviors still causally resolve the validated problem.
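A minimal sketch of how the geometric mean and the cap interact; the function name and example scores are illustrative:

```python
def combined_behavior_score(impact, feasibility, alignment, min_bsm_component):
    """Geometric mean of the three dimensions, with feasibility capped
    at 3 whenever the weakest BSM component falls below 6."""
    if min_bsm_component < 6:
        feasibility = min(feasibility, 3)  # hard constraint: flag INFEASIBLE
    return (impact * feasibility * alignment) ** (1 / 3)

# Strong impact and alignment cannot rescue a blocked behavior:
print(round(combined_behavior_score(9, 8, 9, min_bsm_component=4), 2))  # 6.24
print(round(combined_behavior_score(9, 8, 9, min_bsm_component=7), 2))  # 8.65
```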
1. Impact Assessment
Question: If users perform this behavior, how much value is created?
Sub-factors:
```yaml
impact_factors:
  problem_resolution:
    weight: 0.4
    question: "How completely does this behavior solve the validated problem?"
    scoring:
      0-3: "Minimal problem resolution"
      4-6: "Partial problem resolution"
      7-10: "Complete problem resolution"
  value_creation:
    weight: 0.3
    question: "What's the economic/social value per behavior instance?"
    scoring:
      0-3: "Low value (<$10 or minor benefit)"
      4-6: "Moderate value ($10-100 or significant benefit)"
      7-10: "High value (>$100 or transformative benefit)"
  network_effects:
    weight: 0.2
    question: "Does this behavior influence others to act?"
    scoring:
      0-3: "Individual only"
      4-6: "Influences 1-2 others"
      7-10: "Influences many others"
  sustainability:
    weight: 0.1
    question: "Does impact persist after behavior stops?"
    scoring:
      0-3: "Impact ends immediately"
      4-6: "Impact lasts days/weeks"
      7-10: "Impact lasts months/years"
```
2. Feasibility Analysis
Question: Can target users realistically perform this behavior?
BSM-Based Assessment:
```python
def assess_feasibility(behavior, user_segment):
    """
    Evaluate feasibility using the Behavioral State Model (BSM)
    """
    bsm_scores = {
        'ability_match': assess_ability_requirements(behavior, user_segment),
        'motivation_fit': assess_motivation_alignment(behavior, user_segment),
        'environmental_support': assess_context_compatibility(behavior, user_segment),
        'perception_alignment': assess_belief_compatibility(behavior, user_segment)
    }

    # Minimum component rule
    min_score = min(bsm_scores.values())
    if min_score < 6:
        return {
            'feasibility_score': 3,  # cap when below feasibility floor
            'verdict': 'INFEASIBLE',
            'blocker': min(bsm_scores, key=bsm_scores.get),
            'recommendation': 'Address blocker or choose different behavior'
        }

    # Weighted average for feasible behaviors
    weights = {
        'ability_match': 0.35,
        'motivation_fit': 0.25,
        'environmental_support': 0.25,
        'perception_alignment': 0.15
    }
    feasibility = sum(bsm_scores[k] * weights[k] for k in bsm_scores)

    return {
        'feasibility_score': feasibility,
        'verdict': 'FEASIBLE' if feasibility > 6 else 'CHALLENGING',
        'improvement_areas': [k for k, v in bsm_scores.items() if v < 7]
    }
```
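Hypothetical usage: the four assess_* helpers are assumed to return 0-10 scores for the segment, so they are stubbed with fixed values here purely for illustration:

```python
# Stubs standing in for real segment research (values are assumptions).
def assess_ability_requirements(behavior, segment): return 8
def assess_motivation_alignment(behavior, segment): return 7
def assess_context_compatibility(behavior, segment): return 6
def assess_belief_compatibility(behavior, segment): return 7

result = assess_feasibility({'name': 'daily_tracking'}, 'new_users')
print(result['verdict'])            # FEASIBLE (weighted average 7.1 > 6)
print(result['improvement_areas'])  # ['environmental_support']
```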
3. Strategic Alignment
Question: Does this behavior advance our strategic objectives?
Alignment Matrix:
| Strategic Objective | Behavior Contribution | Score |
|-------------------|---------------------|--------|
| User Acquisition | New users attracted by behavior | 0-10 |
| User Retention | Behavior creates stickiness | 0-10 |
| Revenue Generation | Direct monetization potential | 0-10 |
| Brand Building | Behavior enhances brand | 0-10 |
| Competitive Advantage | Unique/defensible behavior | 0-10 |
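One way to collapse the matrix into a single 0-10 alignment score is a weighted sum across objectives. The sketch below reuses the ranker's default weights from Step 2, which cover four of the five objectives; all contribution scores are illustrative:

```python
STRATEGIC_WEIGHTS = {'acquisition': 0.3, 'retention': 0.4,
                     'revenue': 0.2, 'brand': 0.1}

def alignment_score(contributions, weights=STRATEGIC_WEIGHTS):
    """Weighted sum of per-objective contribution scores (each 0-10)."""
    return sum(weights[k] * contributions[k] for k in weights)

print(round(alignment_score({'acquisition': 5, 'retention': 8,
                             'revenue': 4, 'brand': 6}), 1))  # 6.1
```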
The Ranking Process
Step 1: Behavior Inventory
Create comprehensive list of candidate behaviors:
```yaml
behavior_inventory:
  - behavior_1:
      name: "Daily progress tracking"
      description: "User logs progress once per day"
      current_adoption: "12% do this naturally"
      required_effort: "2 minutes/day"
  - behavior_2:
      name: "Weekly planning session"
      description: "User plans upcoming week"
      current_adoption: "5% do this naturally"
      required_effort: "30 minutes/week"
  - behavior_3:
      name: "Share achievement"
      description: "User shares success with network"
      current_adoption: "22% do this naturally"
      required_effort: "1 minute per achievement"
```
Step 2: Multi-Criteria Scoring
Score each behavior systematically:
```python
class BehaviorRanker:
    def __init__(self, strategic_weights=None):
        self.strategic_weights = strategic_weights or {
            'acquisition': 0.3,
            'retention': 0.4,
            'revenue': 0.2,
            'brand': 0.1
        }

    def rank_behaviors(self, behaviors, user_segment):
        """
        Rank behaviors by combined score
        """
        scored_behaviors = []
        for behavior in behaviors:
            # Calculate three core dimensions
            impact = self.calculate_impact(behavior)
            feasibility = self.calculate_feasibility(behavior, user_segment)
            alignment = self.calculate_alignment(behavior, self.strategic_weights)

            # Combined score (geometric mean)
            combined_score = (impact * feasibility * alignment) ** (1/3)

            # Confidence based on data quality
            confidence = self.assess_confidence(behavior)

            scored_behaviors.append({
                'behavior': behavior,
                'scores': {
                    'impact': impact,
                    'feasibility': feasibility,
                    'alignment': alignment,
                    'combined': combined_score
                },
                'confidence': confidence,
                'rank': None  # Set after sorting
            })

        # Sort by combined score
        scored_behaviors.sort(key=lambda x: x['scores']['combined'], reverse=True)

        # Assign ranks
        for i, sb in enumerate(scored_behaviors):
            sb['rank'] = i + 1

        return scored_behaviors
```
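A runnable usage sketch: the scoring methods are left abstract above, so this toy subclass wires them to pre-scored rubric values (the candidates and their scores are illustrative):

```python
class ToyRanker(BehaviorRanker):
    """Toy subclass: real implementations would apply the rubrics above."""
    def calculate_impact(self, b):
        return b['impact']
    def calculate_feasibility(self, b, segment):
        return b['feasibility']
    def calculate_alignment(self, b, weights):
        return b['alignment']
    def assess_confidence(self, b):
        return b.get('confidence', 'medium')

candidates = [
    {'name': 'Daily progress tracking', 'impact': 7, 'feasibility': 9, 'alignment': 8},
    {'name': 'Weekly planning session', 'impact': 9, 'feasibility': 5, 'alignment': 7},
    {'name': 'Share achievement', 'impact': 6, 'feasibility': 8, 'alignment': 6},
]
for sb in ToyRanker().rank_behaviors(candidates, user_segment='new_users'):
    print(sb['rank'], sb['behavior']['name'], round(sb['scores']['combined'], 2))
# 1 Daily progress tracking 7.96
# 2 Weekly planning session 6.8
# 3 Share achievement 6.6
```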
Legacy Criteria Crosswalk (Overview Article)
For teams familiar with the original article rubric, the legacy criteria map into the framework above as follows:
| Legacy criterion | Where it maps now |
|---|---|
| Compelling (exciting) | Motivation fit (Feasibility) and early value (Impact/value_creation) |
| Reasonable (not strange) | Perception alignment (Feasibility) and social acceptability |
| Socially acceptable | Social environment (Feasibility) and brand/competitive alignment (Strategic) |
| Physically simple | Ability match (Feasibility) and TTFB friction (Impact via completion) |
| Cognitively simple | Ability/perception alignment (Feasibility) and path complexity (TTFB) |
| Expensive (reverse) | Environmental support and value/economics (Impact + Strategic) |
| Rewarding | Early value and reinforcement (Impact/value_creation, retention potential) |
| Useful (solves problem) | Problem resolution (Impact) |
| Impactful | Aggregate Impact dimension |
You can keep using the legacy checklist as a sanity check; the current framework simply collapses it into Impact, Feasibility (BSM‑based), and Strategic Alignment for consistency and scoring reliability.
Step 3: Sensitivity Analysis
Test how robust rankings are:
```python
# Runnable example
import copy
from collections import defaultdict

import numpy as np

rng = np.random.default_rng(42)

def add_measurement_noise(behaviors, std=0.5):
    noisy = copy.deepcopy(behaviors)
    for b in noisy:
        for k in ['impact', 'feasibility', 'alignment']:
            b[k] = max(0, min(10, b[k] + rng.normal(0, std)))
    return noisy

def rank_behaviors(behaviors):
    out = []
    for b in behaviors:
        combined = (b['impact'] * b['feasibility'] * b['alignment']) ** (1/3)
        out.append({**b, 'combined': combined})
    out.sort(key=lambda x: x['combined'], reverse=True)
    for i, b in enumerate(out):
        b['rank'] = i + 1
    return out

def sensitivity_analysis(behaviors, variations=100):
    """
    Monte Carlo simulation of ranking stability
    """
    rank_distributions = defaultdict(list)
    for _ in range(variations):
        noisy_behaviors = add_measurement_noise(behaviors, std=0.5)
        rankings = rank_behaviors(noisy_behaviors)
        for behavior in rankings:
            rank_distributions[behavior['name']].append(behavior['rank'])

    stability_report = {}
    for behavior, ranks in rank_distributions.items():
        stability_report[behavior] = {
            'mean_rank': float(np.mean(ranks)),
            'rank_std': float(np.std(ranks)),
            'rank_range': (min(ranks), max(ranks)),
            'top_3_probability': sum(r <= 3 for r in ranks) / len(ranks)
        }
    return stability_report
```
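To exercise it, feed in the Step 1 inventory with illustrative rubric scores. With only three candidates every behavior lands in the top 3, so rank_std is the more telling stability signal here:

```python
# Illustrative run over the Step 1 inventory; rubric scores are assumed.
candidates = [
    {'name': 'Daily progress tracking', 'impact': 7, 'feasibility': 9, 'alignment': 8},
    {'name': 'Weekly planning session', 'impact': 9, 'feasibility': 5, 'alignment': 7},
    {'name': 'Share achievement', 'impact': 6, 'feasibility': 8, 'alignment': 6},
]
report = sensitivity_analysis(candidates, variations=500)
for name, stats in report.items():
    print(f"{name}: mean rank {stats['mean_rank']:.2f}, "
          f"rank std {stats['rank_std']:.2f}, range {stats['rank_range']}")
```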
Step 4: Behavioral Dependencies
Consider behavior chains and prerequisites:
graph TD
A[Account Creation] -->|Enables| B[Profile Completion]
B -->|Enables| C[First Post]
C -->|Enables| D[Community Engagement]
D -->|Enables| E[Habit Formation]
A -.->|Also Enables| C
B -.->|Influences| D
Dependency Analysis:
```python
def analyze_dependencies(behaviors):
    """
    Identify behavioral prerequisites and sequences
    """
    dependency_graph = {}
    for behavior in behaviors:
        dependencies = {
            'hard_prerequisites': [],  # Must happen first
            'soft_prerequisites': [],  # Helpful but not required
            'enables': [],             # This behavior enables others
            'reinforces': []           # Mutual reinforcement
        }
        # Example logic
        if behavior['name'] == 'daily_tracking':
            dependencies['hard_prerequisites'] = ['account_setup', 'initial_goal']
            dependencies['enables'] = ['weekly_review', 'streak_building']
            dependencies['reinforces'] = ['motivation_maintenance']
        dependency_graph[behavior['name']] = dependencies

    # optimize_behavior_sequence orders behaviors so that prerequisites
    # come first; one possible implementation is sketched below.
    return optimize_behavior_sequence(dependency_graph)
```
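optimize_behavior_sequence is referenced above but not defined; a minimal sketch that orders behaviors so hard prerequisites come first (Kahn's topological sort) might look like this:

```python
from collections import deque

def optimize_behavior_sequence(dependency_graph):
    """Order behaviors so hard prerequisites precede dependents.
    A minimal sketch; only edges between behaviors in the graph count."""
    indegree = {name: 0 for name in dependency_graph}
    for name, deps in dependency_graph.items():
        for prereq in deps['hard_prerequisites']:
            if prereq in indegree:  # ignore prerequisites outside the graph
                indegree[name] += 1

    queue = deque(name for name, d in indegree.items() if d == 0)
    order = []
    while queue:
        current = queue.popleft()
        order.append(current)
        for other, deps in dependency_graph.items():
            if current in deps['hard_prerequisites']:
                indegree[other] -= 1
                if indegree[other] == 0:
                    queue.append(other)

    if len(order) != len(dependency_graph):
        raise ValueError("Cyclic hard prerequisites detected")
    return order
```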
Selection Decision Matrix
The 2x2 Prioritization Grid
```
High Impact ┃ Big Bets          │ Strategic Priorities
            ┃ (Do second)       │ (Do first)
            ┃                   │
            ┣━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━
            ┃ Questionable      │ Stepping Stones
Low Impact  ┃ (Usually skip)    │ (Do if enables priority)
            ┃                   │
            ┗━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━
              Low Feasibility     High Feasibility
```
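A small helper makes the grid operational; the cut point of 6 (out of 10) is illustrative, not canonical:

```python
def prioritization_quadrant(impact, feasibility, threshold=6):
    """Place a scored behavior into the 2x2 prioritization grid."""
    if impact >= threshold and feasibility >= threshold:
        return 'Strategic Priority (do first)'
    if impact >= threshold:
        return 'Big Bet (do second)'
    if feasibility >= threshold:
        return 'Stepping Stone (do if it enables a priority)'
    return 'Questionable (usually skip)'

print(prioritization_quadrant(9, 5))  # Big Bet (do second)
```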
Selection Rules
- Always Start with One: Focus beats dilution
- High Feasibility First: Build momentum with wins
- Address Dependencies: Enable before requiring
- Test Assumptions: Pilot before full rollout
- Monitor Cannibalization: Don’t compete with yourself
Advanced Selection Techniques
Machine Learning Prediction
```python
import pandas as pd  # required for the example below
from sklearn.ensemble import RandomForestRegressor

def ml_behavior_prediction(historical_data):
    """
    Use ML to predict behavior success
    """
    # Features: behavior characteristics
    features = historical_data[[
        'complexity_score',
        'time_requirement',
        'social_component',
        'immediate_reward',
        'ability_requirement',
        'motivation_type'
    ]]

    # Target: actual adoption rate
    target = historical_data['adoption_success']

    # Train model
    model = RandomForestRegressor(n_estimators=100)
    model.fit(features, target)

    # Feature importance
    importance = pd.DataFrame({
        'feature': features.columns,
        'importance': model.feature_importances_
    }).sort_values('importance', ascending=False)

    return model, importance
```
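Hypothetical usage on synthetic data. motivation_type is assumed to be label-encoded, since scikit-learn regressors require numeric features; the data-generating relationship below is invented purely to make the demo informative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
historical_data = pd.DataFrame({
    'complexity_score': rng.uniform(0, 10, n),
    'time_requirement': rng.uniform(1, 60, n),
    'social_component': rng.integers(0, 2, n),
    'immediate_reward': rng.uniform(0, 10, n),
    'ability_requirement': rng.uniform(0, 10, n),
    'motivation_type': rng.integers(0, 2, n),  # assumed encoding: 0=intrinsic, 1=extrinsic
})
# Synthetic target: easier, more rewarding behaviors get adopted more.
historical_data['adoption_success'] = (
    0.5
    + (historical_data['immediate_reward'] - historical_data['complexity_score']) / 20
    + rng.normal(0, 0.05, n)
).clip(0, 1)

model, importance = ml_behavior_prediction(historical_data)
print(importance.head(3))  # expect immediate_reward and complexity_score on top
```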
Portfolio Optimization
Select behavior portfolio for maximum impact:
```python
from scipy.optimize import minimize

def optimize_behavior_portfolio(behaviors, constraints=None):
    """
    Select optimal mix of behaviors given constraints
    """
    n_behaviors = len(behaviors)

    # Objective: maximize total impact
    def objective(weights):
        total_impact = sum(
            w * b['impact'] * b['feasibility']
            for w, b in zip(weights, behaviors)
        )
        return -total_impact  # Minimize negative

    # Constraints
    cons = [
        {'type': 'eq', 'fun': lambda w: sum(w) - 1},  # Weights sum to 1
        {'type': 'ineq', 'fun': lambda w: w}          # Non-negative (redundant with bounds)
    ]

    # Bounds
    bounds = [(0, 1) for _ in range(n_behaviors)]

    # Initial guess
    initial = [1 / n_behaviors] * n_behaviors

    # Optimize
    result = minimize(objective, initial, method='SLSQP',
                      bounds=bounds, constraints=cons)

    # Return portfolio
    portfolio = [
        {'behavior': b, 'allocation': w}
        for b, w in zip(behaviors, result.x)
        if w > 0.05  # 5% threshold
    ]
    return sorted(portfolio, key=lambda x: x['allocation'], reverse=True)
```
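Hypothetical usage with illustrative scores. Note that because the objective is linear, SLSQP drives all weight to the single behavior with the best impact × feasibility product; spreading allocation across several behaviors requires diversification constraints or a concave utility:

```python
candidates = [
    {'name': 'Daily Check-in', 'impact': 7, 'feasibility': 9},
    {'name': 'Weekly Planning', 'impact': 9, 'feasibility': 5},
    {'name': 'Peer Sharing', 'impact': 6, 'feasibility': 8},
]
for item in optimize_behavior_portfolio(candidates):
    print(item['behavior']['name'], round(item['allocation'], 2))
# Daily Check-in 1.0
```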
Common Selection Mistakes
Mistake 1: Complexity Bias
Wrong: Choose the most sophisticated behavior.
Right: Choose the simplest behavior that solves the problem.
Mistake 2: Ignoring Prerequisites
Wrong: Jump straight to the ideal end-state behavior.
Right: Build stepping stones toward the target behavior.
Mistake 3: Perfect Information Paralysis
Wrong: Wait for complete data before selecting.
Right: Make the best available guess, test quickly, iterate.
Mistake 4: Kitchen Sink Approach
Wrong: Target many behaviors simultaneously.
Right: Master one behavior before adding more.
Behavior Selection Checklist
Before finalizing selection, verify:
- Problem-Behavior Fit: Does this behavior actually solve the validated problem?
- User Capability: Can >50% of target users perform this with minimal training?
- Measurement Plan: Can we reliably measure if behavior occurs?
- Intervention Ideas: Do we have 3+ ways to enable this behavior?
- Failure Recovery: If this behavior fails, what’s Plan B?
- Ethical Screen: Behavior promotes user welfare and avoids coercion
- Competitive Analysis: Are others already “owning” this behavior?
Templates and Tools
Behavior Ranking Spreadsheet Template
| Behavior | Impact (0-10) | Feasibility (0-10) | Alignment (0-10) | Combined Score | Confidence | Rank |
|---|---|---|---|---|---|---|
| Daily Check-in | 7 | 9 | 8 | 8.0 | High | 1 |
| Weekly Planning | 9 | 5 | 7 | 6.8 | Medium | 2 |
| Peer Sharing | 6 | 8 | 6 | 6.6 | High | 3 |
Decision Documentation Template
## Behavior Selection Decision
**Date**: [Date]
**Selected Behavior**: [Name]
**Decision Makers**: [Names]
### Rationale
- Impact Score: X/10 because [reasoning]
- Feasibility Score: Y/10 because [reasoning]
- Alignment Score: Z/10 because [reasoning]
### Alternatives Considered
1. [Behavior 2] - Rejected because [reason]
2. [Behavior 3] - Rejected because [reason]
### Success Criteria
- Adoption target: X% of users within Y days
- Frequency target: Z times per week
- Quality target: [Specific metric]
### Risk Mitigation
- Risk: [Description] → Mitigation: [Plan]
Next Steps
- Apply ranking to your behavioral research findings
- Use selection to guide solution design
- Track selected behaviors with measurement frameworks
- Review case studies of successful behavior selection
Licensing
Content © Jason Hreha. Text licensed under CC BY-NC-SA 4.0 unless noted. DRIVE is a trademark of Jason Hreha and requires attribution for commercial use.